A ramp-up circuit on an integrated circuit receives a relatively high program (erase) voltage for changing the program state of a memory cell. The ramp-up circuit gradually raises the program (erase) voltage to prevent damage to the memory cell. The ramp-up circuit includes a pass gate and associated control circuitry that provides a controlled, ramped-up version of the program (erase) voltage to the memory cell without raising internal circuit nodes above the program (erase) voltage.
What is claimed is:

1. A programmable logic device comprising: a. a plurality of programmable memory cells; b. a test pad adapted to receive a program voltage selected to program at least one of the programmable memory cells; and c. a pass gate having: i. a pass-gate input terminal; ii. a pass-gate output terminal; iii. an N-type transistor having a first current-handling terminal connected to the test pad, a second current-handling terminal connected to the pass-gate output terminal, and a first control terminal; and iv. a P-type transistor having a third current-handling terminal connected to the test pad, a fourth current-handling terminal connected to the pass-gate output terminal, and a second control terminal.

2. The programmable logic device of claim 1, further comprising a voltage ramp-up circuit having a ramp-up-circuit output terminal adapted to ramp up the voltage on the first control terminal.

3. The programmable logic device of claim 2, further comprising a second voltage ramp-up circuit having a second ramp-up-circuit output terminal adapted to ramp up the voltage on the second control terminal.

4. The programmable logic device of claim 3, the second voltage ramp-up circuit further comprising an input terminal connected to the first-mentioned ramp-up-circuit output terminal.

5. The programmable logic device of claim 3, wherein the ramp-up of the voltage on the first control terminal occurs before the ramp-up of the voltage on the second control terminal.

6. The programmable logic device of claim 1, wherein the transistors are MOS transistors.

7. The programmable logic device of claim 1, further comprising steering logic having a steering-logic input terminal connected to the pass-gate output terminal, wherein the steering logic is adapted to selectively provide the program voltage to the plurality of memory cells.

8. An integrated circuit comprising: a. a plurality of electronically programmable memory cells; b. a device pad adapted to receive a programming voltage, the programming voltage of a voltage level sufficient to program the memory cells; c. steering logic connected to the memory cells and having an input terminal adapted to receive the programming voltage, the steering logic being adapted to provide the programming voltage to selected ones of the memory cells; d. a first transistor having a first p-type current-handling terminal connected to the device pad, a second p-type current-handling terminal connected to the steering-logic input terminal, and a first control terminal; and e. a second transistor having a first n-type current-handling terminal connected to the device pad, a second n-type current-handling terminal connected to the steering-logic input terminal, and a second control terminal.

9. The integrated circuit of claim 8, wherein the first and second transistors are MOS transistors.

10. The integrated circuit of claim 8, further comprising a voltage ramp-up circuit having a ramp-up-circuit output terminal adapted to provide a first ramped-up signal to the first control terminal.

11. The integrated circuit of claim 10, wherein the ramp-up circuit is adjustable to alter the ramp-up time of the ramped-up signal.

12. The integrated circuit of claim 11, wherein the ramp-up circuit further comprises a clock input terminal adapted to receive a ramp-up clock signal, and wherein the ramp-up time is proportional to the period of the ramp-up clock signal.
13. The integrated circuit of claim 10, further comprising a second voltage ramp-up circuit having a second ramp-up-circuit output terminal adapted to provide a second ramped-up signal to the second control terminal.

14. The integrated circuit of claim 13, wherein the second voltage ramp-up circuit further comprises a second clock input terminal adapted to receive a second ramp-up clock signal, and wherein the ramp-up time of the second ramped-up signal is proportional to the period of the second ramp-up clock signal.

15. An integrated circuit comprising: a. electronically programmable memory cells; b. a circuit node adapted to receive a programming signal of a sufficient voltage to program the memory cells; c. a ramp-up circuit having a ramp-up-circuit input node connected to the circuit node and a ramp-up-circuit output node, the ramp-up circuit adapted to produce a ramped-up version of the programming signal on the ramp-up-circuit output node; and d. steering logic having a steering-logic input terminal connected to the ramp-up-circuit output node and a steering-logic output node connected to the programmable memory cells, wherein the steering logic is adapted to selectively provide the ramped-up version of the programming signal to ones of the memory cells; e. wherein the ramp-up circuit comprises a plurality of ramp-up-circuit nodes, and wherein each node remains at or below the voltage sufficient to program the memory cells.

16. An integrated circuit comprising: a. electronically programmable memory cells; b. a circuit node adapted to receive a programming signal of a sufficient voltage to program the memory cells; c. a ramp-up circuit having a ramp-up-circuit input node connected to the circuit node and a ramp-up-circuit output node, the ramp-up circuit adapted to produce a ramped-up version of the programming signal on the ramp-up-circuit output node; d. steering logic having a steering-logic input terminal connected to the ramp-up-circuit output node and a steering-logic output node connected to the programmable memory cells, wherein the steering logic is adapted to selectively provide the ramped-up version of the programming signal to ones of the memory cells; and e. a pass gate having a pass-gate input terminal connected to the circuit node, a pass-gate output terminal connected to the ramp-up-circuit output node, and first and second pass-gate control terminals.

17. The integrated circuit of claim 16, the ramp-up circuit further comprising: a. a first ramp-up sub-circuit having an input terminal connected to the circuit node and an output terminal connected to the first pass-gate control terminal; and b. a second ramp-up sub-circuit having an input terminal connected to the circuit node and an output terminal connected to the second pass-gate control terminal.

18. The integrated circuit of claim 15, wherein the circuit node comprises a pad.

19. A programmable logic device comprising: a. a plurality of circuit nodes; b. a plurality of programmable memory cells; c. a test pad adapted to receive a program voltage selected to program at least one of the programmable memory cells; and d. means for selectively providing a ramped-up version of the program voltage to the memory cells without raising any of the circuit nodes to voltage levels exceeding the program voltage.
BACKGROUND

Complex programmable logic devices (CPLDs) are well-known integrated circuits that may be programmed to perform various logic functions. Numerous types of memory elements may be used in CPLD architectures to provide programmability. One such memory element, known as a flash memory cell, is both electrically programmable and erasable. Program and erase operations are performed on a plurality of flash memory cells using either Fowler-Nordheim tunneling or hot-electron injection for programming and Fowler-Nordheim tunneling for erasing. Flash memories can also be in-system programmable (ISP). An ISP device can be programmed, erased, and have its program state verified after it has been connected, such as by soldering, to a system printed circuit board. Some CPLDs do not have ISP capability and must be programmed externally (outside the system) by programming equipment.

Continuous advances in integrated-circuit process technology have dramatically reduced device feature size. The reduction in feature size improves device performance while at the same time reducing cost and power consumption. Unfortunately, smaller feature sizes also increase a circuit's vulnerability to over-voltage conditions. Among the more sensitive elements in a modern integrated circuit are the gate-oxide layers of MOS transistors. These layers are very thin in modern devices and are consequently easily ruptured by excessive voltage levels. Modern circuits with small feature sizes therefore employ significantly lower supply voltages than was common only a few years ago. For example, modern 0.18-micron processes employ supply voltages no greater than 2 volts.

The voltages required to program and erase flash memory cells are dictated by physical properties of the materials used to fabricate the memory cells. Unfortunately, these physical properties have not allowed the voltages required to program, erase, and verify the program state of a memory cell to be reduced in proportion to reductions in supply voltages. For example, modern flash memory cells adapted for use with a 0.18-micron process require program and erase voltages as high as 14 volts, a level far exceeding the required 1.8-volt supply level. For a more detailed treatment of program, erase, and verify procedures, see U.S. Pat. No. 5,889,701, which is incorporated herein by reference.

FIG. 1 (prior art) depicts a conventional CPLD 100. The circuitry within CPLD 100 is instantiated on an integrated circuit chip 105, which is later wire-bonded to pins 110 of a device package 115 using a number of bond wires 120. Bond wires 120 connect to respective bond pads 125, some of which extend to respective input/output circuits 130. Input/output circuits 130 convey signals to and from other programmable logic and interconnect resources (not shown). The logic of input/output circuits 130 and these configurable elements is dictated by the program state of a collection of configuration memory cells 135.

Integrated circuits, including CPLDs, undergo substantial test procedures. Among these tests, memory cells are programmed, erased, and their states verified to ensure proper device operation. To accomplish this, a sophisticated test apparatus, or "tester," applies and receives signals via pads on the integrated circuit. These pads might be bond pads, like bond pads 125, or dedicated test pads used only to make contact with the tester.

Chip 105 depicts two test-specific pads 145, sometimes called "octal pads," connected to a ramp-up circuit 150.
A pair of test pins 155 extends from an external tester (not shown) to pads 145 to convey a relatively high programming voltage VPP and a control signal CTRLB to circuit 150. Circuit 150 uses these two external test signals to develop a ramped version VPP_R of the programming voltage VPP for steering logic 160. While VPP is referred to herein as a "programming" voltage, it is to be understood that the voltage applied on terminal VPP might also be used to erase memory cells. Moreover, as with other designations in the present disclosure, VPP refers to both the signal and the corresponding circuit node; whether a given designation refers to a node or a signal will be clear from the context.

Steering logic 160 selectively applies the ramped-up program voltage VPP_R to the bitlines of memory cells within the box labeled memory cells 135. Though shown in FIG. 1 as a discrete area, memory cells 135 are typically distributed throughout chip 105 to control the various programmable resources. A power line 165 conveys a power-supply voltage VDD from one of external supply pins 110 to I/O circuits 130 and the other internal components (not shown).

FIG. 2 (prior art) depicts a more detailed schematic of ramp-up circuit 150 of FIG. 1. Ramp-up circuit 150 receives as input the relatively high programming voltage VPP on octal pad 145. EEPROM cells can be damaged if programming and erase voltages are applied too quickly. Ramp-up circuit 150 is therefore provided to raise the external program or erase voltage on the respective pad 145 gradually to the appropriate voltage level.

Ramp-up circuit 150 includes a clock terminal 200 adapted to receive a clock signal generated either internally or externally to CPLD 100. Control signal CTRLB is shown here associated with an octal pad 145, but the control signal can also be generated internally, or can be received externally via a different type of pad. The last letter of the designation CTRLB indicates that the control signal is active low (i.e., the "B" is for "bar"); this convention is used throughout the present application.

The clock and control signals are fed into a circuit 205 that divides the clock signal into a pair of complementary clocks C1 and C2. These clocks are then each fed via respective capacitors to an output circuit 210. Output circuit 210 receives the externally generated high-voltage signal VPP as an additional input, and also receives the complement CTRL of control signal CTRLB.

When control signal CTRLB is brought low, output circuit 210 ramps up the voltage on the gate of a transistor 215 from zero volts to a level above VPP. The output VPP_R of ramp-up circuit 150 thus gently approaches the requisite program voltage VPP to be directed to the bit line of one or more memory cells. The output VPP_R ramps up over a time RT determined primarily by the clock signal CLK and the values of the capacitors between circuits 205 and 210. The output VPP_R returns to zero when control signal CTRLB is brought high.

The trouble with ramp-up circuit 150 is that the voltage on the gate of transistor 215 must rise above the voltage level VPP. As noted above, modern integrated circuits are becoming ever more sensitive to high voltages, so it is beneficial to keep all voltages presented to CPLD 100 as low as possible.

SUMMARY

The present invention is directed to a ramp-up circuit that receives a relatively high program voltage for changing the program state of a memory cell.
The ramp-up circuit gradually raises the program voltage to provide a ramped-up version of the programming signal to the memory cells. The gradual ramping of the program voltage prevents damage to the memory cells.

The ramp-up circuit includes a pass gate and associated control circuitry that provides a controlled, ramped-up version of the program voltage to the memory cells without raising internal circuit nodes above the program voltage. This aspect of the invention reduces the maximum voltage required on nodes within the circuit, and therefore protects sensitive components from potentially damaging over-voltage conditions.

This summary does not define the scope of the invention, which is instead defined by the appended claims.

BRIEF DESCRIPTION OF THE FIGURES

FIG. 1 (prior art) depicts a conventional CPLD 100.
FIG. 2 (prior art) depicts a more detailed schematic of ramp-up circuit 150 of FIG. 1.
FIG. 3 depicts a ramp-up circuit 300 configured in accordance with the present invention.
FIG. 4 is a detailed schematic of level shifters 305 of FIG. 3.
FIG. 5 details input ramp-up sub-circuit 310 of FIG. 3.
FIG. 6 depicts an embodiment of booster sub-circuit 315 of FIG. 3, another type of ramp-up circuit similar to ramp-up sub-circuit 310 of FIG. 5.
FIG. 7 depicts output stage 320 of FIG. 3.
FIG. 8 is a waveform diagram depicting the operation of ramp-up circuit 300 of FIG. 3, as detailed in FIGS. 4-7.
FIG. 9 depicts an output stage 900 similar to output stage 320 of FIG. 7.
FIG. 10 depicts a circuit 1000 used to generate the signal VBON, which controls transistor 705 in FIG. 9.

DETAILED DESCRIPTION

FIG. 3 is a block-level depiction of a ramp-up circuit 300 configured in accordance with the present invention. Like ramp-up circuit 150 of FIG. 1, circuit 300 receives a programming (erase) voltage level VPP on an octal pad 145 and creates from that voltage a ramped-up program voltage VPP_R. Unlike circuit 150, however, circuit 300 performs this function without raising internal circuit nodes above the voltage level VPP. This and other advantages are discussed below.

Ramp-up circuit 300 includes a bank of level shifters 305, an input ramp-up sub-circuit 310, a booster sub-circuit 315, and an output stage 320. These elements are detailed below in FIGS. 4-8.

FIG. 4 is a detailed schematic of level shifters 305 of FIG. 3. Level shifters 305 receive a control input CTRL and a pair of clock inputs CLK1 and CLK2. These three inputs can be provided externally, developed using CPLD resources, or a combination of the two. The control and clock signals are logic signals that alternate between zero volts and VDD (e.g., approximately 1.8 volts for CPLDs manufactured using a 0.18-micron process). Level shifters 305 alter the logic levels of these signals, shifting the voltage level representative of a logic one to the programming voltage VPP. Level shifters 305 also develop complementary clock signals for both CLK1 and CLK2. The designations of the level-shifted signals are terminated with the letter "S" to indicate that these signals are sourced from level-shift circuit 305. The signal VPP_S is essentially control signal CTRL level-shifted to transition between zero volts and VPP.

FIG. 5 details ramp-up sub-circuit 310 of FIG. 3. Ramp-up sub-circuit 310 receives control signal CTRL, programming voltage VPP, the controlled programming voltage VPP_S from level-shift bank 305, and the complementary clocks CLK1_S and CLK1B_S, also from level-shift bank 305.
Complementary clock signals CLK1_S and CLK1B_S connect to respective transistors 500 and 505. These and other transistors with similarly depicted gate structures are pull-back-drain transistors, which are more voltage tolerant than more typical MOS transistors.

Clock signal CLK1_S periodically turns on transistor 500 to charge a capacitor 510. Clock signal CLK1B_S then turns on transistor 505 to dump the charge collected on capacitor 510 onto a second capacitor 515. Capacitor 510 is substantially smaller than capacitor 515 (400 times smaller in one embodiment), so the output signal on a terminal VPPR1 rises gradually from zero to VPP. The frequencies and duty cycles of clock signals CLK1_S and CLK1B_S can be adjusted to alter the rise time RT1 of signal VPPR1.

Ramp-up sub-circuit 310 includes some control circuitry 520 that removes the charge collected on capacitors 510 and 515 when control signal CTRL is brought low. Control circuit 520 additionally includes an output terminal RGND that grounds the output terminal VPP_R of the entire ramp-up circuit 300, as discussed below in connection with FIG. 7.

FIG. 6 depicts an embodiment of booster sub-circuit 315 of FIG. 3, another type of ramp-up circuit similar to ramp-up sub-circuit 310 of FIG. 5. Booster sub-circuit 315 receives the programming voltage VPP, the complementary clocks CLK2_S and CLK2B_S from level-shift bank 305, and the ramped programming voltage VPPR1 from input ramp-up sub-circuit 310. Booster 315 includes a pair of high-voltage OR gates 600 and 605, a series of inverters 610, and a ramp-up circuit 615 similar to the ramp-up portion of input ramp-up sub-circuit 310 of FIG. 5.

Inverters 610 conventionally include both PMOS and NMOS transistors. The first PMOS transistor in the series is wider than the first NMOS transistor, so the threshold voltage Vth of the first inverter is close to the threshold voltage of the first PMOS transistor. The second and third inverters are added to sharpen the edge of the resulting inverted version of signal VPPR1 (VPPR1B). The falling edge of signal VPPR1B is delayed from the beginning of the rising edge of ramped-up signal VPPR1 by the time required for VPPR1 to rise to within a threshold voltage Vth of programming voltage VPP. In one embodiment, a single inverter takes the place of inverters 610 to save area.

OR gates 600 and 605 OR the inverted ramp signal VPPR1B with respective clock signals CLK2_S and CLK2B_S and provide the resulting complementary output signals to a pair of transistors 620 and 625 within ramp-up circuit 615. Ramp-up circuit 615 functions in the same manner as the similar portion of ramp-up sub-circuit 310, except that capacitors 630 and 635 within ramp-up circuit 615 have a ratio of approximately 1 to 225, which is to say that capacitor 630 is 225 times smaller than capacitor 635. Booster sub-circuit 315 produces a second ramp-up signal VPPR2, the rise time RT2 of which can be modified by changing the frequencies and duty cycles of clock signals CLK2_S and CLK2B_S. The capacitor values here and in FIG. 5 can also be modified to change the rise times of the various ramped-up voltages.

FIG. 7 depicts output stage 320 of FIG. 3. Output stage 320 receives the first and second ramp-up signals VPPR1 and VPPR2, the program voltage VPP, the controlled program voltage VPP_S sourced from level-shift block 305, and the signal RGND from ramp-up sub-circuit 310. Output stage 320 includes two N-type transistors 700 and 705 and four P-type transistors 710, 715, 720, and 725.
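For illustration, the charge-sharing ramp described above for sub-circuit 310 can be approximated numerically. The following sketch assumes ideal switching (no threshold losses) and borrows only the 400:1 capacitor ratio and a 13-volt VPP from this disclosure; the cycle counts and capacitance units are invented for the example:

    # Illustrative model of the charge-sharing ramp in sub-circuit 310.
    # Each clock cycle, capacitor C_SMALL (510) is charged to VPP and then
    # dumps its charge onto C_LARGE (515); charge conservation gives the
    # new output voltage. Values are assumptions for illustration.

    VPP = 13.0       # programming voltage, volts (13 V example from FIG. 8)
    C_SMALL = 1.0    # capacitor 510, arbitrary units
    C_LARGE = 400.0  # capacitor 515; 400x larger per the embodiment above

    def ramp_vppr1(cycles: int) -> list[float]:
        """Return VPPR1 after each charge-dump cycle."""
        v = 0.0
        history = []
        for _ in range(cycles):
            # Charge sharing: C_SMALL at VPP merges with C_LARGE at v.
            v = (C_SMALL * VPP + C_LARGE * v) / (C_SMALL + C_LARGE)
            history.append(v)
        return history

    if __name__ == "__main__":
        trace = ramp_vppr1(2000)
        for n in (10, 100, 500, 1000, 2000):
            print(f"cycle {n:5d}: VPPR1 = {trace[n - 1]:.2f} V")

In this model VPPR1 asymptotically approaches VPP; a faster clock or a larger small-to-large capacitor ratio shortens the rise time RT1, consistent with the adjustability noted above.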
Transistors 700 and 710 are connected together in parallel to form a pass gate 717 capable of pulling terminal VPP_R substantially to the programming voltage VPP without requiring any node within ramp-up circuit 300 to rise above VPP. The operation of output stage 320 and the remaining circuits within ramp-up circuit 300 is explained below in connection with FIG. 8.

FIG. 8 is a waveform diagram depicting the operation of ramp-up circuit 300 of FIG. 3, as detailed in FIGS. 4-7. The process begins when the programming voltage VPP is brought high, to 13 volts for example (edge 802). Next, a control signal CTRL, typically brought in externally from a tester, is brought high to enable the various circuits within ramp-up circuit 300 (edge 804). As a result of the control signal being brought high, level-shift circuit 305 produces the controlled version VPP_S of the programming voltage VPP. VPP_S transitions between zero and programming voltage VPP when control signal CTRL transitions between zero and VDD. Although not depicted in FIG. 8, level shifter 305 produces complementary clock signals that oscillate between approximately zero volts and the programming voltage VPP.

Turning now to FIG. 5, the voltage on terminal VPP_S and the complementary clock signals CLK1_S and CLK1B_S cause the voltage on output terminal VPPR1 (the first ramp-up voltage) to gradually climb from zero volts to approximately VPP, as indicated by arrow 805 of FIG. 8.

Turning next to FIG. 6, inverter chain 610 transitions when the first ramp-up signal VPPR1 rises to within approximately one threshold voltage Vth (that of the first PMOS transistor) of the programming voltage VPP (arrow 810 of FIG. 8). The resulting low voltage on line VPPR1B enables both of OR gates 600 and 605 (FIG. 6), causing their respective outputs to begin oscillating as defined by clocks CLK2_S and CLK2B_S. These clocks then periodically and alternately enable transistors 620 and 625 of ramp-up circuit 615 so that terminal VPPR2 (the second ramp-up voltage) gradually rises from approximately zero volts to VPP. The falling level on line VPPR1B thus initiates the gradual rise of output terminal VPPR2 (arrow 815 of FIG. 8).

Referring now to FIG. 7, the rising edge on the first ramp-up signal VPPR1 gradually turns on transistor 700, thus pulling output terminal VPP_R up toward programming voltage VPP. This transition is depicted in FIG. 8 using arrow 820. Because terminal VPPR1 rises only as high as VPP, transistor 700 cannot, by itself, raise output terminal VPP_R to the level of the programming voltage VPP. However, terminal VPPR2 begins going high after VPPR1, gradually turning on transistor 705 to pull the gate of transistor 710 toward ground potential. Grounding the gate of transistor 710 turns on transistor 710, which then pulls output terminal VPP_R the rest of the way to programming voltage VPP (arrow 825 of FIG. 8). Ramp-up circuit 300 thus achieves the goal of providing a substantially undiminished programming voltage on terminal VPP_R to the bit lines of selected memory cells without requiring any internal node on the CPLD to rise above programming voltage VPP.

Returning to FIG. 3, control terminal CTRL is brought low each time steering logic 160 (FIG. 1) is to convey the programming voltage to a different bit line. Returning the control signal CTRL to ground disables each of the elements in FIG. 3, and control circuit 520 of FIG. 5 pulls output terminal VPP_R to ground. The entire cycle then begins again with the next assertion of control signal CTRL.
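The two-stage pull-up just described can be summarized in a short behavioral sketch. This is illustrative only: the threshold voltage and the discrete voltage steps are assumptions, and the real circuit is continuous analog behavior rather than the simple function below:

    # Behavioral sketch of output stage 320's two-stage pull-up.
    # Stage 1: VPPR1 drives NMOS 700, which can pull VPP_R only to
    #          about VPPR1 - Vth (an NMOS passes a degraded high level).
    # Stage 2: VPPR2 turns on NMOS 705, grounding the gate of PMOS 710,
    #          which then pulls VPP_R the rest of the way to VPP.
    # Vth and the sample points are assumptions for illustration.

    VPP = 13.0   # programming voltage, volts
    VTH = 1.0    # assumed NMOS threshold voltage, volts

    def vpp_r(vppr1: float, vppr2: float) -> float:
        """Approximate VPP_R for given ramp-node voltages."""
        # NMOS 700 alone: source-follower level limited to VPPR1 - VTH.
        nmos_level = max(0.0, vppr1 - VTH)
        # Once VPPR2 turns on transistor 705, the gate of PMOS 710 is
        # pulled low and 710 pulls VPP_R fully to VPP.
        pmos_on = vppr2 > VTH
        return VPP if pmos_on else min(nmos_level, VPP)

    if __name__ == "__main__":
        # VPPR2 starts rising only after VPPR1 nears VPP (arrow 815).
        for vppr1, vppr2 in [(4.0, 0.0), (8.0, 0.0), (12.0, 0.0),
                             (13.0, 0.5), (13.0, 2.0)]:
            print(f"VPPR1={vppr1:5.1f}  VPPR2={vppr2:4.1f}  "
                  f"VPP_R~{vpp_r(vppr1, vppr2):5.1f}")

The sketch shows why both transistors are needed: with the NMOS alone, VPP_R tops out a threshold below VPP; the PMOS completes the final step without any node exceeding VPP.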
FIG. 9 depicts an output stage 900 similar to output stage 320, like-numbered elements being the same. Output stage 900, employed in place of output stage 320 in one embodiment, provides better control over the turn-on time of transistor 705. Output stage 900 differs from output stage 320 in that the gate of transistor 705 receives a control signal VBON, where "BON" stands for "booster on." VBON is a ramped-up signal that weakly follows the rising voltage transitions on terminals VPPR1 and VPPR2, and consequently turns transistor 705 on more slowly than does output stage 320. Also different from output stage 320, a signal VPUB, where "PUB" stands for "pull-up bar," is taken from the node connected to the gate of transistor 710. The source of signal VPUB is depicted below in FIG. 10.

FIG. 10 depicts a circuit 1000 used to generate the signal VBON, which controls transistor 705 in FIG. 9. Circuit 1000 includes a pair of parallel-connected NMOS transistors 1005 and 1010, the gates of which connect to respective control signals VPPR2 and VPPR1 (FIGS. 5 and 6). Circuit 1000 also includes a pair of transistors 1015 and 1020, the gates of which connect to terminal VPUB of FIG. 9, and a transistor 1025, the gate of which connects to terminal VPPR1B of FIG. 5.

In operation, signal VBON rises toward VPP as ramp-up signals VPPR2 and VPPR1 turn on respective transistors 1005 and 1010. Transistors 1005 and 1010 are relatively weak, so the maximum voltage on terminal VBON is several threshold voltages below VPP. The weak transistors 1005 and 1010 provide a slow rise time on the gate of transistor 705. As signal VBON rises, transistor 705 pulls node VPUB toward ground, eventually turning on transistors 710 and 1015. As in the embodiment of FIG. 7, transistor 710 pulls output terminal VPP_R all the way to programming voltage VPP; transistor 1015 likewise pulls terminal VBON all the way to programming voltage VPP, and consequently turns on transistor 705 completely. When terminal VPP_S returns to ground, transistors 1020 and 1025 pull terminal VBON to ground. In one embodiment, transistors 1020 and 1025 are replaced with a single transistor controlled by either signal VPPR1B or signal VPUB.

While the present invention has been described in connection with specific embodiments, variations of these embodiments will be obvious to those of ordinary skill in the art. For example, application of the invention is not limited to the above-described CPLD architecture, or even to CPLDs. Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance, the method of interconnection establishes some desired electrical communication between two or more circuit nodes, or terminals. Such communication may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art. Therefore, the spirit and scope of the appended claims should not be limited to the foregoing description.
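As a final illustration of the FIG. 9/FIG. 10 variant described above, the VBON/VPUB feedback can be sketched as a small discrete-time loop. The step sizes, thresholds, and the three-threshold ceiling below are assumptions chosen only to reproduce the qualitative behavior ("weak pull-up, then snap to VPP"):

    # Illustrative discrete-time sketch of the VBON/VPUB feedback loop of
    # FIGS. 9 and 10. Weak transistors 1005/1010 pull VBON up slowly; once
    # VBON turns on transistor 705, VPUB falls, turning on PMOS 710 (the
    # output pull-up) and PMOS 1015, which snaps VBON fully to VPP.
    # Rates and thresholds are assumptions for illustration.

    VPP, VTH = 13.0, 1.0
    WEAK_STEP = 0.2                 # slow pull-up per step via 1005/1010
    WEAK_CEILING = VPP - 3 * VTH    # weak pull-up stalls well below VPP

    vbon = 0.0
    vpub = VPP                      # VPUB starts high (PMOS 710 off)
    for step in range(40):
        if vpub < VTH:                          # 1015 on: VBON snaps to VPP
            vbon = VPP
        elif vbon < WEAK_CEILING:               # weak pull-up region
            vbon = min(vbon + WEAK_STEP, WEAK_CEILING)
        if vbon > VTH:                          # 705 on: VPUB pulled low
            vpub = max(vpub - 1.0, 0.0)
        if step % 10 == 0:
            print(f"step {step:2d}: VBON={vbon:5.2f}  VPUB={vpub:5.2f}")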
A compensating initialization module may be automatically inserted into a design to compensate for register retiming that changes the design's behavior under reset. The device configuration circuitry may provide an adjustment sequence length as well as a start signal to the initialization module to properly reset the retimed user logic implemented on the integrated circuit after initial configuration and unfreezing of the integrated circuit. The auto initialization module may control the c-cycle initialization process and indicate to the user logic when c-cycle initialization has completed. The user logic may subsequently begin a user-specified reset sequence. When the user-specified reset sequence ends, the user logic implemented on the integrated circuit may begin normal operations. Additionally, a user reset request may also trigger the auto initialization module to begin a reset process.
1. An integrated circuit comprising: logic circuitry that is reset using a reset sequence; configuration circuitry that programs the logic circuitry to implement a custom logic function, wherein the configuration circuitry provides a count value c; and an initialization module inserted between the logic circuitry and the configuration circuitry, wherein the initialization module automatically delays the reset sequence by c clock cycles.

2. The integrated circuit of claim 1, wherein the initialization module receives a clock signal from the logic circuitry.

3. The integrated circuit of any of claims 1-2, wherein the initialization module receives a reset trigger signal from a selected one of the configuration circuitry and the logic circuitry.

4. The integrated circuit of claim 3, wherein the reset trigger signal comprises a start signal transmitted from the configuration circuitry to the initialization module.

5. The integrated circuit of claim 3, wherein the reset trigger signal comprises a request signal transmitted from the logic circuitry to the initialization module.

6. The integrated circuit of claim 3, wherein the initialization module includes a synchronization circuit for synchronizing the reset trigger signal with the clock signal to generate a synchronized reset trigger signal.

7. The integrated circuit of claim 3, wherein the initialization module further comprises a counter circuit enabled by the synchronized reset trigger signal.

8. The integrated circuit of claim 7, wherein the initialization module further comprises a reset control circuit that monitors when the counter circuit has counted c clock cycles.

9. The integrated circuit of claim 8, wherein the reset control circuit receives the reset trigger signal and performs a handshake protocol with the reset trigger signal.

10. The integrated circuit of claim 9, wherein the reset control circuit asserts an output signal when the counter circuit has counted c clock cycles and deasserts the output signal in response to deassertion of the reset trigger signal, and wherein the output signal is delivered to the logic circuitry.

11. A method of operating an integrated circuit that includes logic circuitry, configuration circuitry, and an initialization module, the method comprising: with the configuration circuitry, programming the logic circuitry to implement a custom logic function; with the configuration circuitry, providing a count value c; with the initialization module, automatically delaying a reset sequence by c clock cycles if the count value c is greater than zero, wherein the initialization module is coupled between the logic circuitry and the configuration circuitry; and resetting the logic circuitry using the reset sequence after the c-clock-cycle delay.

12. The method of claim 11, further comprising: with the configuration circuitry, asserting a configuration completion signal indicating when the programming of the logic circuitry is complete; and with the configuration circuitry, asserting an initialization completion signal after an unfreeze cycle following the assertion of the configuration completion signal.

13. The method of any of claims 11-12, further comprising: generating a counter output using a counter circuit in the initialization module; and receiving, using a reset control circuit in the initialization module, the counter output from the counter circuit and a request signal from the logic circuitry.
14. The method of claim 13, further comprising: asserting a reset control signal, using the reset control circuit, in response to determining that the counter output is equal to zero; and deasserting the reset control signal in response to determining that the request signal is deasserted.

15. The method of claim 13, further comprising: asserting the request signal using the logic circuitry; in response to determining that the counter output is equal to zero while the request signal is asserted, asserting the reset control signal using the reset control circuit; and keeping the reset control signal asserted as long as the request signal is asserted.

16. An integrated circuit comprising: logic circuitry that outputs a request signal and a clock signal; configuration circuitry that outputs a counter value c and a start signal; and an initialization module that receives the request signal and the clock signal from the logic circuitry and that also receives the counter value c and the start signal from the configuration circuitry.

17. The integrated circuit of claim 16, wherein the initialization module comprises: a counter circuit that is initialized to the counter value c, that is selectively enabled by a selected one of the start signal and the request signal, and that is controlled by the clock signal.

18. The integrated circuit of claim 17, wherein the initialization module further comprises: a reset control circuit that monitors when the counter circuit exhibits a count value of zero, wherein the reset control circuit receives the request signal and outputs a control signal.

19. The integrated circuit of claim 18, wherein the logic circuitry comprises: a reset state machine that receives the control signal from the reset control circuit, that outputs the request signal, and that outputs a reset signal.

20. The integrated circuit of claim 19, wherein the logic circuitry further comprises: a register that is reset by the reset signal, wherein the reset state machine further receives an error signal that triggers an assertion of the request signal.

21. An integrated circuit comprising: means for programming logic circuitry to implement a custom logic function and for providing a count value c; means for automatically delaying a reset sequence by c clock cycles if the count value c is greater than zero; and means for resetting the logic circuitry using the reset sequence after the c-clock-cycle delay.

22. The integrated circuit of claim 21, wherein the means for programming the logic circuitry further comprises: means for asserting a configuration completion signal indicating when the programming of the logic circuitry is complete; and means for asserting an initialization completion signal after an unfreeze cycle following the assertion of the configuration completion signal.

23. The integrated circuit of any of claims 21-22, further comprising: means for generating a counter output; and means for receiving the counter output and for receiving a request signal from the logic circuitry.

24. The integrated circuit of claim 23, further comprising: means for asserting a reset control signal in response to determining that the counter output is equal to zero; and means for deasserting the reset control signal in response to determining that the request signal is deasserted.
25. The integrated circuit of claim 23, wherein the logic circuitry asserts the request signal, the integrated circuit further comprising: means for asserting a reset control signal when the request signal is asserted, in response to determining that the counter output is equal to zero; and means for maintaining the reset control signal asserted as long as the request signal is asserted.
Method and Apparatus for Automatically Implementing a Compensating Reset for Retimed Circuitry

BACKGROUND

This invention relates to integrated circuits and, more particularly, to implementing delayed resets for registers within an integrated circuit design.

Each transition from one technology node to the next has produced smaller transistor geometries and thus potentially more functionality per unit area on the integrated circuit die. Synchronous integrated circuits have further benefited from this development, as evidenced by reduced interconnect and cell delays, which have led to performance improvements.

To further improve performance, solutions such as register retiming have been proposed, in which registers are moved among portions of the combinational logic, thereby achieving a more balanced distribution of delays between registers and thus a circuit that can operate at potentially higher clock frequencies.

Typically, clock-edge-triggered flip-flops are used to implement the registers. Before retiming, these flip-flops power up to an initial state when the integrated circuit is powered up, but this initial state may be unknown. Therefore, a reset sequence is typically provided to the flip-flops to bring them to a known reset state.

However, after retiming, the retimed integrated circuit can behave differently than the integrated circuit prior to retiming. In some cases, the same reset sequence provided to the flip-flops before retiming will not work on the retimed flip-flops. Therefore, it would be desirable to account for flip-flops that move during retiming, to provide an updated reset sequence for the retimed flip-flops, and to implement circuitry that uses the updated reset sequence to reset the retimed flip-flops.

It is within this context that the embodiments herein are presented.

SUMMARY

This invention relates generally to integrated circuits and, more particularly, to a method and apparatus for automatically resetting retimed circuitry using a delayed reset sequence generated by a computer-aided design (CAD) tool. CAD tools implemented on integrated circuit design computing equipment are often used to perform register move operations (e.g., register retiming, register duplication, register merging) to improve the overall performance of a circuit design.

To accurately reset the retimed circuitry using a delayed reset sequence, the integrated circuit can include logic circuitry, configuration circuitry, and an initialization module. The original reset sequence can be used on a per-clock-domain basis to reset the logic circuitry. The logic circuitry can provide a clock signal for a given clock domain to the initialization module. The initialization module can receive a reset trigger signal from the configuration circuitry or the logic circuitry (e.g., a start signal from the configuration circuitry or a reset request signal from the logic circuitry), which can be synchronized to the clock signal using a synchronization circuit within the initialization module. The synchronization circuit can generate a corresponding synchronized reset trigger signal.

The initialization module can include a counter circuit that receives the synchronized reset trigger signal. The configuration circuitry that programs the logic circuitry to implement the custom logic function can also provide a count value, corresponding to the original reset sequence, to the counter circuit.
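The counter-and-handshake behavior summarized here, and elaborated in the paragraphs that follow, can be sketched as a simple clocked model. This is an illustrative behavioral model only; the signal roles follow the description, but the interface, internal states, and cycle-level timing are assumptions, not the disclosed RTL:

    # Illustrative behavioral model of the initialization module: a counter
    # initialized to c counts down once a (synchronized) reset trigger is
    # seen; the reset control output asserts after c clock cycles and
    # deasserts when the trigger deasserts (the handshake described below).

    class InitModule:
        def __init__(self, c: int):
            self.c = c              # count value from configuration circuitry
            self.count = c
            self.running = False
            self.done = False       # reset control output to user logic

        def clock(self, trigger: bool) -> bool:
            """Advance one clock cycle; return the reset control output."""
            if trigger and not self.running and not self.done:
                self.running = True     # start signal or user reset request
                self.count = self.c
            if self.running:
                if self.count > 0:
                    self.count -= 1     # delay the reset sequence by c cycles
                else:
                    self.running, self.done = False, True
            if not trigger and self.done:
                self.done = False       # handshake: deassert when trigger drops
            return self.done

    if __name__ == "__main__":
        init = InitModule(c=3)
        # Trigger held high until the module responds, then released.
        stimulus = [0, 1, 1, 1, 1, 1, 1, 0, 0]
        for cyc, trig in enumerate(stimulus):
            out = init.clock(bool(trig))
            print(f"cycle {cyc}: trigger={trig} done={int(out)}")

Running the stimulus shows the output asserting only after the c-cycle delay and deasserting once the trigger is released, mirroring the handshake described next.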
The initialization module can be inserted between the logic circuitry and the configuration circuitry to automatically delay the original reset sequence by c clock cycles. The counter circuit in the initialization module can thus provide a count signal to the reset control circuit in the initialization module.

When the counter circuit has counted c clock cycles, the reset control circuit can assert an output signal (e.g., a reset control signal) that is delivered to the logic circuitry. In response to deassertion of the reset trigger signal, the reset control circuit can deassert the output signal. This constitutes a handshake operation between the reset control signal and the reset trigger signal.

The logic circuitry can include a reset state machine that receives the reset control signal (or simply the control signal) from the reset control circuit. The reset state machine may output the request signal in response to an error signal indicating erroneous operation of the user logic circuit. The reset state machine can also output a reset signal to the user logic circuit, and more specifically to registers within the user logic circuit, to implement a reset sequence (e.g., an adjustment sequence, an original sequence, etc.).

Further features of the invention, its nature, and various advantages will be more apparent from the accompanying drawings.

The invention provides the following technical solutions:

1. An integrated circuit comprising: logic circuitry that is reset using a reset sequence; configuration circuitry that programs the logic circuitry to implement a custom logic function, wherein the configuration circuitry provides a count value c; and an initialization module inserted between the logic circuitry and the configuration circuitry, wherein the initialization module automatically delays the reset sequence by c clock cycles.

2. The integrated circuit of claim 1, wherein the initialization module receives a clock signal from the logic circuitry.

3. The integrated circuit of claim 2, wherein the initialization module receives a reset trigger signal from a selected one of the configuration circuitry and the logic circuitry.

4. The integrated circuit of claim 3, wherein the reset trigger signal comprises a start signal transmitted from the configuration circuitry to the initialization module.

5. The integrated circuit of claim 3, wherein the reset trigger signal comprises a request signal transmitted from the logic circuitry to the initialization module.

6. The integrated circuit of claim 3, wherein the initialization module includes a synchronization circuit for synchronizing the reset trigger signal with the clock signal to generate a synchronized reset trigger signal.

7. The integrated circuit of claim 3, wherein the initialization module further comprises a counter circuit enabled by the synchronized reset trigger signal.

8. The integrated circuit of claim 7, wherein the initialization module further comprises a reset control circuit that monitors when the counter circuit has counted c clock cycles.

9. The integrated circuit of claim 8, wherein the reset control circuit receives the reset trigger signal and performs a handshake protocol with the reset trigger signal.
10. The integrated circuit of claim 9, wherein the reset control circuit asserts an output signal when the counter circuit has counted c clock cycles and deasserts the output signal in response to deassertion of the reset trigger signal, and wherein the output signal is delivered to the logic circuitry.

11. A method of operating an integrated circuit that includes logic circuitry, configuration circuitry, and an initialization module, the method comprising: with the configuration circuitry, programming the logic circuitry to implement a custom logic function; with the configuration circuitry, providing a count value c; with the initialization module, automatically delaying a reset sequence by c clock cycles if the count value c is greater than zero, wherein the initialization module is coupled between the logic circuitry and the configuration circuitry; and resetting the logic circuitry using the reset sequence after the c-clock-cycle delay.

12. The method of claim 11, further comprising: with the configuration circuitry, asserting a configuration completion signal indicating when the programming of the logic circuitry is complete; and with the configuration circuitry, asserting an initialization completion signal after an unfreeze cycle following the assertion of the configuration completion signal.

13. The method of claim 11, further comprising: generating a counter output using a counter circuit in the initialization module; and receiving, using a reset control circuit in the initialization module, the counter output from the counter circuit and a request signal from the logic circuitry.

14. The method of claim 13, further comprising: asserting a reset control signal, using the reset control circuit, in response to determining that the counter output is equal to zero; and deasserting the reset control signal in response to determining that the request signal is deasserted.

15. The method of claim 13, further comprising: asserting the request signal using the logic circuitry; in response to determining that the counter output is equal to zero while the request signal is asserted, asserting the reset control signal using the reset control circuit; and keeping the reset control signal asserted as long as the request signal is asserted.

16. An integrated circuit comprising: logic circuitry that outputs a request signal and a clock signal; configuration circuitry that outputs a counter value c and a start signal; and an initialization module that receives the request signal and the clock signal from the logic circuitry and that also receives the counter value c and the start signal from the configuration circuitry.

17. The integrated circuit of claim 16, wherein the initialization module comprises: a counter circuit that is initialized to the counter value c, that is selectively enabled by a selected one of the start signal and the request signal, and that is controlled by the clock signal.

18. The integrated circuit of claim 17, wherein the initialization module further comprises: a reset control circuit that monitors when the counter circuit exhibits a count value of zero, wherein the reset control circuit receives the request signal and outputs a control signal.
19. The integrated circuit of claim 18, wherein the logic circuitry comprises: a reset state machine that receives the control signal from the reset control circuit, that outputs the request signal, and that outputs a reset signal.

20. The integrated circuit of claim 19, wherein the logic circuitry further comprises: a register that is reset by the reset signal, wherein the reset state machine further receives an error signal that triggers an assertion of the request signal.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of an illustrative programmable integrated circuit in accordance with an embodiment.
FIG. 2 is an illustration of an illustrative retiming operation in accordance with an embodiment.
FIG. 3 is an illustration of an illustrative pipelined routing resource that uses a register to pipeline a routing signal in accordance with an embodiment.
FIG. 4 is an illustration of a circuit design system that can be used to design an integrated circuit in accordance with an embodiment.
FIG. 5 is an illustration of illustrative computer-aided design (CAD) tools that can be used in a circuit design system in accordance with an embodiment.
FIG. 6 is a flow diagram of illustrative steps for designing an integrated circuit in accordance with an embodiment.
FIG. 7 is an illustration of possible states between which registers may transition in accordance with an embodiment.
FIG. 8A is an illustration of an illustrative circuit prior to retiming in accordance with an embodiment.
FIG. 8B is an illustration of a retimed version of the circuit of FIG. 8A in accordance with an embodiment.
FIG. 9 is a flow diagram of illustrative steps for resetting retimed circuitry in accordance with an embodiment.
FIG. 10 is an illustration of illustrative reset initialization circuitry in accordance with an embodiment.
FIG. 11 is an illustration of an illustrative auto initialization module within the reset initialization circuitry of FIG. 10 in accordance with an embodiment.
FIG. 12 is an illustration of illustrative user logic circuitry within the reset initialization circuitry of FIG. 10 in accordance with an embodiment.
FIG. 13 is an illustrative timing diagram of operating the reset initialization circuitry of FIG. 10 after initial configuration of retimed circuitry in accordance with an embodiment.
FIG. 14 is an illustrative timing diagram of operating the reset initialization circuitry of FIG. 10 after a user reset request in accordance with an embodiment.

DETAILED DESCRIPTION

The presented embodiments relate to integrated circuits and, more particularly, to circuitry within an integrated circuit for implementing a delayed reset sequence (e.g., a user-specified reset sequence delayed by c clock cycles).

Performing a retiming operation on an integrated circuit can change the configuration of registers within the integrated circuit. In some cases, when the reset sequence used for the registers prior to retiming (e.g., the reset sequence provided by the designer for the corresponding circuit design) is applied, the retimed registers will not be accurately reset.

Therefore, it would be desirable to provide an improved way of generating a retimed reset sequence by modeling the registers that are moved during retiming to calculate an adjustment value, and by using the calculated adjustment value together with the reset sequence used prior to retiming. According to an embodiment, this can be done by tracking the movement of the retimed registers across different types of circuit elements.
In this manner, the minimum length of the adjustment sequence prepended to the retimed reset sequence can be calculated.

The reset initialization circuitry within the integrated circuit can provide the prepended adjustment sequence to the retimed registers. The retimed registers receive the designer-provided reset sequence after the prepended adjustment sequence has been applied. The reset initialization circuitry can properly reset the retimed circuitry after the initial configuration of the integrated circuit and during a reset requested by a user of the integrated circuit.

Those skilled in the art will recognize that the present exemplary embodiments can be practiced without some or all of these specific details. In other instances, well-known operations have not been described in detail so as not to unnecessarily obscure the embodiments.

An illustrative embodiment of a programmable integrated circuit, such as programmable logic device (PLD) 100, that can be configured to implement a circuit design is shown in FIG. 1. As shown in FIG. 1, the programmable logic device (PLD) can include a two-dimensional array of functional blocks, including, for example, logic array blocks (LABs) 110 and other functional blocks such as random-access memory (RAM) blocks 130 and digital signal processing (DSP) blocks 120. A functional block such as LAB 110 can include smaller programmable regions (e.g., logic elements, configurable logic blocks, or adaptive logic modules) that receive input signals and perform custom functions on the input signals to produce output signals.

Programmable logic device 100 can include programmable memory elements. Configuration data (also referred to as programming data) can be loaded into the memory elements using input/output elements (IOEs) 102. Once loaded, the memory elements each provide a corresponding static control signal that controls the operation of an associated functional block (e.g., LAB 110, DSP 120, RAM 130, or input/output elements 102).

In a typical scenario, the outputs of the loaded memory elements are applied to the gates of MOS transistors in the functional blocks to turn certain transistors on or off and thereby configure the logic in the functional blocks, including the routing paths. Programmable logic circuit elements that can be controlled in this manner include multiplexers (e.g., multiplexers used for forming routing paths in interconnect circuits), look-up tables, logic arrays, AND, OR, NAND, and NOR logic gates, pass gates, etc.

The memory elements can use any suitable volatile and/or non-volatile memory structures, such as random-access memory (RAM) cells, fuses, antifuses, programmable read-only memory cells, mask-programmed and laser-programmed structures, combinations of these structures, and so forth. Because the memory elements are loaded with configuration data during programming, the memory elements are sometimes referred to as configuration memory, configuration RAM (CRAM), or programmable memory elements.

Additionally, the programmable logic device can have input/output elements (IOEs) 102 for driving signals off of PLD 100 and for receiving signals from other devices.
Input/output elements 102 can include parallel input/output circuitry, serial data transceiver circuitry, differential receiver and transmitter circuitry, or other circuitry used to connect one integrated circuit to another integrated circuit.

The PLD can also include programmable interconnect circuitry in the form of vertical routing channels 140 (i.e., interconnects formed along a vertical axis of PLD 100) and horizontal routing channels 150 (i.e., interconnects formed along a horizontal axis of PLD 100), each routing channel including at least one track to route at least one wire. If desired, the interconnect circuitry can include pipeline elements, and the contents stored in these pipeline elements can be accessed during operation. For example, a programming circuit can provide read and write access to a pipeline element.

Note that, besides the topology of the interconnect circuitry depicted in FIG. 1, other routing topologies are intended to be included within the scope of the present invention. For example, the routing topology may include wires that travel diagonally or that travel horizontally and vertically along different parts of their extent, as well as wires that are perpendicular to the device plane in the case of three-dimensional integrated circuits, and the driver of a wire may be located at a different point than one end of the wire. The routing topology may include global wires that span substantially all of PLD 100, fractional global wires such as wires that span part of PLD 100, staggered wires of a particular length, smaller local wires, or any other suitable interconnection resource arrangement.

Programmable logic device (PLD) 100 can be configured to implement a custom circuit design if desired. For example, the configuration RAM can be programmed such that LAB 110, DSP 120, and RAM 130, the programmable interconnect circuitry (i.e., vertical channels 140 and horizontal channels 150), and input/output elements 102 form the circuit design implementation.

FIG. 2 shows an example of different versions of a circuit design that PLD 100 can implement. The first version of the circuit design can include registers 210, 220, 230, and 240 and combinational logic 245. Register 210 can send a signal to register 220; register 220 can send the signal to register 230 through combinational logic 245; and register 230 can send the signal to register 240. As an example, the path from register 220 through combinational logic 245 to register 230 may have a delay of 6 nanoseconds (ns), while the paths between registers 210 and 220 and between registers 230 and 240 may have minimal delays (e.g., delays of 0.5 ns, very close to 0 ns, etc.). Therefore, the first version of the circuit design can operate at a frequency of 166 MHz.

Performing register retiming on the first version of the circuit design creates a second version of the circuit design. For example, register 230 can be pushed backward (sometimes referred to as backward retiming) through a portion of combinational logic 245, thereby separating combinational logic 245 of the first version of the circuit design into combinational logic 242 and 244 of the second version of the circuit design.
In the second version of the circuit design, register 210 may send a signal to register 220; register 220 may send the signal to register 230 through combinational logic 242; and register 230 may send the signal to register 240 through combinational logic 244.

As an example, the path from register 220 through combinational logic 242 to register 230 may have a delay of 4 ns, and the path from register 230 through combinational logic 244 to register 240 may have a delay of 2 ns. Therefore, the second version of the circuit design can operate at a frequency of 250 MHz, which is limited by the path with the longest delay (sometimes referred to as the critical path).

Performing register retiming on the second version of the circuit design creates a third version of the circuit design. For example, register 220 can be pushed forward (sometimes referred to as forward retiming) through a portion of combinational logic 242, thereby separating combinational logic 242 of the second version of the circuit design into combinational logic 241 and 243 of the third version of the circuit design. In the third version of the circuit design, register 210 may send a signal to register 220 through combinational logic 241; register 220 may send the signal to register 230 through combinational logic 243; and register 230 may send the signal to register 240 through combinational logic 244.

As an example, the paths from register 210 through combinational logic 241 to register 220, from register 220 through combinational logic 243 to register 230, and from register 230 through combinational logic 244 to register 240 may all have delays of 2 ns. Therefore, the third version of the circuit design can operate at a frequency of 500 MHz, which is three times the frequency at which the first version of the circuit design can operate.

If desired, routing resources such as vertical routing channels 140 or horizontal routing channels 150 of FIG. 1 can include pipeline elements that can facilitate register retiming. FIG. 3 depicts a pipelined routing resource 300 that uses a register, in accordance with an embodiment. As shown, pipelined routing resource 300 includes a first multiplexer 302, a driver 304, a register 306, and a second multiplexer 308.

Multiplexer 302 can be a driver input multiplexer (DIM) or a functional block input multiplexer (FBIM). A DIM selects a signal from multiple sources and sends the selected signal to driver 304, which drives the corresponding wire. The multiple sources may include signals from outputs of functional blocks and from other routing wires that travel in the same direction as, or in a direction orthogonal to, the driven wire. An FBIM outputs a signal to a functional block and can select the signal from multiple routing wires.

As shown in FIG. 3, multiplexer 302 can be pipelined by providing its output to the data input of register 306. Multiplexer 308 in pipelined routing resource 300 may receive the output of multiplexer 302 directly and may also receive the data output from register 306.

Although pipelined routing resource 300 includes register 306, those skilled in the art will recognize that different register implementations can be used to store a routing signal, such as edge-triggered flip-flops, pulse latches, transparent-low latches, and transparent-high latches, to name a few.
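Returning briefly to the retiming example of FIG. 2, the quoted operating frequencies follow directly from the critical-path delays. A quick illustrative check (f = 1/t_critical, ignoring register setup/clock-to-out overhead and clock skew; the 0.5 ns figures are the minimal delays mentioned above):

    # Maximum clock frequency from the critical-path delay for the three
    # versions of the FIG. 2 circuit design (f = 1 / t_critical).

    versions = {
        "first (6 ns critical path)":  [0.5e-9, 6e-9, 0.5e-9],
        "second (4 ns critical path)": [0.5e-9, 4e-9, 2e-9],
        "third (2 ns critical path)":  [2e-9, 2e-9, 2e-9],
    }

    for name, path_delays in versions.items():
        t_crit = max(path_delays)   # slowest register-to-register path
        f_max = 1.0 / t_crit        # hertz
        print(f"{name}: f_max = {int(f_max / 1e6)} MHz")

This reproduces the 166 MHz, 250 MHz, and 500 MHz figures: retiming raises the clock frequency only by shortening the single slowest register-to-register path.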
Because different register implementations may be used, the storage circuit in the pipelined routing resource is referred to herein as a pipeline storage element, so as not to unnecessarily obscure the embodiments.

Multiplexer 308 can enable pipelined routing resource 300 to be used in either a non-pipelined mode or a pipelined register mode. In the non-pipelined mode, the output of multiplexer 308 selects the direct output of multiplexer 302. In the pipelined register mode, multiplexer 308 can select the output of register 306. Multiplexer 308 can provide its output to driver circuit 304, and the output of driver circuit 304 can be used to drive a routing wire. The routing wire may span multiple functional blocks (e.g., for a pipelined routing resource with a DIM). Alternatively, the routing wire may be inside a functional block (e.g., for a pipelined routing resource with an FBIM).

Every DIM/FBIM may include a register such as register 306 so that all routing multiplexers are pipelined. However, in some embodiments that may be unnecessary, as the capabilities provided may exceed design requirements. Thus, in some embodiments only a fraction (such as half or a quarter) of the routing multiplexers may be pipelined. For example, a signal may take 150 picoseconds (ps) to traverse a wire of a given length, but the clock signal may be constrained to operate with a 650 ps clock cycle. Thus, providing a pipeline register such as register 306 on every fourth wire may be sufficient in this example. Alternatively, registers can be placed more frequently than every fourth wire (e.g., on every second wire) to provide a higher degree of freedom in the choice of which registers to use.

Pipelined routing resources such as pipelined routing resource 300 may facilitate register retiming operations, such as the register retiming illustrated in FIG. 2. For example, consider the scenario in which register 230 is implemented by a first instance of a pipelined routing resource operating in the pipelined register mode (i.e., register 230 is implemented by register 306 of a first instance of pipelined routing resource 300). Consider further that the path from register 220 through combinational logic 245 to register 230 includes a second instance of a pipelined routing resource operating in the non-pipelined mode. Thus, switching the first instance of the pipelined routing resource from operating in the pipelined register mode to operating in the non-pipelined mode and switching the second instance of the pipelined routing resource from operating in the non-pipelined mode to operating in the pipelined register mode may transform the first version of the circuit design into the second version of the circuit design presented in FIG. 2.

Computer-aided design (CAD) tools in a circuit design system can evaluate whether register retiming can improve the performance of the current version of a circuit design, or whether the current version of the circuit design meets a given performance criterion. If desired, and if the CAD tools determine that register retiming will improve the performance of the current version of the circuit design, or that the current version of the circuit design does not meet the given performance criterion, the CAD tools can perform a register retiming operation that transforms the current version of the circuit design into another version of the circuit design (e.g., as illustrated in FIG. 2).

An illustrative circuit design system 400 in accordance with an embodiment is shown in FIG. 4.
Circuit design system 400 can be implemented on integrated circuit design computing equipment. For example, system 400 can be based on one or more processors such as personal computers, workstations, or the like. The processors can be linked using a network (e.g., a local area network or a wide area network). Memory in these computers, or external memory and storage devices such as internal and/or external hard disks, can be used to store instructions and data.

Software-based components such as computer-aided design tools 420 and databases 430 reside on system 400. During operation, executable software such as the software of computer-aided design tools 420 runs on the processors of system 400. Databases 430 are used to store data for the operation of system 400. In general, software and data can be stored on any computer-readable medium (storage device) in system 400. Such storage devices may include computer memory chips, removable and fixed media such as hard disk drives, flash memory, compact discs (CDs), digital versatile discs (DVDs), Blu-ray discs (BDs), other optical media, floppy disks, magnetic tape, or any other suitable memory or storage device. When the software of system 400 is installed, the storage of system 400 holds instructions and data that cause the computing equipment in system 400 to perform various methods (processes). In performing these processes, the computing equipment is configured to implement the functions of the circuit design system.

Computer-aided design (CAD) tools 420, some or all of which are sometimes referred to collectively as CAD tools, circuit design tools, or electronic design automation (EDA) tools, may be provided by a single vendor or by multiple vendors. Tools 420 can be provided as one or more suites of tools (e.g., a compiler suite for performing tasks associated with implementing a circuit design in a programmable logic device) and/or as one or more separate software components (tools). Databases 430 can include one or more databases that are accessed only by one or more particular tools, and can include one or more shared databases. A shared database can be accessed by multiple tools. For example, a first tool can store data for a second tool in the shared database. The second tool can access the shared database to retrieve the data stored by the first tool. This allows one tool to pass information to another tool. Tools can also pass information between each other without storing information in a shared database if desired.

Illustrative computer-aided design tools 520 that can be used in a circuit design system such as circuit design system 400 of FIG. 4 are shown in FIG. 5.

The design process can begin with the development of a functional specification for the integrated circuit design (e.g., a functional or behavioral description of the integrated circuit design). A circuit designer can use design and constraint entry tools 564 to specify the functional operation of the desired circuit design. Design and constraint entry tools 564 can include tools such as a design and constraint entry aid 566 and a design editor 568. Design and constraint entry aids such as aid 566 can be used to help the circuit designer locate a desired design from a library of existing circuit designs and can provide computer-aided assistance to the circuit designer for entering (specifying) the desired circuit design.

As an example, design and constraint entry aid 566 can be used to present screens of options for a user.
The user can click on the on-screen options to select whether the circuit being designed should have certain features. Design editor 568 can be used to enter a design (e.g., by entering lines of hardware description language code), can be used to edit a design obtained from a library (e.g., using a design and constraint entry aid), or can assist the user in selecting and editing appropriate prepackaged code/designs.

Design and constraint entry tools 564 can be used to allow a circuit designer to provide a desired circuit design using any suitable format. For example, design and constraint entry tools 564 can include tools that allow the circuit designer to enter a circuit design using truth tables. Truth tables can be specified using text files or timing diagrams and can be imported from a library. Truth table circuit design and constraint entry can be used for a portion of a large circuit or for an entire circuit.

As another example, design and constraint entry tools 564 can include a schematic capture tool. A schematic capture tool can allow the circuit designer to visually construct integrated circuit designs from constituent parts such as logic gates and groups of logic gates. Libraries of preexisting integrated circuit designs can be used to allow a desired portion of a design to be imported with the schematic capture tool.

If desired, design and constraint entry tools 564 can allow the circuit designer to provide a circuit design to circuit design system 400 using a hardware description language such as Verilog Hardware Description Language (Verilog HDL), Very High Speed Integrated Circuit Hardware Description Language (VHDL), or SystemVerilog, or a higher-level circuit description language such as OpenCL or SystemC, to name a few. The designer of the integrated circuit design can enter the circuit design by writing hardware description language code with editor 568. Blocks of code can be imported from user-maintained or commercial libraries if desired.

After the design has been entered using design and constraint entry tools 564, behavioral simulation tools 572 can be used to simulate the functional performance of the circuit design. If the functional performance of the design is incomplete or incorrect, the circuit designer can make changes to the circuit design using design and constraint entry tools 564. The functional operation of the new circuit design can be verified using behavioral simulation tools 572 before synthesis operations have been performed using tools 574. Simulation tools such as behavioral simulation tools 572 can also be used at other stages in the design flow if desired (e.g., after logic synthesis). The output of behavioral simulation tools 572 can be provided to the circuit designer in any suitable format (e.g., truth tables, timing diagrams, etc.).

Once the functional operation of the circuit design has been determined to be satisfactory, logic synthesis and optimization tools 574 can generate a gate-level netlist of the circuit design, for example using gates from a particular library pertaining to a target process supported by a manufacturer that has been selected to produce the integrated circuit. Alternatively, logic synthesis and optimization tools 574 can generate a gate-level netlist of the circuit design using gates of a target programmable logic device (i.e., in the logic and interconnect resources of a particular programmable logic device product or product family).
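For illustration, a gate-level netlist of the kind produced by such synthesis tools can be represented with a very simple data structure. The sketch below is an assumption made for this description (the gate, net, and register names are invented and do not come from the text):

```python
# Illustrative only: a minimal gate-level netlist representation. A real
# synthesis tool would attach technology-mapping and timing data to each element.
netlist = {
    "inputs": ["a", "b"],
    "gates": [
        {"name": "g1", "type": "NAND", "inputs": ["b", "q1"], "output": "n1"},
        {"name": "g2", "type": "AND",  "inputs": ["q1", "q2"], "output": "n2"},
        {"name": "g3", "type": "AND",  "inputs": ["n1", "n2"], "output": "n3"},
    ],
    "registers": [
        {"name": "r1", "d": "a",  "q": "q1"},  # d = data input net, q = output net
        {"name": "r2", "d": "a",  "q": "q2"},
        {"name": "r3", "d": "n3", "q": "out"},
    ],
}
```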
Logic synthesis and optimization tools 574 can optimize the design by making appropriate selections of hardware to implement different logic functions in the circuit design based on the circuit design data and constraint data entered by the logic designer using tools 564. As an example, logic synthesis and optimization tools 574 can perform multi-level logic optimization and technology mapping based on the lengths of combinational paths between registers in the circuit design and the corresponding timing constraints entered by the logic designer using tools 564.

After logic synthesis and optimization using tools 574, the circuit design system can use tools such as placement, routing, and physical synthesis tools 576 to perform physical design steps (layout synthesis operations). Tools 576 can be used to determine where to place each gate of the gate-level netlist generated by tools 574. For example, if two counters interact with each other, tools 576 may locate these counters in adjacent regions to reduce interconnect delays or to satisfy timing requirements specifying a maximum permissible interconnect delay. Tools 576 create orderly and efficient implementations of circuit designs for any target integrated circuit (e.g., for a given programmable integrated circuit such as a field-programmable gate array (FPGA)).

Tools such as tools 574 and 576 may be part of a compiler suite (e.g., part of a suite of compiler tools provided by a programmable logic device vendor). In some embodiments, tools such as tools 574, 576, and 578 may also include timing analysis tools such as timing estimators. This allows tools 574 and 576 to satisfy performance requirements (e.g., timing requirements) before actually producing the integrated circuit.

As an example, tools 574 and 576 can perform register retiming by moving registers through combinational logic (e.g., through logic AND, OR, and XOR gates or other suitable gates, look-up tables (LUTs), multiplexers, arithmetic operators, etc.). As illustrated in FIG. 2, tools 574 and 576 can push registers forward or backward across combinational logic. If desired, tools 574 and 576 can perform forward and backward pushes of registers by configuring pipelined routing resources such as pipelined routing resource 300 of FIG. 3 to operate in the non-pipelined mode or in the pipelined register mode. Physical synthesis tools 576 used in this way can therefore also be used to perform register retiming. However, performing retiming operations on pipelined routing resources is merely illustrative. If desired, physical synthesis tools 576 can perform retiming operations on any suitable register.

After an implementation of the desired circuit design has been generated using tools 576, the implementation of the design can be analyzed and tested using analysis tools 578. For example, analysis tools 578 can include timing analysis tools, power analysis tools, or formal verification tools, to name a few.

After satisfactory optimization operations have been performed using tools 520, and depending on the target integrated circuit technology, tools 520 can produce a mask-level layout description of the integrated circuit or configuration data for programming the programmable logic device.

Illustrative operations involved in using tools 520 of FIG. 5 to produce the mask-level layout description of the integrated circuit are shown in FIG. 6.
As shown in FIG. 6, the circuit designer can first provide a design specification 602. Design specification 602 can, in general, be a behavioral description provided in the form of application code (e.g., C code, C++ code, SystemC code, OpenCL code, etc.). In some cases, the design specification can be provided in the form of a register transfer level (RTL) description 606.

The RTL description can have any form of describing circuit functions at the register transfer level. For example, the RTL description can be provided using a hardware description language such as Verilog Hardware Description Language (Verilog HDL or Verilog), SystemVerilog Hardware Description Language (SystemVerilog HDL or SystemVerilog), or Very High Speed Integrated Circuit Hardware Description Language (VHDL). Some or all of the RTL description can be provided as a schematic representation if desired.

In general, behavioral design specification 602 can include untimed or partially timed functional code (i.e., the application code does not describe cycle-by-cycle hardware behavior), while RTL description 606 can include a fully timed design description that details the cycle-by-cycle behavior of the circuit at the register transfer level.

Design specification 602 or RTL description 606 may also include target criteria such as area usage, power consumption, delay minimization, clock frequency optimization, or any combination thereof. Optimization constraints and target criteria may be collectively referred to as constraints.

Constraints can be provided for individual data paths, portions of individual data paths, portions of a design, or for an entire design. For example, constraints may be provided in design specification 602, in RTL description 606 (e.g., as a pragma or as an assertion), in a constraint file, or through user input (e.g., using design and constraint entry tools 564 of FIG. 5), to name a few.

At step 604, behavioral synthesis (sometimes referred to as algorithmic synthesis) may be performed to convert the behavioral description into RTL description 606. Step 604 may be skipped if the design specification is already provided in the form of an RTL description.

At step 618, behavioral simulation tools 572 may perform an RTL simulation of the RTL description, which may verify the functionality of the RTL description. If the functionality described by the RTL is incomplete or incorrect, the circuit designer can make changes to the HDL code (as an example). During RTL simulation 618, actual results obtained from simulating the behavior described by the RTL can be compared with expected results.

During step 608, logic synthesis operations may generate a gate-level description 610 using logic synthesis and optimization tools 574 of FIG. 5. If desired, the logic synthesis operation can perform register retiming as illustrated in FIG. 2 according to the constraints included in design specification 602 or RTL description 606. The output of logic synthesis 608 is gate-level description 610.

During step 612, placement operations using, for example, placement tools 576 of FIG. 5 can place the different gates of gate-level description 610 in preferred locations on the target integrated circuit so as to meet given target criteria (e.g., minimizing area and maximizing routing efficiency, or minimizing path delays and maximizing clock frequency, or any combination thereof).
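The placement objective can be illustrated with a deliberately tiny example. The following Python sketch is not how tools 576 work internally; it merely shows the flavor of the optimization by exhaustively assigning three hypothetical cells (names invented here) to three sites and minimizing total Manhattan wirelength as a stand-in for interconnect delay.

```python
from itertools import permutations

# Illustrative only: a toy placement step. Connected cells should land on
# nearby sites so the wires between them (and hence their delays) stay short.
connections = [("counter_a", "counter_b"), ("counter_b", "io_block")]
cells = ["counter_a", "counter_b", "io_block"]
sites = [(0, 0), (0, 1), (3, 4)]                # (x, y) grid locations

def total_wirelength(placement):
    """Sum of Manhattan distances over all connections in the netlist."""
    return sum(abs(placement[u][0] - placement[v][0]) +
               abs(placement[u][1] - placement[v][1])
               for u, v in connections)

best = min((dict(zip(cells, assignment)) for assignment in permutations(sites)),
           key=total_wirelength)
print(best, "-> wirelength", total_wirelength(best))
```

A production placer optimizes far richer cost functions (timing, congestion, legality) over millions of cells, but the underlying idea of minimizing a cost subject to legal placement constraints is the same.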
The output of placement 612 is a placed gate-level description 613 that satisfies the legal placement constraints of the underlying target device.

During step 615, routing operations using, for example, routing tools 576 of FIG. 5 may connect the gates of placed gate-level description 613. Routing operations may attempt to satisfy given target criteria (e.g., minimizing congestion, minimizing path delays and maximizing clock frequency, or any combination thereof). The output of routing 615 is a mask-level layout description 616 (sometimes referred to as routed gate-level description 616).

While placement and routing are being performed at steps 612 and 615, physical synthesis operations 617 may be performed concurrently to further modify and optimize the circuit design (e.g., using physical synthesis tools 576 of FIG. 5). If desired, register retiming operations may be performed during physical synthesis step 617. For example, registers in placed gate-level description 613 or in routed gate-level description 616 may be moved around according to the constraints included in design specification 602 or RTL description 606.

As an example, a register retiming operation may change the configuration of some pipelined routing resources (e.g., some instances of pipelined routing resource 300 of FIG. 3) from operating in the pipelined register mode to operating in the non-pipelined mode and may change the configuration of other pipelined routing resources (e.g., other instances of pipelined routing resource 300 of FIG. 3) from operating in the non-pipelined mode to operating in the pipelined register mode. To avoid obscuring the embodiments, such changes in the state of pipelined routing resources may be referred to simply as the movement (e.g., motion) of pipeline registers (e.g., of pipelined routing resources operating in the pipelined register mode). However, performing register retiming operations on pipelined routing resources is merely illustrative. As mentioned previously, retiming operations may be performed on any suitable type of routing resource (e.g., any suitable type of register, a register in a LAB or MLAB, etc.).

For example, in a first scenario, changing the configuration of a given pipelined routing resource from operating in the pipelined register mode to operating in the non-pipelined mode may be referred to as removing a pipeline register. In a second scenario, changing the configuration of another given pipelined routing resource from operating in the non-pipelined mode to operating in the pipelined register mode may be referred to as adding a pipeline register. When the first and second scenarios correspond to each other (e.g., occur simultaneously), a pipeline register may be referred to as being moved from the location of the removed pipeline register to the location of the added pipeline register.

In accordance with embodiments of the present invention, pipeline registers (e.g., registers 306 in FIG. 3) within an integrated circuit (e.g., PLD 100) may collectively have multiple possible states (e.g., during power-up, during reset, during normal operation, etc.). For example, one possible state may be a state in which all of the pipeline registers within the integrated circuit have a value of "0" (e.g., store a value of "0").
Another possible state may be a state in which all of the pipeline registers within the integrated circuit have a value of "1" (e.g., store a value of "1"). In yet another example, a first group of pipeline registers can store a value of "0" and a second group of pipeline registers can store a value of "1." These examples are merely illustrative. If desired, any set of states can be stored in the pipeline registers within the integrated circuit at any given time.

Moreover, the use and modification of pipeline registers as described in connection with FIG. 6 (and any other related figures) are merely illustrative. Any suitable type of register can be used and modified (e.g., during retiming operations) if desired.

FIG. 7 shows an illustration of a number of possible states associated with exemplary data storage registers within an integrated circuit (e.g., PLD 100). The registers of FIG. 7 may be general user registers or pipeline registers that help improve the performance of device 100. In particular, the possible states associated with the registers (a set of pipeline registers 306 within PLD 100, a set of any registers within PLD 100, a set of any registers within a logic circuit, etc.) may include states S1, S2, S3, S4, S10, S11, and S12 and a reset state.

An arrow pointing from a first given state to a second given state may indicate a possible transition from the first given state to the second given state. Possible transitions can occur when one or more clock signals clock the registers (e.g., after a clock cycle of the clock signal, at a rising edge of the clock signal, at a falling edge of the clock signal, at both the rising and falling edges of the clock signal, etc.) to help the registers latch incoming state values. In other words, during a single state transition, the primary inputs can take a new set of values, and the registers can be clocked for a single clock cycle to provide a new set of values for the registers to hold. As an example, a first rising clock edge may cause the registers to transition from state S12 to state S1. As another example, a second rising clock edge may cause a self-loop transition, such as when state S11 does not transition (e.g., state S11 is maintained).

The arrows can be directional. For example, a transition from state S12 to state S1 may occur as indicated by the direction of the arrow linking the two states. In contrast, a transition from state S1 to state S12 may be impossible because no arrow points in that direction. Furthermore, a transition from state S12 to state S11 may also be impossible because no arrow directly links the two states. However, this is merely illustrative. If desired, the transitions possible between states may be based on a given set of registers, which can have any suitable behavior. For example, states S3 and S4 can transition to each other.

Power-up circuitry or startup circuitry (e.g., initialization circuitry) within integrated circuit 100 can power up the registers, leaving the registers in an unknown state. To operate the integrated circuit, it may be desirable to reset the registers to a known state upon device startup. The known initialization state may be referred to as a reset state (e.g., reset state 700). To reach reset state 700, one or more clock signals can clock the registers through a reset sequence (e.g., a particular set of transitions between possible states). If desired, initialization circuitry can be used to provide the reset sequence to the registers.
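The notion of a reset sequence can be made concrete by searching a state-transition table for an input word that drives every possible power-up state to the reset state. The transition table below is invented for this sketch (FIG. 7 does not specify its transitions); only the search procedure is the point.

```python
from itertools import product

# Hypothetical transition table in the spirit of FIG. 7:
# delta[state][input_value] -> next state. Invented for illustration only.
delta = {
    "S1":    {0: "S2",    1: "S3"},
    "S2":    {0: "S3",    1: "S3"},
    "S3":    {0: "RESET", 1: "S4"},
    "S4":    {0: "RESET", 1: "S4"},
    "S10":   {0: "S11",   1: "S12"},
    "S11":   {0: "S10",   1: "S12"},
    "S12":   {0: "S11",   1: "S1"},
    "RESET": {0: "S1",    1: "S3"},
}

def find_reset_sequence(max_length=6):
    """Brute-force search for an input word that takes *every* state to RESET,
    i.e., a reset sequence that works regardless of the power-up state."""
    for length in range(1, max_length + 1):
        for word in product([0, 1], repeat=length):
            states = set(delta)                     # registers may power up anywhere
            for bit in word:
                states = {delta[s][bit] for s in states}
            if states == {"RESET"}:
                return word
    return None

print("reset sequence found:", find_reset_sequence())
```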
The reset sequence may be a set of transitions that ensures that reset state 700 is reached regardless of the state into which the registers power up. For example, from some initial states the reset sequence may transition through state S3 before making the final transition to reset state 700. As another example, the reset sequence may cause the registers to transition from state S11 to reset state 700. This is merely illustrative. Any reset sequence can be used to bring registers (e.g., pipeline registers) to reset state 700, if desired.

Upon reaching reset state 700, the registers can operate in a first set of states referred to as legal states, such as legal states 702. In other words, only the legal states can be accessed by the registers after the registers are reset. States that cannot be accessed after a reset operation are referred to as illegal states, such as illegal states 704. For example, upon reaching reset state 700, all further transitions from reset state 700 may cycle among states S1, S2, S3, S4, and the reset state (collectively, legal states 702). In other words, there may be no transition from any of legal states 702 to illegal states 704 (e.g., states S10, S11, and S12).

The illustration of FIG. 7 applies to the registers in a given configuration. If the configuration changes, the illustration of FIG. 7 changes accordingly. For example, tools 576 of FIG. 5 can perform a retiming operation that changes the configuration of the registers (e.g., moves some of the registers across combinational logic). The number of registers may itself differ in the retimed circuit, which implies that the number of states may differ in the retimed circuit. Therefore, the original reset sequence may fail to account for the configuration change and may not properly reset the registers in the changed configuration to the reset state.

In particular, FIG. 8A shows an illustrative circuit within integrated circuit 100, such as circuit 800, that can be retimed to operate more efficiently as described in connection with FIG. 2. Circuit 800 can include a logic NAND gate 802, logic AND gates 804 and 806, and registers 810, 812, and 814. One or more of registers 810, 812, and 814 can be pipeline registers.

Logic NAND gate 802 can have a first input coupled to input b. Logic NAND gate 802 can have a second input coupled to the output of register 810. An input terminal (e.g., a data input terminal) of register 810 can be coupled to input a. AND gate 804 can have two input terminals. A first input terminal of AND gate 804 can be coupled to the output terminal of register 810. A second input terminal of AND gate 804 can be coupled to the output of register 812. An input terminal (e.g., a data input terminal) of register 812 can be coupled to input a. AND gate 806 can have two input terminals. A first input terminal of AND gate 806 can be coupled to the output terminal of NAND gate 802. A second input terminal of AND gate 806 can be coupled to the output terminal of AND gate 804. Register 814 can receive the output of AND gate 806 and provide output h. Registers 810, 812, and 814 can be clocked using a given clock signal (e.g., registers 810, 812, and 814 can be part of the same clock domain). This is merely illustrative. Any suitable clock configuration can be used if desired.

The power-up process for circuit 800 can be performed using initialization circuitry (not shown) within integrated circuit 100.
The initialization circuitry can also use a reset sequence to perform a reset operation. As described in connection with FIG. 7, the registers may hold unknown states after power-up and prior to reset. For example, register 810 may hold a value of "0", register 812 may hold a value of "0", and register 814 may hold a value of "1." Alternatively, registers 810, 812, and 814 can power up into any other state. Registers 810, 812, and 814 can be reset using a reset sequence that provides known reset-state values to registers 810, 812, and 814. For example, circuit 800 can have a reset state in which register 810 holds a value of "0", register 812 holds a value of "0", and register 814 holds a value of "0". The associated reset sequence that brings circuit 800 to the reset state includes a single transition clocked by a single clock cycle of the clock signal. In particular, during the single clock cycle, NAND gate 802 can receive a value of "1" from input b while registers 810 and 812 receive a value of "0" from input a.

Registers 810 and 812 hold known values of "0" after the single clock cycle. Register 814 is also in a known state of "0", because register 814 will hold a value of "0" after the single clock cycle regardless of the previous values stored in registers 810 and 812. For example, as previously described, if both registers 810 and 812 hold a value of "0" before the single clock cycle, AND gate 806 will receive a value of "1" at its first input terminal and a value of "0" at its second input terminal, and will provide a value of "0" to register 814 during the single clock cycle. The other cases are omitted to avoid unnecessarily obscuring the present embodiment.

In an embodiment, tools 576 can perform retiming on circuit 800 of FIG. 8A to move register 810 across node 820 (e.g., fanout node 820) as indicated by dashed arrow 822. After the retiming operation, circuit 800 of FIG. 8A is transformed into retimed circuit 800' of FIG. 8B.

As a result of the retiming, register 810 is removed from retimed circuit 800' and replaced by registers 810-1 and 810-2. The second input terminal of NAND gate 802 can be coupled to the output terminal of register 810-1. The data input terminal of register 810-1 can receive the value from input a. The first input terminal of AND gate 804 can be coupled to the output terminal of register 810-2. The data input terminal of register 810-2 can receive the value from input a.

As previously described in connection with resetting register 814, register 814 is at a known value of "0" after a single clock cycle regardless of the previous values stored in registers 810 and 812. In other words, the value stored in register 814 is deterministic after a single clock cycle. However, in retimed circuit 800', the value stored in register 814 may not be deterministic after a single clock cycle. For example, registers 810-1, 810-2, and 812 may hold logic values of "0", "1", and "1", respectively, after power-up and prior to a reset operation. In that case, after the single-clock-cycle reset depicted in FIG. 8A, AND gate 806 receives a logic value of "1" at its first input and a logic value of "1" at its second input. Consequently, register 814 holds a logic value of "1" after the single cycle, and retimed circuit 800' is not in the reset state.
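The determinism argument above can be checked mechanically. The following Python sketch simulates one reset clock cycle of both circuits from every possible power-up state; the gate-level connectivity is taken from the description of FIGS. 8A and 8B, while the function names and the exhaustive-enumeration approach are merely this sketch's conventions.

```python
from itertools import product

def clock_800(q810, q812, q814, a, b):
    """One clock cycle of original circuit 800 (FIG. 8A); returns next register values."""
    nand802 = 1 - (b & q810)           # NAND gate 802: inputs b and register 810
    and804 = q810 & q812               # AND gate 804: registers 810 and 812
    and806 = nand802 & and804          # AND gate 806 feeds register 814
    return (a, a, and806)              # registers 810, 812, 814 latch on the edge

def clock_800p(q810_1, q810_2, q812, q814, a, b):
    """One clock cycle of retimed circuit 800' (FIG. 8B): register 810 is
    replaced by registers 810-1 (feeding NAND 802) and 810-2 (feeding AND 804)."""
    nand802 = 1 - (b & q810_1)
    and804 = q810_2 & q812
    and806 = nand802 & and804
    return (a, a, a, and806)

# The single-clock-cycle reset sequence: a = 0, b = 1.
after_800 = {clock_800(*state, a=0, b=1)[2] for state in product([0, 1], repeat=3)}
after_800p = {clock_800p(*state, a=0, b=1)[3] for state in product([0, 1], repeat=4)}
print("register 814 after reset, circuit 800: ", after_800)   # {0}: deterministic
print("register 814 after reset, circuit 800':", after_800p)  # {0, 1}: not reset

# Prepend a one-cycle adjustment sequence ("empty" cycle with arbitrary inputs):
after_adjusted = set()
for state in product([0, 1], repeat=4):
    for rand_a, rand_b in product([0, 1], repeat=2):     # random primary inputs
        warmed = clock_800p(*state, a=rand_a, b=rand_b)  # adjustment cycle
        after_adjusted.add(clock_800p(*warmed, a=0, b=1)[3])
print("register 814 after adjusted reset:     ", after_adjusted)  # {0} again
```

The final experiment anticipates the remedy discussed next: after one extra clock cycle, registers 810-1 and 810-2 necessarily agree, and the original one-cycle reset sequence becomes sufficient.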
Since retimed circuit 800' cannot reach the reset state using the single-clock-cycle reset sequence, due to at least one possible power-up state of retimed circuit 800', it would be desirable to provide a new reset sequence that can properly bring retimed circuit 800' to its intended reset state.

The problem with the single-clock-cycle reset sequence occurs when registers 810-1 and 810-2 do not hold/store the same value (e.g., when one holds a logic "0" and the other holds a logic "1"). If registers 810-1 and 810-2 store the same value, the value stored in register 814 is deterministic, as in circuit 800. To provide the same value to registers 810-1 and 810-2, an additional (more precisely, prepended) adjustment sequence can be added to the single-clock-cycle reset sequence. In other words, the single-clock-cycle reset sequence can be delayed by the clock cycles of the adjustment sequence.

The adjustment sequence may have a given length (e.g., an adjustment length, or a length based on an adjustment value) that specifies a number of transitions between states. In other words, the given length determines the number of clock cycles through which the retimed circuitry (e.g., retimed circuit 800') is clocked to provide logic values to the registers to help reset the registers. For example, the given length can be calculated based on the types of circuits within integrated circuit 100. As described in detail in subsequent embodiments, the calculation process can involve characterizing different circuits within integrated circuit 100 by their types.

To properly reset retimed circuit 800', the given length can be calculated to be equal to one. During that number of clock cycles (e.g., while the adjustment sequence is implemented), random values (e.g., logic values of "0" and/or "1") can be provided to the registers (e.g., registers 810-1, 810-2, and 812) by applying random values to the primary inputs. Because the input terminals of both registers 810-1 and 810-2 are coupled to fanout node 820 (which is coupled to input a), the same random logic value of "1" or "0" provided at input a during the one-clock-cycle adjustment sequence is provided to both registers 810-1 and 810-2. Thus, after the adjustment sequence is implemented using the initialization circuitry, the single-clock-cycle reset sequence can properly reset retimed circuit 800'.

The adjustment sequence depends substantially on the number of clock cycles rather than on the primary data inputs (e.g., the data provided to the registers). In other words, the registers are allowed to receive random primary input values throughout the adjustment sequence. Therefore, the adjustment sequence may be referred to as a number of "empty" clock cycles.

A reset sequence with a prepended adjustment sequence may be referred to as an adjusted reset sequence or a retimed reset sequence. The example of an adjustment sequence having a length of one is merely illustrative. The adjustment sequence can have any suitable length if desired. The adjustment sequence can also provide non-random values if desired.

According to an embodiment, circuitry such as circuit 800 shown in FIG. 8A may be referred to herein as an "original" circuit (e.g., a circuit before retiming or, more generally, before any optimization that moves registers, such as register merging, register duplication, etc.).
The reset sequence associated with an original circuit may be referred to herein as an "original" reset sequence. A circuit that has been modified, such as circuit 800' shown in FIG. 8B, may be referred to herein as a retimed circuit (e.g., a circuit after retiming), and a reset sequence associated with a retimed circuit may be referred to herein as a "retimed" reset sequence or, to refer generally to various types of register movement, an "adjusted" reset sequence.

In particular, a circuit (e.g., circuit 800') may be referred to herein as a c-cycle delayed version of the original circuit (e.g., circuit 800), since an adjustment sequence of c clock cycles may be used to delay the original reset sequence to generate the adjusted reset sequence. For example, circuit 800' may be a one-cycle delayed version of circuit 800 because an adjustment sequence of one clock cycle may be used to delay the original reset sequence to generate the adjusted reset sequence.

Further details on generating a c-cycle delayed reset sequence are provided in U.S. Patent Application Serial No. 15/342,286, which is hereby incorporated by reference in its entirety. In addition, U.S. Patent Application Serial No. 15/352,487, entitled "METHODS FOR BOUNDING THE NUMBER OF DELAYED RESET CLOCK CYCLES FOR RETIMED CIRCUITS", provides additional details on generating a c-cycle delayed reset sequence with a user-specified maximum c value or adjustment value, and is thus also incorporated by reference in its entirety. Additional details relating to verification of the generated c-cycle delayed reset sequence are provided in U.S. Patent Application Serial No. 15/354,809, which is likewise incorporated by reference in its entirety.

FIG. 9 shows a flow chart of illustrative steps for resetting retimed circuitry (e.g., retimed circuit 800' of FIG. 8B).

At step 900, a configuration device can perform device configuration of a target device (e.g., a programmable integrated circuit). As an example, a logic design system (e.g., CAD tools 420 of FIG. 4) can provide configuration data to a configuration data loading device, which then uses a configuration device (e.g., a configuration system, configuration circuitry, etc.) to provide the configuration data to the target device, thereby performing device configuration. If desired, the configuration device can be part of or separate from the target device.

At step 902, an initialization module (e.g., an automatic initialization module) can unfreeze logic circuitry within the target device. In other words, by unfreezing the logic circuitry, the initialization module can transition the target device into user mode. For example, the initialization module can be within the target device (e.g., the programmable integrated circuit).

At step 904, the initialization module may perform c-cycle initialization (i.e., provide the adjustment sequence) on the configured target device, as described in connection with FIGS. 8A and 8B. To perform c-cycle initialization, the initialization module may first receive (e.g., from the configuration device) the length of the adjustment sequence (e.g., the number of "empty" clock cycles) that ensures a proper reset of the retimed circuitry on the configured target device.
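As a sketch of what step 904 amounts to, the following Python fragment prepends c "empty" clock cycles to an original reset sequence. The function and parameter names are invented for this illustration, and clock_fn stands in for one clock cycle of the retimed circuitry (for instance, the clock_800p model sketched earlier could be wrapped to fit this interface).

```python
import random

def apply_adjusted_reset(clock_fn, state, c, original_reset_inputs):
    """Illustrative c-cycle initialization: clock the circuit for c "empty"
    cycles with arbitrary primary inputs, then apply the original reset
    sequence (one input assignment per clock cycle)."""
    for _ in range(c):                                   # the adjustment sequence
        random_inputs = {"a": random.randint(0, 1), "b": random.randint(0, 1)}
        state = clock_fn(state, random_inputs)
    for inputs in original_reset_inputs:                 # the original reset sequence
        state = clock_fn(state, inputs)
    return state                                         # should now be the reset state
```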
As an example, the determination of the adjustment sequence and the performance of the c-cycle initialization can be hidden from the user. If desired, the reset sequence (and in particular the adjustment sequence) can be determined without any user input (e.g., without any restrictions placed by the user). Alternatively, the user may provide a maximum reset sequence length to CAD tools 420, which generate an adjustment sequence that adheres to the maximum reset sequence length constraint. Moreover, after CAD tools 420 generate the adjustment sequence and the corresponding adjusted reset sequence, verification circuitry can verify that the adjustment sequence and the corresponding adjusted reset sequence properly reset the retimed circuitry.

At step 906, the user may provide a user-defined reset sequence for the retimed logic design configured on the target device. In other words, the reset initialization circuitry can provide the user-specified reset sequence to the configured target device. After the adjustment sequence is prepended to the user-defined reset sequence, the retimed circuitry within the configured target device can be properly reset. For example, the user may perform the user-defined reset only after the c-cycle initialization of step 904 has been performed.

At step 908, after the adjusted reset sequence, including the adjustment sequence and the user-specified reset sequence, has been provided to the configured target device, the reset initialization circuitry may exit the user reset period and operate the logic design configured on the target device in the legal states described in connection with FIG. 7. In other words, the reset initialization circuitry can provide a signal indicating that the configured integrated circuit is properly reset and can begin normal operation.

FIG. 10 shows illustrative reset initialization circuitry within PLD 100. The reset initialization circuitry can include configuration circuitry 1000, an automatic initialization module 1002, and user logic 1004. The reset initialization circuitry can be formed entirely from soft logic or entirely from hardwired logic. If desired, a portion of the reset initialization circuitry can be formed from soft logic, and a different portion of the reset initialization circuitry can be formed from hardwired logic.

In particular, configuration circuitry 1000 can be a secure device manager (SDM), a local partition manager, or any other type of hard control logic. Alternatively, configuration circuitry 1000 can be formed from soft logic (e.g., formed from configurable logic). Configuration circuitry 1000 can generate a signal (e.g., signal ConfigDone) indicating when the configuration of the target integrated circuit is complete. Alternatively, if another portion of PLD 100 monitors the configuration process of PLD 100 (e.g., step 900 of FIG. 9), configuration circuitry 1000 can receive the signal ConfigDone. Similarly, configuration circuitry 1000 can generate or receive a signal (e.g., signal InitDone) indicating when the unfreezing of logic within PLD 100 is complete, as described in step 902 of FIG. 9.

By noting when the configuration and unfreezing of PLD 100 are complete (e.g., when the respective signals ConfigDone and InitDone are asserted), configuration circuitry 1000 can indicate when soft automatic initialization module 1002 can begin performing c-cycle initialization by providing a signal Start to automatic initialization module 1002.
Additionally, configuration circuitry 1000 can receive from CAD tools 420 of FIG. 4, or internally generate, the adjustment sequence length (i.e., the number of "empty" clock cycles). The adjustment sequence length ci can be provided to the automatic initialization circuitry in parallel with, for example, the signal Start.

Automatic initialization module 1002 may sometimes be referred to herein as soft logic module 1002. While configuration circuitry 1000 can be formed using hard logic with dedicated functions, soft logic module 1002 can be programmed to perform the functions of the automatic initialization module. For example, automatic initialization module 1002 can be implemented automatically using programmable hardware within PLD 100 during configuration and initialization (e.g., unfreezing) of PLD 100. As an example, initialization module 1002 can be implemented on PLD 100 only when the user indicates (using a user input or an input signal) that performing c-cycle initialization is desired. As another example, automatic initialization module 1002 may be implemented automatically on PLD 100 when design tools 420 calculate a count value c greater than zero (indicating a potentially improper reset).

User logic 1004 can be in communication with automatic initialization module 1002. In particular, user logic 1004 can provide signal Reqi to automatic initialization module 1002 to selectively perform a functional reset of registers within PLD 100. A functional reset can be different from the initial reset that follows the initial configuration and unfreezing of PLD 100. A functional reset may be triggered by, for example, a problem occurring during the operation of PLD 100 or by another suitable triggering event.

As a counterpart to signal Reqi sent from user logic 1004 to automatic initialization module 1002, output signal T0i (sometimes referred to herein as reset control signal T0i) may be provided from automatic initialization module 1002 to user logic 1004. In particular, signal T0i may indicate when a user-specified reset sequence may begin. In other words, when signal T0i is asserted, the registers within PLD 100 have completed the clocking of the prepended adjustment sequence and are ready to receive the original reset sequence (e.g., the user-specified reset sequence). Additionally, signal T0i can be asserted in response to signal Reqi. Specifically, signal T0i can remain asserted while signal Reqi is asserted. For example, signal T0i may be deasserted only after signal Reqi is deasserted.

The adjusted reset sequence and adjustment sequence can be provided on a per-clock-domain basis, as previously described in connection with generating a c-cycle delayed reset sequence. In other words, registers receiving a first clock signal can be processed separately from registers receiving a second, different clock signal (e.g., separately undergoing c-cycle initialization and separately being reset). However, both sets of registers can be processed or reset in parallel if desired. As shown in FIG. 10, user logic can be provided on a per-domain basis, where i denotes a particular clock domain. User logic 1004 for domain i may provide signal Clki (the clock signal for domain i) to automatic initialization module 1002.
In addition, signals ci, Reqi, and T0i can be generated on a per-domain basis.

As an example, design tools 420 can optimize the circuit design to be implemented on PLD 100 (e.g., by performing retiming operations, or by performing any other operations that can affect the reset of the circuit design implemented on PLD 100, etc.). If the optimization process (e.g., the retiming process) performed by design tools 420 yields a count value c greater than zero for any clock domain (indicating a possible improper reset of the associated circuitry), then a corresponding c-cycle delayed reset sequence can be implemented automatically for each such clock domain. In other words, if the register retiming requires an adjustment sequence, the adjustment sequence is inserted automatically (i.e., auto-inserted) so that the reset operation is performed properly.

FIG. 11 shows an illustrative automatic initialization module implemented within PLD 100. Automatic initialization module 1002 can include a c-cycle counter 1100. C-cycle counter 1100 can receive an adjustment sequence length ci (sometimes referred to herein as c-cycle length ci). C-cycle counter 1100 can count down from c-cycle length ci based on the clock signal Clki received from user logic 1004. As previously described, c-cycle length ci may correspond to the same domain i as clock signal Clki.

In a first mode of operation, synchronization circuit 1102-1 can receive a signal Start (sometimes referred to herein as trigger signal Start or reset trigger signal Start) from configuration circuitry 1000. Synchronization circuit 1102-1 can synchronize reset trigger signal Start to clock signal Clki so that c-cycle counter 1100 correctly counts out adjustment sequence length ci (e.g., provides the correct number of "empty" clock cycles of clock signal Clki). After a plurality of clock cycles of clock signal Clki, synchronization circuit 1102-1 can provide a synchronized reset trigger signal Starti* indicating that c-cycle counter 1100 can begin the counting operation. In other words, synchronized reset trigger signal Starti* can cause c-cycle counter 1100 to start the counting operation.

At any given time, signal COUNT may provide the current count value of c-cycle counter 1100 to reset control circuit 1104. In particular, reset control circuit 1104 can provide reset control signal T0i to the user logic domain. When signal Reqi* is deasserted and signal COUNT provides a non-zero current count value, a deasserted output signal T0i can be provided to user logic 1004, indicating that the adjustment sequence is incomplete. When a current count value of zero is provided to reset control circuit 1104, reset control signal T0i can be asserted. Reset control circuit 1104 can perform a handshake protocol with the synchronized signal Reqi* for robust communication, which is described in detail below.

The automatic initialization module can further include a synchronization circuit 1102-2 that receives a reset request signal Reqi from user logic 1004. Similar to synchronization circuit 1102-1, synchronization circuit 1102-2 can synchronize user reset signal Reqi to clock signal Clki. Synchronization circuit 1102-2 can generate a correspondingly synchronized user reset signal Reqi*.
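The text does not specify how synchronization circuits 1102-1 and 1102-2 are built; a common structure for moving an asynchronous request into the Clki domain is a chain of two flip-flops, sketched below purely as one plausible implementation.

```python
class TwoFlopSynchronizer:
    """One conventional way to synchronize an asynchronous signal (e.g., Start
    or Reqi) into the Clki clock domain; an assumption, not FIG. 11's design."""

    def __init__(self):
        self.flop1 = 0
        self.flop2 = 0

    def clock(self, async_input):
        # Both flip-flops latch on the same Clki edge; the input reaches the
        # synchronized output (e.g., Starti* or Reqi*) two clock edges later.
        self.flop2, self.flop1 = self.flop1, async_input
        return self.flop2

# Usage: Start asserted asynchronously, observed at the output two edges later.
sync = TwoFlopSynchronizer()
print([sync.clock(start) for start in [1, 1, 1, 1]])  # [0, 1, 1, 1]
```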
C-cycle counter 1100 and reset control circuit 1104 can receive the synchronized signal Reqi*.

In a second mode of operation, a user reset request (e.g., using reset trigger signal Reqi, sometimes referred to herein as trigger signal Reqi or user reset request signal Reqi) may be provided to automatic initialization module 1002. In other words, an asserted reset trigger signal Reqi can be provided to automatic initialization module 1002. The corresponding synchronized reset trigger signal Reqi* may prompt c-cycle counter 1100 to start the adjustment sequence. In other words, c-cycle counter 1100 can perform a (down-)counting operation similar to the operation in the first mode of operation. Because the count is initiated by a user request, signals Reqi, Reqi*, and T0i can be used to communicate when each step of the reset operation is completed. This ordered communication may be referred to herein as a "handshake" or "handshaking" operation. For example, a handshake operation can be performed using the reset trigger signal and the reset control signal.

For example, when c-cycle counter 1100 completes the adjustment sequence (which was initiated by the synchronized version of request signal Reqi from user logic 1004), reset control circuit 1104 can communicate the completion to user logic 1004 by asserting signal T0i. The asserted signal T0i may indicate to user logic 1004 that the adjustment sequence has been completed and that the user-specified reset sequence may begin. After user logic 1004 receives the acknowledgment from automatic initialization module 1002, in the form of the asserted signal T0i, that the adjustment sequence has been completed, user logic 1004 may deassert request signal Reqi. The deassertion of request signal Reqi, by which user logic 1004 confirms the completion of the adjustment sequence, can be communicated to automatic initialization module 1002 (specifically, to reset control circuit 1104) using the synchronized signal Reqi*.

FIG. 12 shows illustrative user logic circuitry within the reset initialization circuitry of FIG. 10. User logic circuitry 1004 can include a user reset state machine 1200 and a clock generation circuit 1202. Clock generation circuit 1202 can generate clock signal Clki. In other words, clock generation circuit 1202 can generate one or more clock signals for different clock domains. Clock generation circuit 1202 can provide the generated clock signal Clki to automatic initialization module 1002 for synchronization operations, to the user logic circuitry as a clocking signal, to user reset state machine 1200 for any suitable use, etc. User reset state machine 1200 can generate user reset requests and implement reset sequences (e.g., adjustment sequences, original reset sequences, etc.).

User logic 1004 for domain i may include user logic circuit 1204 (e.g., the substantive portion of user logic 1004 that is implemented as the logic design on PLD 100). For example, user logic circuit 1204 can include retimed circuitry, registers, or any other desired circuitry. In particular, the user logic circuitry can include register 1206 and other registers in the same clock domain, as indicated by the ellipsis. User logic circuit 1204 may also include registers for other clock domains.
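The counter and handshake behavior described for the second (user-requested) mode of operation can be sketched behaviorally. The Python model below is an assumption of this description rather than FIG. 11's implementation: it collapses c-cycle counter 1100 and reset control circuit 1104 into one object and evaluates them once per Clki cycle.

```python
class AutoInitModule:
    """Behavioral sketch of automatic initialization module 1002 in the
    user-requested mode (FIG. 14): count down ci "empty" cycles once Reqi*
    is asserted, then assert T0i until Reqi* is deasserted."""

    def __init__(self, ci):
        self.ci = ci            # adjustment sequence length for domain i
        self.count = ci         # c-cycle counter 1100
        self.t0i = 0            # reset control signal from circuit 1104

    def clock(self, reqi_sync):
        """One Clki cycle with the synchronized request Reqi*."""
        if reqi_sync and self.count > 0:
            self.count -= 1                  # counting out the adjustment sequence
        if reqi_sync and self.count == 0:
            self.t0i = 1                     # adjustment sequence complete
        if not reqi_sync:
            self.t0i = 0                     # handshake: release after Reqi* drops
            self.count = self.ci             # reload for a later request
        return self.t0i

# Usage: a request held for six cycles with ci = 3.
module = AutoInitModule(ci=3)
print([module.clock(req) for req in [1, 1, 1, 1, 1, 1, 0, 0]])
# -> [0, 0, 1, 1, 1, 1, 0, 0]: T0i rises after the 3 "empty" cycles and falls
#    after the synchronized request is deasserted.
```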
As shown in FIG. 12, register 1206 can receive clock signal Clki (e.g., clock signal Clki can be used to clock register 1206 and additional registers within user logic circuit 1204). Additionally, register 1206 can have a primary input, input Res. Input Res may be referred to herein as a reset input, even though the actual input to register 1206 is uncorrelated during the adjustment sequence; properly resetting register 1206, for example, may rely only on the number of clock cycles through which register 1206 is clocked during the adjustment sequence. Input Res can receive signal Reseti from user reset state machine 1200. Signal Reseti can convey a random value during the adjustment sequence or a selected value during the user-specified reset sequence.

As an example, reset state machine 1200 can receive an error signal Error from user logic circuit 1204. Error generation circuitry can generate signal Error upon determining that an error may exist within user logic circuit 1204. The error may be, for example, the occurrence of an illegal state during normal operation of user logic circuit 1204. Dedicated error generation circuitry may be provided. However, portions of user logic circuit 1204 may generate corresponding error signals to be received at user reset state machine 1200 if desired.

In response to receiving an asserted signal Error, user reset state machine 1200 can assert reset request signal Reqi accordingly. The asserted reset request signal Reqi may prompt the automatic initialization module to help resolve the error by performing a reset operation.

Signal T0i may be asserted when reset control circuit 1104 within automatic initialization module 1002 has determined that the adjustment sequence has been implemented (i.e., that c-cycle initialization has completed). As an example, when signal T0i is asserted, user reset state machine 1200 can initiate any user-specified reset sequence for user logic circuit 1204. User reset state machine 1200 can also deassert Reqi to perform the handshake operation described in connection with FIG. 11.

FIG. 13 is a timing diagram showing an illustrative mode of operation of the reset initialization circuitry of FIG. 10. In particular, the mode of operation shown in FIG. 13 may occur, for example, after the initial configuration of PLD 100. At time t0, signal ConfigDone can be asserted to indicate that configuration of the programmable logic device based on the logic design provided by the user is complete. The time period before time t0 may therefore be referred to herein as a configuration period. Signal ConfigDone may be generated by configuration circuitry 1000 of FIG. 10 or may be received by configuration circuitry 1000 from monitoring circuitry within PLD 100.

At time t1, signal InitDone can be asserted to indicate that the PLD input and output interfaces are ready to interact with the user. In other words, signal InitDone may indicate that PLD 100 has entered user mode (e.g., that PLD 100 has been unfrozen). The time period between times t0 and t1 may thus be referred to herein as an unfreeze period or a transition period into user mode. Once the target device has entered user mode, the reset operation can begin to prepare the target device for normal operation within the legal states. To explicitly trigger the reset operation, trigger signal Start and count value signal Count can also be asserted at time t1.
As an example, the assertion of signals Start and Count can be triggered by, and follow, the assertion of signal InitDone.

At time t2, clock signal Clki may be provided from user logic 1004 to automatic initialization module 1002, as described in connection with FIG. 10. As described in connection with FIGS. 10 and 11, signal Count can begin at a value equal to c-cycle length ci. However, signal Count can begin to decrement only when the clock-synchronized start signal Starti* is asserted at time t3. Between times t2 and t3, synchronization circuitry 1102-1 can generate signal Starti* by synchronizing signal Start to clock signal Clki.

At time t3, signal Count can begin decrementing to begin the c-cycle initialization. For example, the c-cycle length can be four, so signal Count can start at a value of four. After the four-cycle adjustment sequence (at time t4), signal Count can have a value of zero. The time period between times t1 and t4 may be referred to herein as the c-cycle initialization period, which includes both the synchronization process and the actual counting operation.

Following time t4, signal T0i can be asserted to indicate the completion of the c-cycle initialization. As an example, user request signal Reqi can be automatically asserted throughout the configuration, unfreeze, and c-cycle initialization procedures, and can be deasserted once the c-cycle initialization has been completed. Since Reqi is deasserted at the end of the c-cycle initialization, indicating the absence of a user-initiated reset request, signal T0i can be asserted a single clock cycle after time t4, at time t5. In other words, a handshake operation can occur, as indicated by arrow 1300, during which signal T0i is responsively asserted based on the deassertion of request signal Reqi.

After the c-cycle initialization period, the user reset period can begin at time t4 and end at time t7. During this period, the user reset sequence can be appended to the adjustment sequence implemented during the c-cycle initialization period. In particular, the user reset sequence may begin at time t5, in response to signal T0i being asserted in the absence of any user reset request. The user reset sequence can end at time t6, as indicated by signal Reseti. In other words, signal Reseti may provide the adjustment sequence from time t3 to t4 and the user reset sequence from time t5 to t6 to user logic 1004. Signal Reseti may be provided within user logic 1004 (e.g., from user reset state machine 1200 to circuitry within user logic circuit 1204, as described in connection with FIG. 12). Normal operation of the user logic can begin at time t7. For example, between times t6 and t7, any other reset sequences or operations can be performed to bring user logic 1004 out of reset at time t7.

FIG. 14 is a timing diagram showing an additional illustrative mode of operation of the reset initialization circuitry of FIG. 10. In particular, the mode of operation shown in FIG. 14 may occur after a user reset request for PLD 100, which may arise, for example, during normal operation of the user logic implemented on PLD 100.

At time t0, an event may occur within the user logic that prompts (e.g., triggers) a reset of the user logic. At time t1, in response to the event, trigger signal Reqi may be asserted to provide a reset request from user logic 1004 to automatic initialization module 1002.
At time t2, the trigger signal Reqi can be synchronized to the clock signal Clki to assert the synchronized trigger signal Reqi*. Also at time t2, the signal Count can begin counting down during the c-cycle initialization period, similar to the mode of operation described in connection with FIG. 13. As an example, even during normal operation of user logic 1004, a count value of eight (indicating an eight-cycle adjustment sequence) may be retained within configuration circuitry 1000. However, the signal Count can begin to decrement only when the synchronized request signal is asserted. In other words, as indicated by arrow 1400, the assertion of the synchronized request signal Reqi* prompts c-cycle counter 1100 of FIG. 11 to decrement the signal Count.

At time t3, signal T0i may be asserted in response to completion of the c-cycle initialization period between times t2 and t3. As indicated by arrow 1402, the automatic initialization module may wait for deassertion of the user reset request signal before deasserting signal T0i. As an example, the deassertion of the user reset request signal Reqi is synchronized to the clock signal Clki, deasserting the synchronized request signal Reqi* at time t4. As indicated by arrow 1404, this event triggers the subsequent deassertion of signal T0i.

As previously described in connection with FIG. 11, the combination of arrows 1400, 1402, and 1404 creates a robust communication scheme between user logic 1004 and automatic initialization module 1002. In other words, user logic 1004 and automatic initialization module 1002 perform a handshake operation to ensure that information is properly delivered (e.g., that a given user reset request is serviced). For example, for as long as the request signal Reqi (or the corresponding signal Reqi*) is asserted, the automatic initialization module 1002 can keep the reset control signal T0i asserted.

Similar to the user reset period of FIG. 13, the signal Reseti can be asserted during the c-cycle initialization period and during the user reset period between times t1 and t5 to perform any adjustment sequence and user-specified reset sequence. If desired, the signal Reseti can be asserted at time t2 instead of time t1 to align with the synchronized request signal Reqi*. At time t5, user logic 1004 may perform normal operations again.
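The counter and handshake behavior of FIGS. 11-14 can be summarized in a short behavioral model. The following Python sketch is illustrative only: the two-flop synchronizer, the counter-reload behavior, and any names not taken from the figures are assumptions, not the patented implementation.

    # Minimal cycle-based sketch of c-cycle counter 1100 and the
    # Reqi/T0i handshake; synchronizer depth and reload are assumed.
    class AutoInitModule:
        """Behavioral stand-in for automatic initialization module 1002."""
        def __init__(self, c_cycles=4):        # ci = 4, as in the FIG. 13 example
            self.c_cycles = c_cycles
            self.count = c_cycles              # signal Count
            self.sync = [0, 0]                 # synchronizer flops (circuitry 1102)
            self.t0i = 0                       # reset control signal T0i

        def clock(self, reqi):
            """Advance one Clki cycle with the current value of Reqi."""
            self.sync = [reqi, self.sync[0]]   # synchronize Reqi to Clki
            reqi_star = self.sync[1]           # synchronized request Reqi*
            if reqi_star and self.count > 0:
                self.count -= 1                # adjustment sequence in progress
            elif reqi_star and self.count == 0:
                self.t0i = 1                   # c-cycle initialization complete
            elif not reqi_star and self.t0i:
                self.t0i = 0                   # deassert T0i only after Reqi*
                self.count = self.c_cycles     # deasserts; rearm the counter
            return self.t0i

    # The user reset state machine holds Reqi asserted until it observes
    # T0i, then deasserts it, completing the handshake of arrows 1400-1404.
    module = AutoInitModule()
    reqi = 1
    for cycle in range(12):
        if module.clock(reqi):
            reqi = 0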
These steps are merely illustrative. The described steps can be modified or omitted; some of the steps can be performed in parallel; additional steps can be added; and the order of certain steps can be reversed or changed.

Embodiments have heretofore been described with respect to integrated circuits. The methods and apparatus described herein can be incorporated into any suitable circuit. For example, they can be incorporated into various types of devices, such as programmable logic devices, application specific standard products (ASSPs), and application specific integrated circuits (ASICs). Examples of programmable logic devices include programmable array logic (PAL), programmable logic arrays (PLAs), field programmable logic arrays (FPLAs), electrically programmable logic devices (EPLDs), electrically erasable programmable logic devices (EEPLDs), logic cell arrays (LCAs), complex programmable logic devices (CPLDs), and field programmable gate arrays (FPGAs), to name a few.

A programmable logic device as described in one or more embodiments herein may be part of a data processing system that includes one or more of the following components: a processor; memory; IO circuitry; and peripheral devices. The data processing system can be used in a wide variety of applications, such as computer networking, data networking, instrumentation, video processing, digital signal processing, or any other suitable application where the advantage of using programmable or reprogrammable logic is desired. Programmable logic devices can be used to perform a variety of different logic functions. For example, a programmable logic device can be configured as a processor or controller that works in conjunction with a system processor. A programmable logic device can also be used as an arbiter for arbitrating access to shared resources in a data processing system. In yet another example, the programmable logic device can be configured as an interface between a processor and one of the other components in the system. In one embodiment, the programmable logic device may be one of a family of devices owned by ALTERA/INTEL Corporation.

The foregoing is merely illustrative of the principles of the invention, and various modifications can be made by those skilled in the art. The previously described embodiments may be implemented alone or in any combination.
An example method of generating a configuration for a network on chip (NoC) in a programmable device includes: receiving (502) traffic flow requirements for a plurality of traffic flows; assigning (508) routes through the NoC for each traffic flow based on the traffic flow requirements; determining (514) arbitration settings for the traffic flows along the assigned routes; generating (516) programming data for the NoC; and loading (518) the programming data to the programmable device to configure the NoC.
CLAIMS

What is claimed is:

1. A method of generating a configuration for a network on chip (NoC) in a programmable device, comprising:
receiving traffic flow requirements for a plurality of traffic flows;
assigning routes through the NoC for each traffic flow based on the traffic flow requirements;
determining arbitration settings for the traffic flows along the assigned routes;
generating programming data for the NoC; and
loading the programming data to the programmable device to configure the NoC.

2. The method of claim 1, wherein the step of receiving the traffic flow requirements comprises:
receiving source and destination information for each of the plurality of traffic flows.

3. The method of claim 2, wherein the step of receiving the traffic flow requirements further comprises:
receiving class information for each of the plurality of traffic flows, where the class information includes assignment of one of a plurality of traffic classes to each of the plurality of traffic flows.

4. The method of claim 3, wherein the step of assigning the routes comprises:
selecting a physical channel for each of the plurality of traffic flows based on assigned source and destination; and
selecting a virtual channel for each of the plurality of traffic flows based on assigned traffic class.

5. The method of claim 3, wherein the source and destination information includes a master circuit and a slave circuit for each of the plurality of traffic flows.

6. The method of claim 3, wherein each of the routes is between a master circuit and a slave circuit having one or more switches therebetween.

7. The method of claim 6, wherein each of the one or more switches includes an arbitrator, and wherein the step of determining the arbitration settings comprises assigning weights to one or more virtual channels input to the arbitrator in each of the one or more switches.

8. An integrated circuit, comprising:
a processing system;
a programmable logic region; and
a network on chip (NoC) coupling the processing system and the programmable logic region, the NoC including master circuits coupled to slave circuits through one or more physical channels, a first physical channel having a plurality of virtual channels.

9. The integrated circuit of claim 8, wherein each of the plurality of virtual channels is configured to convey a different class of traffic.

10. The integrated circuit of claim 8, wherein more than one of the plurality of virtual channels is configured to convey the same class of traffic.

11. The integrated circuit of any of claims 8-10, wherein each of the one or more physical channels includes routes through one or more switches of the NoC.

12. The integrated circuit of any of claims 8-11, wherein each of the switches includes an arbitrator having weights assigned to one or more virtual channels input to the arbitrator.

13. The integrated circuit of any of claims 8-12, wherein the NoC includes a peripheral interconnect configured to program the master circuits, the slave circuits, the physical channels, and the virtual channels.
END-TO-END QUALITY-OF-SERVICE IN A NETWORK-ON-CHIP

TECHNICAL FIELD

Examples of the present disclosure generally relate to electronic circuits and, in particular, to end-to-end quality-of-service in a network-on-chip.

BACKGROUND

Bus structures have been found to be unsuitable for some system-on-chip (SoC) integrated circuits. With increases in circuit integration, transactions can become blocked and increased capacitance can create signaling problems. In place of a bus structure, a network on chip (NoC) can be used to support data communications between components of the SoC.

A NoC generally includes a collection of switches that route packets from source circuits ("sources") on the chip to destination circuits ("destinations") on the chip. The layout of the switches in the chip supports packet transmission from the desired sources to the desired destinations. A packet may traverse multiple switches in transmission from a source to a destination. Each switch can be connected to one or more other switches in the network and routes an input packet to one of the connected switches or to the destination.

SUMMARY

Techniques for end-to-end quality-of-service in a network-on-chip are described. In an example, a method of generating a configuration for a network on chip (NoC) in a programmable device includes: receiving traffic flow requirements for a plurality of traffic flows; assigning routes through the NoC for each traffic flow based on the traffic flow requirements; determining arbitration settings for the traffic flows along the assigned routes; generating programming data for the NoC; and loading the programming data to the programmable device to configure the NoC.

In another example, a non-transitory computer readable medium has stored thereon instructions executable by a processor to perform a method of generating a configuration for a network on chip (NoC) in a programmable device, the method including: receiving traffic flow requirements for a plurality of traffic flows; assigning routes through the NoC for each traffic flow based on the traffic flow requirements; determining arbitration settings for the traffic flows along the assigned routes; generating programming data for the NoC; and loading the programming data to the programmable device to configure the NoC.

In another example, an integrated circuit includes: a processing system; a programmable logic region; and a network on chip (NoC) coupling the processing system and the programmable logic region, the NoC including master circuits coupled to slave circuits through one or more physical channels, a first physical channel having a plurality of virtual channels.

These and other aspects may be understood with reference to the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of its scope.

Fig. 1 is a block diagram depicting a system-on-chip (SoC) according to an example.

Fig. 2 is a block diagram depicting a network on chip (NoC) according to an example.

Fig. 3 is a block diagram depicting connections between endpoint circuits through a NoC according to an example.

Fig. 4 is a block diagram depicting a computer system according to an example.
Fig. 5 is a flow diagram depicting a method of generating configuration data for a NoC according to an example.

Fig. 6 is a block diagram depicting a communication system according to an example.

Fig. 7 is a block diagram depicting arbitration in a switch of a NoC according to an example.

Fig. 8 is a block diagram depicting assignment of weights to virtual channels according to an example.

Fig. 9 is a block diagram depicting a programmable integrated circuit (IC) in which techniques described herein can be employed.

Fig. 10 is a schematic diagram of a field programmable gate array (FPGA) architecture in which techniques described herein can be employed.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.

DETAILED DESCRIPTION

Various features are described hereinafter with reference to the figures. It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated or explicitly described.

Fig. 1 is a block diagram depicting a system-on-chip (SoC) 102 according to an example. The SoC 102 is an integrated circuit (IC) comprising a processing system 104, a network-on-chip (NoC) 106, and one or more programmable logic regions 108. The SoC 102 can be coupled to external circuits, such as a nonvolatile memory (NVM) 110 and/or random access memory (RAM) 112. The NVM 110 can store data that can be loaded to the SoC 102 for configuring the SoC 102, such as configuring the NoC 106 and the programmable logic region(s) 108. Examples of the processing system 104 and the programmable logic region(s) 108 are described below. In general, the processing system 104 is connected to the programmable logic region(s) 108 through the NoC 106.

The NoC 106 includes end-to-end Quality-of-Service (QoS) features for controlling data-flows therein. In examples, the NoC 106 first separates data-flows into designated traffic classes. Data-flows in the same traffic class can either share or have independent virtual or physical transmission paths. The QoS scheme applies two levels of priority across traffic classes. Within and across traffic classes, the NoC 106 applies a weighted arbitration scheme to shape the traffic flows and provide bandwidth and latency that meet the user requirements. Examples of the NoC 106 are discussed further below.

Fig. 2 is a block diagram depicting the NoC 106 according to an example. The NoC 106 includes NoC master units (NMUs) 202, NoC slave units (NSUs) 204, a network 214, a NoC peripheral interconnect (NPI) 210, and registers (Regs) 212. Each NMU 202 is an ingress circuit that connects a master endpoint to the NoC 106. Each NSU 204 is an egress circuit that connects the NoC 106 to a slave endpoint. The NMUs 202 are connected to the NSUs 204 through the network 214.
In an example, the network 214 includes NoC packet switches 206 and routing 208 between the NoC packet switches 206. Each NoC packet switch 206 performs switching of NoC packets. The NoC packet switches 206 are connected to each other and to the NMUs 202 and NSUs 204 through the routing 208 to implement a plurality of physical channels. The NoC packet switches 206 also support multiple virtual channels per physical channel. The NPI 210 includes circuitry to program the NMUs 202, NSUs 204, and NoC packet switches 206. For example, the NMUs 202, NSUs 204, and NoC packet switches 206 can include registers 212 that determine their functionality. The NPI 210 includes interconnect coupled to the registers 212 for programming thereof to set functionality. Configuration data for the NoC 106 can be stored in the NVM 110 and provided to the NPI 210 for programming the NoC 106.

Fig. 3 is a block diagram depicting connections between endpoint circuits through the NoC 106 according to an example. In the example, endpoint circuits 302 are connected to endpoint circuits 304 through the NoC 106. The endpoint circuits 302 are master circuits, which are coupled to NMUs 202 of the NoC 106. The endpoint circuits 304 are slave circuits coupled to the NSUs 204 of the NoC 106. Each endpoint circuit 302 and 304 can be a circuit in the processing system 104 or a circuit in a programmable logic region 108. Each endpoint circuit in the programmable logic region 108 can be a dedicated circuit (e.g., a hardened circuit) or a circuit configured in programmable logic.

The network 214 includes a plurality of physical channels 306. The physical channels 306 are implemented by programming the NoC 106. Each physical channel 306 includes one or more NoC packet switches 206 and associated routing 208. An NMU 202 connects with an NSU 204 through at least one physical channel 306. A physical channel 306 can also have one or more virtual channels 308.

Fig. 4 is a block diagram depicting a computer system 400 according to an example. The computer system 400 includes a computer 401, input/output (IO) devices 412, and a display 414. The computer 401 includes a hardware platform 418 and software executing on the hardware platform 418, including an operating system (OS) 420 and electronic design automation (EDA) software 410. The hardware platform 418 includes a central processing unit (CPU) 402, system memory 408, storage devices ("storage 421"), support circuits 404, and an IO interface 406.

The CPU 402 can be any type of general-purpose central processing unit (CPU), such as an x86-based processor, ARM®-based processor, or the like. The CPU 402 can include one or more cores and associated circuitry (e.g., cache memories, memory management units (MMUs), interrupt controllers, etc.). The CPU 402 is configured to execute program code that performs one or more operations described herein and which can be stored in the system memory 408 and/or the storage 421. The support circuits 404 include various devices that cooperate with the CPU 402 to manage data flow between the CPU 402, the system memory 408, the storage 421, the IO interface 406, or any other peripheral device. For example, the support circuits 404 can include a chipset (e.g., a north bridge, south bridge, platform host controller, etc.), voltage regulators, firmware (e.g., a BIOS), and the like.
In some examples, the CPU 402 can be a System-in-Package (SiP), System-on-Chip (SoC), or the like, which absorbs all or a substantial portion of the functionality of the support circuits 404 (e.g., north bridge, south bridge, etc.).

The system memory 408 is a device allowing information, such as executable instructions and data, to be stored and retrieved. The system memory 408 can include, for example, one or more random access memory (RAM) modules, such as double-data rate (DDR) dynamic RAM (DRAM). The storage 421 includes local storage devices (e.g., one or more hard disks, flash memory modules, solid state disks, and optical disks) and/or a storage interface that enables the computer 401 to communicate with one or more network data storage systems. The IO interface 406 can be coupled to the IO devices 412 and the display 414.

The OS 420 can be any commodity operating system known in the art, such as Linux®, Microsoft Windows®, Mac OS®, or the like. A user can interact with the EDA software 410 to generate configuration data for the SoC 102. In particular, the EDA software 410 is configured to generate configuration data for programming the NoC 106 to implement various physical and virtual channels for connecting endpoint circuits.

Fig. 5 is a flow diagram depicting a method 500 of generating configuration data for the NoC 106 according to an example. The method 500 can be performed by the EDA software 410. The method 500 begins at step 502, where the EDA software 410 receives traffic flow requirements from the user. In an example, at step 504, the EDA software 410 receives source and destination information for each traffic flow specified by the user (e.g., a source endpoint and a destination endpoint for each traffic flow). A traffic flow is a connection that conveys data ("traffic") between endpoints. At step 506, the EDA software 410 receives class information for each traffic flow specified by the user. Example traffic classes include low-latency traffic, isochronous traffic, best-effort (BE) traffic (e.g., bandwidth guaranteed traffic), and the like.

At step 508, the EDA software 410 assigns routes through the NoC 106 for each traffic flow based on the traffic flow requirements. In an example, at step 510, the EDA software 410 selects a physical channel for each traffic flow based on the source and destination thereof. The NoC 106 can have multiple physical routes available between each source and destination. At step 512, the EDA software 410 selects a virtual channel for each traffic flow based on the traffic class thereof. That is, a given physical channel can have a plurality of virtual channels and can convey a plurality of traffic flows that are separated by traffic class. Each virtual channel within a physical channel carries only one traffic class, but can carry several traffic flows within the same traffic class. For example, a given physical channel can convey a traffic flow in the low-latency traffic class and another traffic flow in the isochronous traffic class in a pair of virtual channels. Note that steps 510 and 512 can occur concurrently in the method 500.

At step 514, the EDA software 410 determines arbitration settings for the traffic flows specified by the user. In an example, the EDA software 410 sets virtual channels having higher priority traffic to have higher priority through the switches 206 and virtual channels having lower priority traffic to have lower priority through the switches 206.
For example, isochronous or low-latency traffic can have a higher priority than other traffic types. In an example, arbitration uses a deficit scheme. At each arbiter output (e.g., the output of a switch 206), there is a combined arbitration for all virtual channels from all input ports to one output port. Each virtual channel from each input port has an independent weight value that provides a specified number of arbitration tokens. The tokens are used to shape the arbitration and control the bandwidth assignment across traffic flows. This scheme ensures that all requestors (e.g., endpoints) that have tokens are serviced before the tokens are refreshed/reloaded. This ensures that the arbitration does not cause starvation, since all requests in one group must be serviced before a new group can start. Arbitration settings determined at step 514 can be programmed at boot time or can be adjusted dynamically during operation.

At step 516, the EDA software 410 generates programming data for the NoC 106. The programming data is set to configure the NoC 106 to implement the physical channels, virtual channels, and optionally the arbitration settings. In some examples, the arbitration settings can be programmed dynamically after configuration of the NoC 106. At step 518, the EDA software 410 loads the programming data to the SoC 102 (e.g., by storing the programming data in the NVM 110 or directly providing the programming data to the SoC 102).
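As a concrete illustration of steps 502-518, the following Python sketch shows how EDA software might drive the flow. The TrafficFlow record and the methods on the noc object (select_physical_channel, select_virtual_channel, and so on) are hypothetical placeholders for whatever route database a tool maintains; only the ordering of the steps comes from the method 500 described above.

    # Hypothetical sketch of the method 500 flow; the `noc` helper
    # methods are assumed placeholders, not an actual API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TrafficFlow:
        source: str         # master endpoint (step 504)
        destination: str    # slave endpoint (step 504)
        traffic_class: str  # e.g., "LL", "ISOC", or "BE" (step 506)

    def generate_noc_configuration(flows, noc):
        routes = {}
        weights = {}
        for flow in flows:  # steps 502-506: user-supplied requirements
            # Step 510: choose one of the physical routes between endpoints.
            phys = noc.select_physical_channel(flow.source, flow.destination)
            # Step 512: choose a virtual channel on it by traffic class.
            vc = noc.select_virtual_channel(phys, flow.traffic_class)
            routes[flow] = (phys, vc)
            # Step 514: weight the virtual channel at each switch arbitrator.
            weights[(phys, vc)] = noc.weight_for_class(flow.traffic_class)
        image = noc.emit_programming_data(routes, weights)  # step 516
        noc.load(image)                                     # step 518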
The method 500 provides fully programmable, end-to-end QoS using the NoC 106. Some SoCs have a relatively fixed interconnect with limited flexibility in arbitration schemes. Other SoCs have selectable routes and limited QoS prioritization, but do not have separate traffic classes and precise bandwidth allocation across traffic flows. The method 500 provides for a combination of virtual channels for independent flow control, configurable physical channel routing, deficit arbitration in groups, and assignment of traffic classes.

Fig. 6 is a block diagram depicting a communication system 600 according to an example. The communication system 600 includes master devices 602-0 through 602-4 (collectively, master devices 602) coupled to slave devices 604-0 and 604-1 (collectively, slave devices 604) through the NoC 106. The master devices 602 and slave devices 604 comprise endpoint circuits in the SoC 102 coupled to NMUs 202 and NSUs 204, respectively. The NoC 106 includes NoC packet switches (NPS) 206 (e.g., NPS 206-0,0 through 206-0,3 and NPS 206-1,0 through 206-1,3).

The master device 602-0 and the master device 602-1 are coupled to the NPS 206-0,0. The master device 602-0 is coupled to the NPS 206-0,0 through a low-latency (LL) virtual channel. The master device 602-1 is coupled to the NPS 206-0,0 through a best-effort (BE) virtual channel. The master device 602-2 is coupled to the NPS 206-0,1 through a BE virtual channel. The master device 602-3 is coupled to the NPS 206-0,3 through an isochronous (ISOC) virtual channel. The master device 602-4 is coupled to the NPS 206-0,3 through an ISOC virtual channel. The NPS 206-0,1 is coupled to the NPS 206-0,2. The NPS 206-0,2 is coupled to the NPS 206-0,3. The NPS 206-0,0 is coupled to the NPS 206-1,0. The NPS 206-0,1 is coupled to the NPS 206-1,1. The NPS 206-1,2 and the NPS 206-1,3 are unconnected and not used in the current configuration of the communication system 600. The NPS 206-1,0 is coupled to the slave device 604-0. The NPS 206-1,1 is coupled to the slave device 604-1. The NPS 206-1,0 is coupled to the NPS 206-1,1.

In operation, the master device 602-0 sends low-latency traffic to the slave device 604-0. Masters 602-1 and 602-2 both send best-effort traffic to the slave device 604-0. Masters 602-3 and 602-4 send isochronous traffic to the slave device 604-1. Each traffic flow enters each switch on a separate physical channel. There are two virtual channels (designated by a pair of lines) between NPS 206-0,0 and NPS 206-1,0, between NPS 206-0,1 and NPS 206-1,1, and between NPS 206-1,0 and slave device 604-0. Other paths use only a single virtual channel on the physical channel (e.g., between NPS 206-0,1 and NPS 206-0,2 and between NPS 206-1,1 and the slave device 604-1). Each NPS 206 has output port arbitration that controls the mixing of traffic from input ports to the output port, as described further below.

Fig. 7 is a block diagram depicting arbitration in a switch 206 of the NoC 106 according to an example. Each switch 206 includes an arbitrator 702. In the example, the arbitrator 702 includes three input ports designated input port 0, input port 1, and input port 2, but a switch 206 and arbitrator 702 can include any number of input ports. The arbitrator 702 includes an output port designated "out."

As shown in Fig. 7, the input port 2 has no input traffic streams in the example. The input port 0 has two virtual channels receiving two traffic streams (e.g., one low-latency traffic stream and one isochronous traffic stream). The input port 1 has a single virtual channel carrying one traffic stream (e.g., best-effort traffic). Each input port of the arbitrator 702 has an assigned weight. The weight controls the relative share of arbitration bandwidth assigned to each traffic flow. In the example, port 0 has arbitration weights of 4 and 8 for the respective virtual channels, and port 1 has an arbitration weight of 4 on the single virtual channel. This means that, of the available bandwidth at the output port, the first traffic stream at port 0 gets 25% of the bandwidth, the second traffic stream at port 0 gets 50% of the bandwidth, and the traffic stream at port 1 gets 25% of the bandwidth. For example, the low-latency traffic at port 0 can be assigned more bandwidth (due to higher priority) than the best-effort traffic (lower priority). This means that if all requestors are sending, the arbitrator 702 will service the low-latency traffic as long as it has arbitration tokens. The best-effort traffic will get service if it has a token and there are no other higher-priority requestors that also have a token. If there are requestors present and no requestor has an arbitration token left, the arbitration tokens are reloaded according to the specified weights. The arbitrator 702 also reloads the arbitration tokens if all requestors run out of tokens.

The description above is for one arbitration point. The programming of each arbitration point on a given physical path ensures that there is enough bandwidth end-to-end. The use of a high-priority assignment for some virtual channels ensures that those transactions receive lower-latency/lower-jitter service. The use of arbitration weights and deficit arbitration ensures that each requestor receives some amount of bandwidth according to its arbitration weights within a period of time corresponding to the sum of all the arbitration weights. The time to service such a group may be less if some requestors are not sending traffic.
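The deficit arbitration just described can be modeled in a few lines. In the sketch below (Python), each requestor is a (port, virtual channel) pair whose token count is reloaded to its programmed weight once no pending requestor has tokens remaining; the strict priority ordering of the pending list is an assumption made for illustration. Running it with the Fig. 7 weights of 4, 8, and 4 reproduces the stated 25%/50%/25% bandwidth split.

    # Sketch of deficit (token-based) arbitration; the service order
    # within a token group is assumed to follow the pending-list priority.
    class DeficitArbiter:
        def __init__(self, weights):
            self.weights = dict(weights)  # requestor -> programmed weight
            self.tokens = dict(weights)   # tokens remaining in this group

        def grant(self, pending):
            """Service one requestor from `pending` (highest priority first)."""
            if not any(self.tokens[r] for r in pending):
                self.tokens = dict(self.weights)  # reload: start a new group
            for r in pending:
                if self.tokens[r] > 0:            # a requestor is serviced
                    self.tokens[r] -= 1           # only while it holds a token
                    return r
            return None

    # Fig. 7 example: port-0 virtual channels weighted 4 and 8, port-1
    # virtual channel weighted 4; with all requestors always sending, the
    # measured shares converge to 25%, 50%, and 25% of the output port.
    arb = DeficitArbiter({"p0.vc0": 4, "p0.vc1": 8, "p1.vc0": 4})
    grants = [arb.grant(["p0.vc1", "p0.vc0", "p1.vc0"]) for _ in range(160)]
    shares = {r: grants.count(r) / len(grants) for r in set(grants)}
    # shares == {"p0.vc1": 0.5, "p0.vc0": 0.25, "p1.vc0": 0.25}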
Fig. 8 is a block diagram depicting assignment of weights to virtual channels according to an example. The example includes two arbitrators 702-1 and 702-2. The arbitrator 702-1 arbitrates among physical channels 802, 804, and 806. The arbitrator 702-2 arbitrates among physical channels 806, 808, and 810. Each of the physical channels 802, 804, 806, and 808 includes two virtual channels, designated vc0 and vc1. In the example, there are six different sources (e.g., master devices) designated src0 through src5. The source src0 is on vc0 of the physical channel 808. The source src1 is on vc1 of the physical channel 808. The source src2 is on vc0 of the physical channel 802. The source src3 is on vc1 of the physical channel 802. The source src4 is on vc0 of the physical channel 804. The source src5 is on vc1 of the physical channel 804. The arbitrator 702-2 is programmed to provide a weight of 10 on vc0 of the physical channel 808 and a weight of 20 on vc1 of the physical channel 808. The arbitrator 702-2 is programmed to provide a weight of 30 on vc0 of the physical channel 806 and a weight of 40 on vc1 of the physical channel 806. The arbitrator 702-1 is programmed to provide a weight of 10 on vc0 of the physical channel 802 and a weight of 30 on vc1 of the physical channel 802. The arbitrator 702-1 is programmed to provide a weight of 20 on vc0 of the physical channel 804 and a weight of 10 on vc1 of the physical channel 804. This weighting scheme results in src0 having a weight of 10, src1 having a weight of 20, src2 having a weight of 10, src3 having a weight of 30, src4 having a weight of 20, and src5 having a weight of 10 at the output of the arbitrator 702-2. Each source gets bandwidth in proportion to its weight. Those skilled in the art will appreciate that various other weighting schemes can be employed across any number of arbitrators for any number of sources in a similar manner.
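The per-source weights stated for Fig. 8 can be verified with a short calculation. The sketch below (Python) assumes, as the stated results imply, that channel 806 is the output of arbitrator 702-1 feeding arbitrator 702-2 and that vc0 traffic of channels 802 and 804 merges onto vc0 of channel 806 while vc1 traffic merges onto vc1; a source's effective weight is then its share within its virtual channel at 702-1, scaled by that virtual channel's weight at 702-2.

    # Worked check of the Fig. 8 weighting (topology partly assumed).
    w_702_1 = {"src2": 10, "src3": 30,   # channel 802, vc0 / vc1
               "src4": 20, "src5": 10}   # channel 804, vc0 / vc1
    w_702_2 = {("806", "vc0"): 30, ("806", "vc1"): 40,
               ("808", "vc0"): 10, ("808", "vc1"): 20}
    members = {("806", "vc0"): ["src2", "src4"],   # merged vc0 traffic
               ("806", "vc1"): ["src3", "src5"]}   # merged vc1 traffic

    # src0 and src1 enter arbitrator 702-2 directly on channel 808.
    effective = {"src0": w_702_2[("808", "vc0")],
                 "src1": w_702_2[("808", "vc1")]}
    for vc, srcs in members.items():
        total = sum(w_702_1[s] for s in srcs)
        for s in srcs:
            effective[s] = w_702_1[s] * w_702_2[vc] // total

    # effective == {"src0": 10, "src1": 20, "src2": 10,
    #               "src3": 30, "src4": 20, "src5": 10}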
Fig. 9 is a block diagram depicting a programmable IC 1 according to an example that can be used as an implementation of the SoC 102 shown in Fig. 1. The programmable IC 1 includes programmable logic 3, configuration logic 25, and configuration memory 26. The programmable IC 1 can be coupled to external circuits, such as nonvolatile memory 27, DRAM 28, and other circuits 29. The programmable logic 3 includes logic cells 30, support circuits 31, and programmable interconnect 32. The logic cells 30 include circuits that can be configured to implement general logic functions of a plurality of inputs. The support circuits 31 include dedicated circuits, such as transceivers, input/output blocks, digital signal processors, memories, and the like. The logic cells 30 and the support circuits 31 can be interconnected using the programmable interconnect 32. Information for programming the logic cells 30, for setting parameters of the support circuits 31, and for programming the programmable interconnect 32 is stored in the configuration memory 26 by the configuration logic 25. The configuration logic 25 can obtain the configuration data from the nonvolatile memory 27 or any other source (e.g., the DRAM 28 or the other circuits 29). In some examples, the programmable IC 1 includes a processing system 2. The processing system 2 can include microprocessor(s), memory, support circuits, IO circuits, and the like.

Fig. 10 illustrates a field programmable gate array (FPGA) implementation of the programmable IC 1 that includes a large number of different programmable tiles, including transceivers 37, configurable logic blocks ("CLBs") 33, random access memory blocks ("BRAMs") 34, input/output blocks ("IOBs") 36, configuration and clocking logic ("CONFIG/CLOCKS") 42, digital signal processing blocks ("DSPs") 35, specialized input/output blocks ("I/O") 41 (e.g., configuration ports and clock ports), and other programmable logic 39 such as digital clock managers, analog-to-digital converters, system monitoring logic, and so forth. The FPGA can also include PCIe interfaces 40, analog-to-digital converters (ADC) 38, and the like.

In some FPGAs, each programmable tile can include at least one programmable interconnect element ("INT") 43 having connections to input and output terminals 48 of a programmable logic element within the same tile, as shown by the examples included at the top of Fig. 10. Each programmable interconnect element 43 can also include connections to interconnect segments 49 of adjacent programmable interconnect element(s) in the same tile or other tile(s). Each programmable interconnect element 43 can also include connections to interconnect segments 50 of general routing resources between logic blocks (not shown). The general routing resources can include routing channels between logic blocks (not shown) comprising tracks of interconnect segments (e.g., interconnect segments 50) and switch blocks (not shown) for connecting interconnect segments. The interconnect segments of the general routing resources (e.g., interconnect segments 50) can span one or more logic blocks. The programmable interconnect elements 43, taken together with the general routing resources, implement a programmable interconnect structure ("programmable interconnect") for the illustrated FPGA.

In an example implementation, a CLB 33 can include a configurable logic element ("CLE") 44 that can be programmed to implement user logic plus a single programmable interconnect element ("INT") 43. A BRAM 34 can include a BRAM logic element ("BRL") 45 in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured example, a BRAM tile has the same height as five CLBs, but other numbers (e.g., four) can also be used. A DSP tile 35 can include a DSP logic element ("DSPL") 46 in addition to an appropriate number of programmable interconnect elements. An IOB 36 can include, for example, two instances of an input/output logic element ("IOL") 47 in addition to one instance of the programmable interconnect element 43. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the I/O logic element 47 typically are not confined to the area of the input/output logic element 47.

In the pictured example, a horizontal area near the center of the die (shown in Fig. 10) is used for configuration, clock, and other control logic. Vertical columns 51 extending from this horizontal area or column are used to distribute the clocks and configuration signals across the breadth of the FPGA.

Some FPGAs utilizing the architecture illustrated in Fig. 10 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA. The additional logic blocks can be programmable blocks and/or dedicated logic.

Note that Fig. 10 is intended to illustrate only an exemplary FPGA architecture.
For example, the numbers of logic blocks in a row, the relative width of the rows, the number and order of rows, the types of logic blocks included in the rows, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of Fig. 10 are purely exemplary. For example, in an actual FPGA more than one adjacent row of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic, but the number of adjacent CLB rows varies with the overall size of the FPGA.

In one example, a method of generating a configuration for a network on chip (NoC) in a programmable device may be provided. Such a method may include: receiving traffic flow requirements for a plurality of traffic flows; assigning routes through the NoC for each traffic flow based on the traffic flow requirements; determining arbitration settings for the traffic flows along the assigned routes; generating programming data for the NoC; and loading the programming data to the programmable device to configure the NoC.

In such a method, the step of receiving the traffic flow requirements may include: receiving source and destination information for each of the plurality of traffic flows.

In such a method, the step of receiving the traffic flow requirements may further include: receiving class information for each of the plurality of traffic flows, where the class information includes assignment of one of a plurality of traffic classes to each of the plurality of traffic flows.

In such a method, the step of assigning the routes may include: selecting a physical channel for each of the plurality of traffic flows based on assigned source and destination; and selecting a virtual channel for each of the plurality of traffic flows based on assigned traffic class.

In such a method, the source and destination information may include a master circuit and a slave circuit for each of the plurality of traffic flows.

In such a method, each of the routes may be between a master circuit and a slave circuit having one or more switches therebetween.

In such a method, each of the one or more switches may include an arbitrator, and the step of determining the arbitration settings may comprise assigning weights to one or more virtual channels input to the arbitrator in each of the one or more switches.

In another example, a non-transitory computer readable medium having stored thereon instructions executable by a processor to perform a method of generating a configuration for a network on chip (NoC) in a programmable device may be provided.
The method performed by such instructions may include: receiving traffic flow requirements for a plurality of traffic flows; assigning routes through the NoC for each traffic flow based on the traffic flow requirements; determining arbitration settings for the traffic flows along the assigned routes; generating programming data for the NoC; and loading the programming data to the programmable device to configure the NoC.

In such a non-transitory computer readable medium, the step of receiving the traffic flow requirements may include: receiving source and destination information for each of the plurality of traffic flows.

In such a non-transitory computer readable medium, the step of receiving the traffic flow requirements may further include: receiving class information for each of the plurality of traffic flows, where the class information includes assignment of one of a plurality of traffic classes to each of the plurality of traffic flows.

In such a non-transitory computer readable medium, the step of assigning the routes may include: selecting a physical channel for each of the plurality of traffic flows based on assigned source and destination; and selecting a virtual channel for each of the plurality of traffic flows based on assigned traffic class.

In such a non-transitory computer readable medium, the source and destination information may include a master circuit and a slave circuit for each of the plurality of traffic flows.

In such a non-transitory computer readable medium, each of the routes may be between a master circuit and a slave circuit having one or more switches therebetween.

In such a non-transitory computer readable medium, each of the one or more switches may include an arbitrator, and the step of determining the arbitration settings may include assigning weights to one or more virtual channels input to the arbitrator in each of the one or more switches.

In another example, an integrated circuit may be provided. Such an integrated circuit may include: a processing system; a programmable logic region; and a network on chip (NoC) coupling the processing system and the programmable logic region, the NoC including master circuits coupled to slave circuits through one or more physical channels, a first physical channel having a plurality of virtual channels.

In such an integrated circuit, each of the plurality of virtual channels may be configured to convey a different class of traffic. In such an integrated circuit, more than one of the plurality of virtual channels may be configured to convey the same class of traffic.

In such an integrated circuit, each of the one or more physical channels may include routes through one or more switches of the NoC.

In such an integrated circuit, each of the switches may include an arbitrator having weights assigned to one or more virtual channels input to the arbitrator.

In such an integrated circuit, the NoC may include a peripheral interconnect configured to program the master circuits, the slave circuits, the physical channels, and the virtual channels.

While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
Some embodiments include an integrated assembly having a gate material, an insulative material adjacent the gate material, and a semiconductor oxide adjacent the insulative material. The semiconductor oxide has a channel region proximate the gate material and spaced from the gate material by the insulative material. An electric field along the gate material induces carrier flow within the channel region, with the carrier flow being along a first direction. The semiconductor oxide includes a grain boundary having a portion which extends along a second direction that crosses the first direction of the carrier flow. In some embodiments, the semiconductor oxide has a grain boundary which extends along the first direction and which is offset from the insulative material by an intervening portion of the semiconductor oxide. The carrier flow is within the intervening portion and substantially parallel to the grain boundary. Some embodiments include methods of forming integrated assemblies.
1.An integrated assembly including:Grid materialAn insulating material, which is adjacent to the gate material; andA semiconductor oxide which is adjacent to the insulating material; the semiconductor oxide has a channel region close to the gate material and at least separated from the gate material by the insulating material; wherein the flow of carriers responds The electric field along the gate material is induced along a first direction; the semiconductor oxide is polycrystalline; individual crystal grains of the polycrystalline semiconductor oxide are peripherally bounded by grain boundaries; the At least one of the grain boundaries has a portion extending along a second direction, wherein the second direction crosses the first direction in which the carriers flow.2.The integrated assembly of claim 1, wherein the individual crystal grains are dominated by cubic crystallinity.3.The integrated assembly according to claim 1, wherein the semiconductor oxide mainly has cubic crystallinity.4.The integrated assembly according to claim 1, wherein the gate material, insulating material, and semiconductor oxide are supported by a semiconductor substrate having a horizontally extending upper surface, and wherein the carrier flow is relative to the horizontally extending The upper surface extends substantially parallel.5.The integrated assembly according to claim 1, wherein the gate material, insulating material, and semiconductor oxide are supported by a semiconductor substrate having a horizontally extending upper surface, and wherein the carrier flow is relative to the horizontally extending The upper surface extends substantially orthogonally.6.The integrated assembly of claim 1, wherein the semiconductor oxide includes one or more of indium, zinc, tin, and gallium.7.The integrated assembly of claim 1, wherein the semiconductor oxide comprises indium, zinc, and gallium.8.The integrated assembly according to claim 7, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is in the range of about 14 to about 24;The metal atomic percentage of the gallium is in the range of about 37 to about 47; andThe metal atomic percentage of the zinc is in the range of about 35 to about 45.9.The integrated assembly according to claim 7, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is in the range of about 16 to about 22;The metal atomic percentage of the gallium is in the range of about 39 to about 45; andThe metal atomic percentage of the zinc is in the range of about 37 to about 43.10.The integrated assembly according to claim 7, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is about 19;The metal atomic percentage of the gallium is about 42; andThe metal atomic percentage of the zinc is about 40.11.The integrated assembly according to claim 1, wherein the insulating material is a high-k material.12.The integrated assembly according to claim 1, wherein the insulating material is a metal oxide.13.The integrated assembly according to claim 1, wherein the insulating material includes one or more of aluminum oxide, hafnium dioxide, zirconium oxide, and titanium oxide.14.An integrated assembly including:Grid materialAn insulating 
material, which is adjacent to the gate material; andA semiconductor oxide which is adjacent to the insulating material; the semiconductor oxide has a channel region close to the gate material and at least separated from the gate material by the insulating material; wherein the flow of carriers responds The electric field along the gate material is induced along a first direction; the semiconductor oxide has at least one grain boundary, and the at least one grain boundary extends along the first direction and passes through the semiconductor oxide The intermediate portion of the object is offset from the insulating material; the carrier flows in the intermediate region and is substantially parallel to the at least one grain boundary.15.The integrated assembly of claim 14, wherein the individual crystal grains of the semiconductor oxide are dominated by cubic crystallinity.16.The integrated assembly according to claim 14, wherein the semiconductor oxide mainly has cubic crystallinity.17.The integrated assembly of claim 14, wherein the gate material, insulating material, and semiconductor oxide are supported by a semiconductor substrate having a horizontally extending upper surface, and wherein the carrier flow is relative to the horizontally extending The upper surface extends substantially parallel.18.The integrated assembly of claim 14, wherein the gate material, insulating material, and semiconductor oxide are supported by a semiconductor substrate having a horizontally extending upper surface, and wherein the carrier flow is relative to the horizontally extending The upper surface extends substantially orthogonally.19.The integrated assembly of claim 14, wherein the semiconductor oxide includes one or more of indium, zinc, tin, and gallium.20.The integrated assembly of claim 14, wherein the semiconductor oxide comprises indium, zinc, and gallium.21.The integrated assembly of claim 20, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is in the range of about 16 to about 26;The metal atomic percentage of the gallium is in the range of about 45 to about 55; andThe metal atomic percentage of the zinc is in the range of about 24 to about 34.22.The integrated assembly of claim 20, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is in the range of about 18 to about 24;The metal atomic percentage of the gallium is in the range of about 47 to about 53; andThe metal atomic percentage of the zinc is in the range of about 26 to about 32.23.The integrated assembly of claim 20, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is about 21;The metal atomic percentage of the gallium is about 50; andThe metal atomic percentage of the zinc is about 29.24.An integrated assembly including:A semiconductor oxide extending between the first conductive contact and the second conductive contact along the vertical direction; the semiconductor oxide has a first sidewall surface and a second sidewall surface opposite to each other along a cross section ;A first region of the insulating material adjacent to the first sidewall surface, and a second region of the insulating material adjacent to the second sidewall surface;A first region of 
the gate material adjacent to the first region of the insulating material and spaced at least from the first sidewall surface by the first region of the insulating material, and the gate material A second area along the second area of the insulating material and at least spaced from the second sidewall surface by the second area of the insulating material; andWherein the flow of carriers in the semiconductor oxide is induced in response to the electric field along the first region and the second region of the gate material, wherein the flow of carriers along the corresponding The first direction of the vertical direction of the semiconductor oxide; wherein the semiconductor oxide is polycrystalline; wherein individual crystal grains of the polycrystalline semiconductor oxide are delimited at the periphery by grain boundaries; and wherein At least one of the grain boundaries has a portion extending along a second direction, wherein the second direction crosses the first direction in which the carriers flow.25.The integrated assembly of claim 24, wherein the individual crystal grains are dominated by cubic crystallinity.26.The integrated assembly according to claim 24, wherein the semiconductor oxide mainly has cubic crystallinity.27.The integrated assembly of claim 24, wherein the semiconductor oxide comprises indium, zinc, and gallium.28.The integrated assembly of claim 27, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is in the range of about 16 to about 22;The metal atomic percentage of the gallium is in the range of about 39 to about 45; andThe metal atomic percentage of the zinc is in the range of about 37 to about 43.29.The integrated assembly of claim 27, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is about 19;The metal atomic percentage of the gallium is about 42; andThe metal atomic percentage of the zinc is about 40.30.The integrated assembly of claim 24, which includes a digit line coupled with one of the first conductive contact and the second conductive contact, and includes a digit line coupled with the first conductive contact and the The other of the second conductive contacts is coupled to the charge storage device.31.The integrated assembly of claim 30, wherein:The semiconductor oxide, the first region and the second region of the insulating material; and the first region and the second region of the gate material together form an access device;The access device and the charge storage device together form a memory device; andThe memory device is one of many substantially identical memory devices in the memory array.32.An integrated assembly including:A semiconductor oxide extending between the first conductive contact and the second conductive contact along the vertical direction; the semiconductor oxide has a first sidewall surface and a second sidewall surface opposite to each other along a cross section ;A first region of the insulating material adjacent to the first sidewall surface, and a second region of the insulating material adjacent to the second sidewall surface;A first region of the gate material adjacent to the first region of the insulating material and spaced at least from the first sidewall surface by the first region of the insulating material, and the gate material A second region adjacent to the 
second region of the insulating material and spaced apart from the second sidewall surface by at least the second region of the insulating material;A grain boundary that extends within the semiconductor oxide and along the vertical direction; the grain boundary traverses the entire length of the semiconductor oxide from the first contact to the second contact; The grain boundary is offset from the first region of the insulating material through the first intermediate portion of the semiconductor oxide, and from the first region of the insulating material through the second intermediate portion of the semiconductor oxide Second zone offset; andWherein the carrier flow in the semiconductor oxide is induced in response to the electric field along the first region and the second region of the gate material, and wherein the carrier flow is along the The vertical direction of the semiconductor oxide; and wherein the carriers in the semiconductor oxide flow in the intermediate region and substantially parallel to the grain boundary.33.The integrated assembly of claim 32, wherein the individual crystal grains of the semiconductor oxide are dominated by cubic crystallinity.34.The integrated assembly according to claim 32, wherein said semiconductor oxide mainly has cubic crystallinity.35.The integrated assembly of claim 32, which includes a digit line coupled with one of the first conductive contact and the second conductive contact, and includes a digit line coupled with the first conductive contact and the The other of the second conductive contacts is coupled to the charge storage device.36.The integrated assembly of claim 35, wherein:The semiconductor oxide, the first region and the second region of the insulating material; and the first region and the second region of the gate material together form an access device;The access device and the charge storage device together form a memory device; andThe memory device is one of many substantially identical memory devices in the memory array.37.The integrated assembly of claim 32, wherein the semiconductor oxide comprises indium, zinc, and gallium.38.The integrated assembly of claim 37, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is in the range of about 18 to about 24;The metal atomic percentage of the gallium is in the range of about 47 to about 53; andThe metal atomic percentage of the zinc is in the range of about 26 to about 32.39.The integrated assembly of claim 37, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a certain atomic percentage of metal, and wherein:The metal atomic percentage of the indium is about 21;The metal atomic percentage of the gallium is about 50; andThe metal atomic percentage of the zinc is about 29.40.A method of forming an integrated assembly, which includes:The semiconductor oxide is deposited on the conductive material; the semiconductor oxide includes indium, gallium, and zinc; the deposition is physical vapor deposition and is performed in the chamber using the environment in the chamber, the environment having a temperature of about 20 A temperature in the range of ℃ to about 500 ℃ and a pressure in the range of about 1 mTorr to about 9 mTorr; the deposited semiconductor oxide is polycrystalline;Patterning the deposited semiconductor oxide into a vertically extending structure; the vertically extending structure has a first sidewall surface and 
a second sidewall surface opposite one another along a cross-section;
forming an insulating material along the opposing first and second sidewall surfaces; a first region of the insulating material being along the first sidewall surface, and a second region of the insulating material being along the second sidewall surface;
forming a gate material along the insulating material; a first region of the gate material being along the first region of the insulating material, and a second region of the gate material being along the second region of the insulating material; and
wherein the semiconductor oxide, the first and second regions of the insulating material, and the first and second regions of the gate material together form a transistor; wherein the transistor is configured such that an electric field along the first and second regions of the gate material induces carrier flow within the semiconductor oxide, wherein the carrier flow is along a first direction corresponding to the vertical direction of the semiconductor oxide; wherein individual crystal grains of the polycrystalline semiconductor oxide are peripherally bounded by grain boundaries; and wherein at least one of the grain boundaries has a portion extending along a second direction, with the second direction crossing the first direction of the carrier flow.

41. The method of claim 40, wherein the semiconductor oxide is deposited directly onto the conductive material.

42. The method of claim 41, wherein the conductive material includes one or both of ruthenium and tungsten.

43. The method of claim 40, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a metal atomic percentage, and wherein:
the metal atomic percentage of the indium is within a range of about 14 to about 24;
the metal atomic percentage of the gallium is within a range of about 37 to about 47; and
the metal atomic percentage of the zinc is within a range of about 35 to about 45.

44. The method of claim 40, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a metal atomic percentage, and wherein:
the metal atomic percentage of the indium is within a range of about 16 to about 22;
the metal atomic percentage of the gallium is within a range of about 39 to about 45; and
the metal atomic percentage of the zinc is within a range of about 37 to about 43.

45. The method of claim 40, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a metal atomic percentage, and wherein:
the metal atomic percentage of the indium is about 19;
the metal atomic percentage of the gallium is about 42; and
the metal atomic percentage of the zinc is about 40.

46. A method of forming an integrated assembly, comprising:
depositing a semiconductor oxide over a support material; the semiconductor oxide including indium, gallium, and zinc;
patterning the deposited semiconductor oxide into a vertically extending structure; the vertically extending structure having first and second sidewall surfaces opposite one another along a cross-section;
forming an insulating material along the opposing first and second sidewall surfaces; a first region of the insulating material being along the first sidewall surface, and a second region of the insulating material being along the second sidewall surface;
forming a gate material along the insulating material; a first region of the gate material being along the first
region of the insulating material, and a second region of the gate material being along the second region of the insulating material; and
after forming the insulating material, annealing the semiconductor oxide under conditions which maintain a temperature of the semiconductor oxide within a range of about 400°C to about 600°C for a duration of at least about 30 minutes and less than or equal to about 1 day; after the annealing, a grain boundary being within the semiconductor oxide and extending along the vertical direction; the grain boundary traversing an entire length of the semiconductor oxide from an upper surface of the semiconductor oxide to a lower surface of the semiconductor oxide; the grain boundary being offset from the first region of the insulating material through a first intermediate portion of the semiconductor oxide, and being offset from the second region of the insulating material through a second intermediate portion of the semiconductor oxide; and
wherein the semiconductor oxide, the first and second regions of the insulating material, and the first and second regions of the gate material together form a transistor; wherein the transistor is configured such that an electric field along the first and second regions of the gate material induces carrier flow within the semiconductor oxide, wherein the carrier flow is along a first direction corresponding to the vertical direction of the semiconductor oxide; and wherein carriers within the semiconductor oxide flow within the first and second intermediate portions and substantially parallel to the grain boundary.

47. The method of claim 46, wherein the depositing includes one or more of physical vapor deposition, chemical vapor deposition, and atomic layer deposition.

48. The method of claim 46, wherein the annealing is conducted within a chamber after the gate material is formed, and while a top portion of the semiconductor oxide is exposed to an environment within the chamber.

49. The method of claim 48, wherein the environment consists of gas that is inert relative to reaction with the exposed top portion of the semiconductor oxide.

50. The method of claim 48, wherein the environment includes a reducing agent.

51. The method of claim 48, wherein the environment includes an oxidizing agent.

52. The method of claim 46, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a metal atomic percentage, and wherein:
the metal atomic percentage of the indium is within a range of about 16 to about 26;
the metal atomic percentage of the gallium is within a range of about 45 to about 55; and
the metal atomic percentage of the zinc is within a range of about 24 to about 34.

53. The method of claim 46, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a metal atomic percentage, and wherein:
the metal atomic percentage of the indium is within a range of about 18 to about 24;
the metal atomic percentage of the gallium is within a range of about 47 to about 53; and
the metal atomic percentage of the zinc is within a range of about 26 to about 32.

54. The method of claim 46, wherein the indium, zinc, and gallium are each present in the semiconductor oxide to a metal atomic percentage, and wherein:
the metal atomic percentage of the indium is about 21;
the metal atomic percentage of the gallium is about 50; and
the metal atomic percentage of the zinc is about 29.
Integrated assemblies having semiconductor oxide channel material, and methods of forming integrated assemblies

Related Patent Data

This patent claims priority to, and the benefit of, U.S. Provisional Application No. 62/770,081, filed on November 20, 2018, the disclosure of which is incorporated herein by reference.

Technical Field

Integrated assemblies having semiconductor oxide channel material, and methods of forming integrated assemblies.

Background

Semiconductor oxides (e.g., oxides including one or more of indium, gallium, zinc, and tin) can be incorporated into integrated assemblies. For example, a semiconductor oxide can be used to form the channel region of a transistor. The transistors can be used as access devices in memory arrays, or in other applications. It is desirable to develop improved semiconductor oxides suitable for use in integrated assemblies, and to develop integrated assemblies which utilize the improved semiconductor oxides.

Description of the Drawings

Figures 1 and 2 are diagrammatic cross-sectional side views of regions of example integrated assemblies that include example transistors.
Figure 3 is a diagrammatic schematic illustration of regions of an example memory array.
Figures 4 to 6 are diagrammatic cross-sectional side views of regions of example integrated assemblies including example transistors.
Figures 7 and 8 are diagrammatic cross-sectional top-down views along the line A-A, showing example embodiment configurations of the example integrated assembly of FIG. 6. The cross-sectional side view of FIG. 6 is along the line B-B of FIGS. 7 and 8.
Figure 8A is a diagrammatic cross-sectional top-down view of a region of an example integrated assembly alternative to the assembly of FIG. 8.
Figures 9 to 14 are diagrammatic cross-sectional side views of regions of an example integrated assembly shown at example process stages of an example method for manufacturing the integrated assembly of FIG. 1.
Figure 10A is a diagrammatic cross-sectional side view of a region of an example integrated assembly alternative to the assembly of FIG. 10.
Figures 15 to 21 are diagrammatic cross-sectional side views of regions of an example integrated assembly shown at example process stages of an example method for manufacturing the integrated assembly of FIG. 2.

Detailed Description

Some embodiments include a semiconductor oxide utilized in the channel region of a transistor. The transistor may include a conductive gate material, and may include an insulative gate dielectric between the gate material and the semiconductor oxide. Operation of the transistor induces carrier flow (e.g., electron flow and/or hole flow) along the channel region. The carrier flow is along a first direction. The semiconductor oxide may be configured to have a grain boundary extending along the first direction and spaced from the gate dielectric by intermediate regions; and the carrier flow may be entirely within the intermediate regions so that the carrier flow does not cross the grain boundary (i.e., is substantially parallel to the grain boundary). Alternatively, the semiconductor oxide may be configured to have grain boundaries which cross the carrier flow. Example embodiments are described below with reference to FIGS. 1 to 21.

With reference to FIG. 1, this figure illustrates a region of an integrated assembly 10 that includes an example memory cell 12 having an example access device (transistor) 14.
The transistor 14 is above the digit line 16, which in turn is supported by the substrate 18.

The substrate 18 may comprise semiconductor material; and may, for example, comprise, consist essentially of, or consist of monocrystalline silicon. The substrate 18 may be referred to as a semiconductor substrate. The term "semiconductor substrate" means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductor substrate described above. In some applications, the substrate 18 may correspond to a semiconductor substrate containing one or more materials associated with integrated circuit fabrication. Such materials may include, for example, one or more of refractory metal materials, barrier materials, diffusion materials, insulator materials, etc.

The substrate 18 includes a horizontally extending upper surface 17. In some embodiments, the upper surface 17 may be regarded as extending along a first direction, with this first direction being shown as along an axis 5.

A gap is provided between the substrate 18 and the digit line 16 to indicate that there may be additional materials, electrical components, etc., provided between the substrate 18 and the digit line 16.

The digit line 16 comprises a conductive material 19. The conductive material 19 may comprise any suitable electrically conductive composition; such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.). In some embodiments, the digit line 16 may comprise, consist essentially of, or consist of one or both of tungsten and ruthenium.

The access device 14 includes a pillar 20 of semiconductor oxide 22. The semiconductor oxide may comprise any suitable composition; and in some embodiments may include one or more of indium, zinc, tin, and gallium. For example, the semiconductor oxide may comprise, consist essentially of, or consist of a composition having oxygen in combination with indium, zinc, and gallium. The indium, zinc, and gallium may be considered to be the metals of such composition. The stoichiometry of the composition may be expressed in terms of metal atomic percent. Specifically, the content of each of the metals of the semiconductor oxide may be specified relative to the total concentration of all of the metals within the semiconductor oxide, with the oxygen concentration being ignored. In some example embodiments, the semiconductor oxide 22 may include indium with a metal atomic percentage in the range of about 14 to about 24, gallium with a metal atomic percentage in the range of about 37 to about 47, and zinc with a metal atomic percentage in the range of about 35 to about 45.
In some example embodiments, the metal atomic percentage of the indium can be in the range of about 16 to about 22, the metal atomic percentage of the gallium can be in the range of about 39 to about 45, and the metal atomic percentage of the zinc can be in the range of about 37 to about 43. It is noted that even a small change in the stoichiometry of the semiconductor oxide may substantially change the physical properties of the semiconductor oxide. Accordingly, it may be advantageous to carefully control the metal content within the semiconductor oxide.

In the illustrated embodiment, the pillar 20 of semiconductor oxide extends vertically; or, in other words, extends along a second axis 7 that is substantially orthogonal to the first axis 5. The term "substantially orthogonal" means orthogonal to within reasonable tolerances of fabrication and measurement.

The semiconductor oxide pillar 20 has opposing sidewall surfaces 23 and 25 along the cross-section of FIG. 1. The sidewall surface 23 may be referred to as a first sidewall surface, and the sidewall surface 25 may be referred to as a second sidewall surface.

The access device 14 includes an insulating material 24 along, and directly against, the semiconductor oxide 22 (i.e., adjacent the semiconductor oxide 22). The insulating material 24 may comprise any suitable composition. For example, in some embodiments the insulating material 24 may include one or more high-k materials, where the term "high-k" means a dielectric constant greater than that of silicon dioxide. For example, the insulating material 24 may include one or more metal oxides; and in some embodiments may comprise, consist essentially of, or consist of one or more of aluminum oxide, hafnium dioxide, zirconium oxide, titanium oxide, etc. In some embodiments, the insulating material 24 may be referred to as insulative gate oxide or gate dielectric.

In the illustrated embodiment, a first region 26 of the insulating material 24 is along the first sidewall surface 23 of the pillar 20, and a second region 28 of the insulating material 24 is along the second sidewall surface 25 of the pillar 20.

The access device 14 also includes a gate material 30 along, and directly against, the insulating material 24. The gate material 30 may comprise any suitable electrically conductive composition; such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.). In some embodiments, the gate material 30 may include one or both of tungsten and titanium nitride.

In the illustrated embodiment, a first region 32 of the gate material 30 is along the first region 26 of the insulating material 24, and a second region 34 of the gate material 30 is along the second region 28 of the insulating material. In some embodiments, the gate material 30 can be considered to be separated from the semiconductor oxide 22 by the insulating material 24.
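By way of illustration, the "metal atomic percent" convention described above (each metal's share of the total metal content, with the oxygen concentration ignored) can be computed as in the following minimal Python sketch; the function name and the example composition values are hypothetical and serve only to illustrate the arithmetic.

def metal_atomic_percent(atoms):
    """Return each metal's percentage of the total metal content, ignoring oxygen."""
    metals = {element: amount for element, amount in atoms.items() if element != "O"}
    total = sum(metals.values())
    return {element: 100.0 * amount / total for element, amount in metals.items()}

# Hypothetical In-Ga-Zn oxide composition (relative atomic amounts).
composition = {"In": 1.9, "Ga": 4.2, "Zn": 4.0, "O": 13.0}
print(metal_atomic_percent(composition))
# -> approximately {'In': 18.8, 'Ga': 41.6, 'Zn': 39.6}, which falls within the
#    example ranges given above (In about 14-24, Ga about 37-47, Zn about 35-45).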
In some embodiments, there may be additional compositions between the semiconductor oxide and the gate material (e.g., additional insulative compositions), and accordingly the gate material may be considered to be separated from the semiconductor oxide by at least the insulating material 24.

The gate material 30 is supported over an insulating material 36. The insulating material 36 may comprise any suitable composition; and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. In some embodiments, the insulating material 36 may be omitted.

The pillar 20 of the semiconductor oxide 22 extends between a first conductive contact 37 and a second conductive contact corresponding to the digit line 16. The first conductive contact 37 may comprise any suitable electrically conductive composition; such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.). In some embodiments, the first conductive contact 37 may comprise, consist essentially of, or consist of one or both of tungsten and ruthenium.

The conductive contact 37 is coupled with a charge storage device 38; in the shown embodiment, the charge storage device is a capacitor. In other embodiments, the charge storage device may have other configurations; and may, for example, include phase change material, conductive bridging material, etc.

The capacitor 38 has a node coupled with a reference voltage 40. Such reference voltage may be ground, Vcc/2, or any other suitable reference voltage.

The gate material 30 may be coupled with a word line WL1, and the digit line 16 may correspond to a digit line DL1. In operation, a voltage applied to the word line WL1 establishes an electric field along the first region 32 and the second region 34 of the gate material 30. Such electric field induces carrier flow within a channel region comprised by the semiconductor oxide, with such carrier flow extending between the digit line 16 and the conductive contact 37. The carrier flow is illustrated by arrows 42 and 44, and extends along the vertical direction of the pillar 20 (i.e., the direction along the second axis 7).

In the illustrated embodiment, the semiconductor oxide 22 is polycrystalline. Individual grains of the polycrystalline material are bounded by grain boundaries; dashed lines 46 illustrate the grain boundaries. The crystal grains may have any suitable grain size; and in some embodiments the average grain size may be within a range of from about 1 nanometer (nm) to about 100 nm; within a range of from about 1 nm to about 50 nm; within a range of from about 20 nm to about 25 nm; etc. Any suitable method may be used to determine the average grain size. The crystallinity may be cubic crystallinity (i.e., may have a cubic unit cell, and may comprise a cubic crystal system). In some embodiments, individual crystalline grains may be referred to as being dominated by cubic crystallinity, which means that the crystallinity is substantially cubic, and may or may not be entirely cubic throughout the whole grain. The term "substantially cubic" means cubic to within reasonable tolerances.
In some embodiments, the polycrystalline material may be said to have mainly cubic crystallinity, which means that more than 50 volume percent of the polycrystalline material has cubic crystallinity (or at least substantially cubic crystallinity). In some embodiments, the content of cubic crystallinity (or substantially cubic crystallinity) within the polycrystalline material may be greater than 70 volume percent, greater than 90 volume percent, greater than 95 volume percent, etc.

The direction of the carrier flow (indicated by the arrows 42 and 44) crosses grain boundaries of the polycrystalline material 22. In other words, one or more of the grain boundaries have portions (e.g., an illustrated portion 47) which extend along a direction crossing the direction of the carrier flow. In some embodiments, the direction of the carrier flow may be referred to as a first direction, and the direction of the grain boundary may be referred to as a second direction. An advantage of having the carriers flow across one or more of the grain boundaries of the semiconductor oxide 22 may be that this enables the carrier flow to be modified by adjusting the number of grain boundaries per unit length of the semiconductor oxide. Accordingly, the carrier flow may be tailored for a particular application by tailoring the grain size of the semiconductor oxide 22 (a brief numeric sketch of this relationship follows below).

With reference to FIG. 2, this figure illustrates a region of an integrated assembly 10a that includes another example memory cell 12a having an example access device (transistor) 14a. Where appropriate, the assembly 10a is described with the same numbering as is used above in describing the assembly 10 of FIG. 1.

The transistor 14a is above the digit line 16, which in turn is supported by the substrate 18.

The substrate 18 includes the horizontally extending upper surface 17, with the upper surface extending along the first direction of the axis 5.

The access device 14a includes a pillar 20a of a semiconductor oxide 22a. The semiconductor oxide may comprise any suitable composition; and in some embodiments may include one or more of indium, zinc, tin, and gallium. For example, the semiconductor oxide may comprise, consist essentially of, or consist of a composition having oxygen in combination with indium, zinc, and gallium. In some example embodiments, the semiconductor oxide 22a may include indium with a metal atomic percentage in the range of about 16 to about 26, gallium with a metal atomic percentage in the range of about 45 to about 55, and zinc with a metal atomic percentage in the range of about 24 to about 34. In some example embodiments, the metal atomic percentage of the indium can be in the range of about 18 to about 24, the metal atomic percentage of the gallium can be in the range of about 47 to about 53, and the metal atomic percentage of the zinc can be in the range of about 26 to about 32.

In the illustrated embodiment, the pillar 20a of semiconductor oxide extends vertically; or, in other words, extends along the axis 7 that is substantially orthogonal to the axis 5.

The semiconductor oxide pillar 20a has the first sidewall surface 23 and the second sidewall surface 25 opposing one another along the cross-section of FIG. 2.

The access device 14a includes the insulating material 24 along, and directly against, the semiconductor oxide 22a.
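The following Python sketch illustrates the grain-size tailoring noted above under a simplistic assumed model (not taken from this disclosure) in which grains of roughly equal size tile the channel, so that a carrier traversing a channel of length L crosses on the order of L/d boundaries for average grain size d. The function name and the example channel length are hypothetical.

def boundaries_crossed(channel_length_nm, avg_grain_size_nm):
    """Approximate number of grain boundaries crossed end-to-end (assumed linear model)."""
    return channel_length_nm / avg_grain_size_nm

# Example: a hypothetical 100 nm channel with grain sizes in the ranges mentioned above.
for grain_size in (20.0, 25.0, 50.0):
    count = boundaries_crossed(100.0, grain_size)
    print(f"average grain size {grain_size} nm -> ~{count:.1f} boundaries crossed")

Under this assumed model, coarsening the grains reduces the number of boundaries encountered per unit length, which is the knob by which the carrier flow may be tailored.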
The first region 26 of the insulating material 24 is along the first sidewall surface 23 of the pillar 20a, and the second region 28 of the insulating material 24 is along the second sidewall surface 25 of the pillar 20a.

The access device 14a also includes the gate material 30 along, and directly against, the insulating material 24. The first region 32 of the gate material 30 is along the first region 26 of the insulating material 24, and the second region 34 of the gate material 30 is along the second region 28 of the insulating material. In some embodiments, the gate material 30 can be considered to be separated from the semiconductor oxide 22a by the insulating material 24.

The gate material 30 is supported over the insulating material 36.

The pillar 20a of the semiconductor oxide 22a extends between the first conductive contact 37 and the second conductive contact corresponding to the digit line 16.

The conductive contact 37 is coupled with the charge storage device 38, which in the illustrated embodiment is a capacitor.

The gate material 30 is coupled with the word line WL1, and the digit line 16 corresponds to the digit line DL1. In operation, a voltage applied to the word line WL1 establishes an electric field along the first region 32 and the second region 34 of the gate material 30. Such electric field induces carrier flow within a channel region comprised by the semiconductor oxide 22a, with such carrier flow extending between the digit line 16 and the conductive contact 37. The carrier flow is illustrated by the arrows 42 and 44, and extends along the vertical direction of the pillar 20a.

In the illustrated embodiment, the semiconductor oxide 22a is configured such that a grain boundary 46a extends along the vertical direction of the axis 7 and traverses the entire length of the semiconductor oxide 22a, from the digit line 16 to the conductive contact 37. The grain boundary 46a is offset from the first region 26 of the insulating material 24 through a first intermediate region 50 of the semiconductor oxide 22a, and is offset from the second region 28 of the insulating material 24 through a second intermediate region 52 of the semiconductor oxide 22a. In the embodiment of FIG. 2, the grain boundary 46a is shown to be wavy. In other embodiments, the grain boundary may be substantially straight, or may have other configurations; in any event, it will extend substantially vertically along the pillar 20a. The semiconductor oxide 22a may have cubic crystallinity.

The carrier flow (indicated by the arrows 42 and 44) within the semiconductor oxide 22a is within the intermediate regions 50 and 52, and is primarily along (i.e., substantially parallel to) the vertically extending grain boundary 46a; in some embodiments it does not cross the grain boundary 46a. The term "substantially parallel" means along the same general direction as the grain boundary and, in some embodiments, parallel to within reasonable tolerances of measurement. The intermediate regions 50 and 52 can be highly uniform in physical and chemical properties. An advantage of having the carriers flow through the intermediate regions 50 and 52 of the semiconductor oxide 22a may be that this enables the carrier flow to be uniform across a large number of substantially identical access devices 14a.

In some embodiments, the memory cells 12 and 12a of FIGS. 1 and 2 may be representative of memory cells incorporated into a memory array (with each such cell being uniquely addressed by one word line and one digit line, as described below with reference to FIG. 3).
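A minimal Python sketch of that row/column addressing follows; the class and method names are hypothetical, and the sketch merely illustrates that asserting one word line and sensing one digit line selects exactly one cell.

class MemoryArray:
    """Hypothetical model: word lines as rows, digit lines as columns."""

    def __init__(self, num_word_lines, num_digit_lines):
        # One stored value (e.g., a capacitor charge state) per memory cell.
        self.cells = [[0] * num_digit_lines for _ in range(num_word_lines)]

    def access(self, word_line, digit_line):
        """The (word line, digit line) pair uniquely addresses one cell."""
        return self.cells[word_line][digit_line]

array = MemoryArray(num_word_lines=2, num_digit_lines=2)  # WL1/WL2 and DL1/DL2
print(array.access(0, 1))  # the single cell at the WL1/DL2 intersection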
All of the memory cells within a given memory array may be substantially identical to one another; with the term "substantially identical" meaning identical to within reasonable tolerances of fabrication and measurement. FIG. 3 shows a region of an example memory array 54. The memory array includes word lines WL1 and WL2, and digit lines DL1 and DL2. The memory array also includes a plurality of the memory cells 12 or 12a. The word lines may be regarded as extending along rows of the memory array, and the digit lines may be regarded as extending along columns of the memory array. Each of the memory cells is uniquely addressed with one of the word lines and one of the digit lines. The illustrated memory array is a dynamic random access memory (DRAM) array. In other embodiments, transistors of the types described above with reference to FIGS. 1 and 2 (transistors 14 and 14a) may be utilized in other types of memory arrays. Additionally, or alternatively, the transistors may be utilized in other circuitry; for example, logic, sensors, etc.

The transistors 14 and 14a of FIGS. 1 and 2 are shown having vertically extending pillars of semiconductor oxide, and having carrier flow which extends vertically along such pillars. In other embodiments, analogous transistors may have other configurations. For example, FIGS. 4 and 5 show transistors configured for horizontal carrier flow.

Referring to FIG. 4, a region of an integrated assembly 10b is shown to include a transistor 14b. The transistor 14b includes the semiconductor oxide 22 of the type described above with reference to FIG. 1. Such semiconductor oxide extends horizontally; and specifically extends along the same direction as the horizontally extending upper surface 17 of the substrate 18 (i.e., the direction of the axis 5).

The semiconductor oxide 22 is supported by an insulating material 56. Such insulating material may comprise any suitable composition; and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon dioxide and silicon nitride.

The semiconductor oxide 22 extends between a first contact 58 and a second contact 60. The first and second contacts 58 and 60 may comprise any suitable electrically conductive compositions; such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively doped semiconductor materials (e.g., conductively doped silicon, conductively doped germanium, etc.).

The insulating material 24 is over the semiconductor oxide 22, and the gate material 30 is over the insulating material 24.

In operation, an electric field along the gate material 30 induces carrier flow within a channel region of the semiconductor oxide 22. The carrier flow is indicated by the arrow 42, and in the illustrated embodiment extends substantially parallel to the horizontally extending upper surface 17 of the substrate 18 (i.e., extends along the axis 5).

Referring to FIG. 5, a region of an integrated assembly 10c is shown to include a transistor 14c. The transistor 14c includes the semiconductor oxide 22a of the type described above with reference to FIG. 2.
Such semiconductor oxide extends horizontally; and specifically extends along the same direction as the horizontally extending upper surface 17 of the substrate 18 (i.e., the direction of the axis 5).

The semiconductor oxide 22a is supported by the insulating material 56, and extends between the first contact 58 and the second contact 60.

The insulating material 24 is over and under the semiconductor oxide 22a, and the gate material 30 is over and under the insulating material 24. Accordingly, in the embodiment of FIG. 5 (i.e., the assembly 10c) the semiconductor oxide 22a is vertically between upper and lower regions of the gate material 30. This is in contrast to the embodiment of FIG. 4 (i.e., the assembly 10b), which has only a single region of the gate material 30 (specifically, the region of the gate material 30 over the semiconductor oxide). In some embodiments, the semiconductor oxide 22a of the assembly 10c may be adjacent only a single region of the gate material 30, analogously to the embodiment of FIG. 4; and in some embodiments the semiconductor oxide 22 of FIG. 4 (the assembly 10b) may be provided between upper and lower regions of gate material, analogously to the embodiment of FIG. 5.

Still referring to the embodiment of FIG. 5, an electric field along the gate material 30 induces carrier flow within a channel region of the semiconductor oxide 22a. The carrier flow is indicated by the arrows 42 and 44, and in the illustrated embodiment extends substantially parallel to the horizontally extending upper surface 17 of the substrate 18 (i.e., extends along the axis 5).

It is noted that the embodiment described above with reference to FIG. 2 shows a single vertically extending grain boundary along an approximate center of the semiconductor oxide pillar 20a. In some embodiments, such grain boundary is produced by recrystallization within the semiconductor oxide 22a which propagates inwardly from the sidewall surfaces 23 and 25 of the pillar 20a. Accordingly, a structure similar to that of FIG. 2 may be formed, but in which the grain boundaries extending inwardly from the surfaces 23 and 25 have not yet merged into a single grain boundary extending down the center of the pillar 20a; instead, there may be a pair of grain boundaries extending vertically along the pillar 20a, as shown in FIG. 6. Specifically, FIG. 6 shows an integrated assembly 10d that includes a transistor 14d similar to the transistor 14a of FIG. 2; however, the transistor 14d includes two grain boundaries 46b and 46c extending vertically along the pillar 20a, rather than the single grain boundary 46a shown in FIG. 2. The intermediate regions 50 and 52 are between the surface 23 and the grain boundary 46b, and between the surface 25 and the grain boundary 46c, respectively. Such intermediate regions comprise the channel regions of the transistor, and the carrier flow (indicated by the arrows 42 and 44) extends vertically along such channel regions.

FIGS. 7 and 8 show a pair of top-down views along the line A-A of FIG. 6 to illustrate alternative configurations of the transistor 14d. It is noted that the side view of FIG. 6 is along the line B-B of FIGS. 7 and 8.

Referring to FIG. 7, the gate material 30 and the insulating material 24 are along two opposing sides of the pillar 20a of the semiconductor oxide 22a, and an insulating material 62 is along the other two opposing sides of the pillar 20a.
The insulating material 62 may comprise any suitable composition; and in some embodiments may comprise, consist essentially of, or consist of one or both of silicon dioxide and silicon nitride. The grain boundary regions 46b and 46c are parallel to the opposing sidewall surfaces 23 and 25.

Referring to FIG. 8, in a surrounding-gate configuration, the insulating material 24 and the gate material 30 completely surround the pillar 20a. The grain boundaries 46b and 46c are portions of a continuous grain boundary structure within the semiconductor oxide 22a. In the illustrated embodiment, the grain boundary structure is polygonal (specifically, substantially square), and is conformal to the configuration of the gate material 30 extending around the pillar 20a. It is noted that there may be multiple grain boundaries, or at least one grain boundary; that in some embodiments the crystal grains may be regarded as columnar; and that the crystal grains may or may not extend all the way down to an underlying "substrate" corresponding to the conductive material 19.

FIG. 8A shows a top-down view of an assembly alternative to that of FIG. 8, and shows a region of a transistor 14e. Grain boundaries 46 are vertically oriented, similar to the boundaries 46b/46c of FIGS. 6 and 8, and are along columnar grain structures 43. In some embodiments there may be a plurality of the vertically oriented grain boundaries 46 extending within the semiconductor oxide 22a, and in some embodiments there may be at least one vertically oriented grain boundary 46 extending within the semiconductor oxide 22a.

Any suitable methods may be used to form the structures described above. Example methods are described with reference to FIGS. 9 to 21; with FIGS. 9 to 14 illustrating an example method for forming the transistor 14 of FIG. 1, and FIGS. 15 to 21 illustrating an example method for forming the transistor 14a of FIG. 2. The substrate 18 is not shown in FIGS. 9 to 21 in order to simplify the drawings.

Referring to FIG. 9, fabrication of the integrated assembly 10 of FIG. 1 begins with provision of the conductive material 19 of the digit line 16. In some embodiments, the conductive material 19 may have an upper surface which comprises, consists essentially of, or consists of one or both of tungsten and ruthenium. The remainder of the conductive material 19 may be the same composition as the upper surface, or may be a different composition relative to the upper surface.

Referring to FIG. 10, the semiconductor oxide 22 is deposited over the conductive material 19; and in the shown embodiment is deposited directly onto the conductive material 19. The semiconductor oxide 22 may be deposited with any suitable processing, under any suitable conditions. In some embodiments, the deposition may utilize one or more of atomic layer deposition (ALD), chemical vapor deposition (CVD), and physical vapor deposition (PVD). In an example embodiment, the deposition of the semiconductor oxide 22 may utilize PVD, and may be conducted within a chamber utilizing an environment within the chamber, with the environment having a temperature in the range of about 20°C to about 500°C and a pressure in the range of about 1 millitorr (mTorr) to about 9 mTorr. In some embodiments, the temperature of the environment may be in the range of about 80°C to about 150°C.
The semiconductor oxide 22 of FIG. 10 may include any of the compositions described above with reference to FIG. 1. In some embodiments, the semiconductor oxide may include indium, gallium, and zinc. In such embodiments, the physical vapor deposition of the semiconductor oxide may utilize multiple targets to achieve desired concentrations of the indium, gallium, and zinc; or may utilize a single target having the desired concentrations.

The deposited semiconductor oxide 22 is polycrystalline (with the grain boundaries being illustrated by the dashed lines 46).

FIG. 10A shows an integrated assembly 10e alternative to the assembly 10 of FIG. 10. The assembly 10e has the vertically oriented grain boundaries of FIG. 8A, and has the columnar grain structures 43. There may be an amorphous region 70 of the semiconductor oxide 22 beneath the crystalline grains 43. Such amorphous region may have any suitable thickness; including a thickness of, for example, about Å. The grains 43 grow along a grain-growth direction indicated by the arrow 73 of FIG. 10A; and may be grown utilizing an anneal during and/or after the deposition of the semiconductor oxide 22. A region 71 may correspond to a crystal nucleation region. In some embodiments, the crystalline grains 43 may be considered to be produced by double-sided growth, in which thickness increases along the growth direction 73.

FIG. 11 shows the assembly 10 at a process stage following that of FIG. 10. The semiconductor oxide 22 is patterned into a vertically extending structure corresponding to the pillar 20. Such structure has the opposing sidewall surfaces 23 and 25 along the cross-section of FIG. 11. In some embodiments, the grain structure may comprise vertical columns in the as-deposited state (for example, columnar grains analogous to those of FIG. 10A).

Referring to FIG. 12, the insulating material 24 is formed along the opposing sidewalls 23 and 25 of the pillar 20, and over the pillar. The insulating material 24 includes the first region 26 along the sidewall surface 23, and the second region 28 along the sidewall surface 25.

The insulating material 36 is formed over the insulating material 24; and the gate material 30 is formed over the insulating material 36 and over the pillar 20. The gate material 30 includes the first region 32 along the first region 26 of the insulating material 24, and includes the second region 34 along the second region 28 of the insulating material 24. In some embodiments, the insulating material 36 may be omitted.

Referring to FIG. 13, the materials 24, 36, and 30 are patterned. The patterning may include any suitable combination of masking and etching. Such patterning removes the materials 30 and 24 from over an upper surface 63 of the pillar 20. The assembly 10 of FIG. 13 may be placed within a chamber and subjected to an anneal while the upper surface 63 is exposed to a desired environment. For example, in some embodiments the upper surface 63 may be exposed to an oxidizing environment (e.g., an environment including one or both of O2 and O3) to replenish oxygen which may have been lost from the semiconductor oxide 22 during the patterning of the materials 24, 30, and 36. The anneal may be conducted at any suitable temperature (e.g., a temperature of at least about 400°C) for any suitable duration (e.g., a duration greater than about 30 minutes).
The temperature may be the temperature of the environment within the chamber during the anneal, may be the temperature of a chuck or other structure holding the assembly 10 within the chamber, and/or may be the temperature of the pillar 20 of the semiconductor oxide 22. The anneal can enable chemical components within the semiconductor oxide 22 to redistribute so that the composition of the semiconductor oxide 22 becomes more uniform than it was prior to the anneal, and can enable the grain size and the like within the semiconductor oxide 22 to be adjusted.

Referring to FIG. 14, the conductive contact 37 is formed over the upper surface 63 of the pillar 20 to complete fabrication of the transistor 14, with such transistor being identical to that described above with reference to FIG. 1. Any suitable processing may be used to form and pattern the conductive contact 37. In some embodiments, the conductive material 37 is deposited over the material 22 at the process stage of FIG. 10, and is then patterned together with the material 22.

Referring to FIG. 15, fabrication of the integrated assembly 10a of FIG. 2 begins with provision of the conductive material 19 of the digit line 16. In some embodiments, the conductive material 19 may have an upper surface which comprises, consists essentially of, or consists of one or both of tungsten and ruthenium. The remainder of the conductive material 19 may be the same composition as the upper surface, or may be a different composition relative to the upper surface.

Referring to FIG. 16, the semiconductor oxide 22a is deposited over the conductive material 19; and in the shown embodiment is deposited directly onto the conductive material 19. The semiconductor oxide 22a may be deposited with any suitable processing, under any suitable conditions; and in some embodiments the deposition may utilize one or more of ALD, CVD, and PVD. In an example embodiment, the deposition of the semiconductor oxide 22a may utilize PVD, and may be conducted within a chamber utilizing an environment within the chamber, with the environment having a temperature in the range of about 20°C to about 500°C and a pressure in the range of about 1 mTorr to about 9 mTorr. In some embodiments, the temperature of the environment may be in the range of about 80°C to about 150°C.

The semiconductor oxide 22a of FIG. 16 may include any of the compositions described above with reference to FIG. 2. In some embodiments, the semiconductor oxide may include indium, gallium, and zinc. In such embodiments, the physical vapor deposition of the semiconductor oxide may utilize multiple targets to achieve desired concentrations of the indium, gallium, and zinc; or may utilize a single target having the desired concentrations.

The deposited semiconductor oxide 22a may or may not be crystalline; and in some embodiments may be polycrystalline and/or amorphous. Grain boundaries are not shown relative to the process stage of FIG. 16.

Referring to FIG. 17, the semiconductor oxide 22a is patterned into a vertically extending structure corresponding to the pillar 20a. Such structure has the opposing sidewall surfaces 23 and 25 along the cross-section of FIG. 17.

Referring to FIG. 18, the insulating material 24 is formed along the opposing sidewalls 23 and 25 of the pillar 20a, and over the pillar.
The insulating material 24 includes the first region 26 along the sidewall surface 23, and the second region 28 along the sidewall surface 25.

The insulating material 36 is formed over the insulating material 24; and the gate material 30 is formed over the insulating material 36 and over the pillar 20a. The gate material 30 includes the first region 32 along the first region 26 of the insulating material 24, and includes the second region 34 along the second region 28 of the insulating material 24.

Referring to FIG. 19, the materials 24, 36, and 30 are patterned. The patterning may include any suitable combination of masking and etching. Such patterning removes the materials 30 and 24 from over an upper surface 65 of the pillar 20a. The assembly 10a of FIG. 19 may be placed within a chamber and subjected to an anneal while the upper surface 65 (i.e., top portion 65) is exposed to a desired environment. For example, in some embodiments the upper surface 65 may be exposed to an oxidizing environment (e.g., an environment including one or both of O2 and O3) to replenish oxygen which may have been lost from the semiconductor oxide 22a during the patterning of the materials 24, 30, and 36. In other embodiments, the upper surface 65 may be exposed to a reducing environment (e.g., an environment including a reducing agent; for example, an environment including H2). In yet other embodiments, the environment may consist of gas which is inert relative to reaction with the exposed top portion of the semiconductor oxide 22a (for example, the environment may consist of one or both of argon and N2).

The anneal may be conducted at any suitable temperature (e.g., a temperature of at least about 400°C) for any suitable duration (e.g., a duration greater than about 30 minutes). The temperature may be the temperature of the environment within the chamber during the anneal, may be the temperature of a chuck or other structure holding the assembly 10a within the chamber, and/or may be the temperature of the pillar 20a of the semiconductor oxide 22a. In some embodiments, the anneal may be conducted while the temperature of the semiconductor oxide is maintained within a range of about 400°C to about 600°C for a duration within a range of about 30 minutes to about one day; for example, for a duration within a range of about 30 minutes to about 10 hours.

The anneal may crystallize and/or recrystallize the semiconductor oxide 22a to form at least one grain boundary 46a (or "seam") extending vertically through the semiconductor oxide 22a, as shown in FIG. 20. In the illustrated embodiment, the grain boundary 46a traverses the entire length of the vertically extending pillar 20a, from the top surface 65 to the conductive material 19. The grain boundary is offset from the first and second surfaces 23 and 25 of the pillar 20a through the intermediate regions 50 and 52. In the illustrated embodiment, such intermediate regions have substantially the same width as one another along the horizontal direction. In other embodiments, one of the intermediate regions may be wider than the other.

In understanding some of the embodiments described herein, it may be useful to provide a brief description of possible mechanisms. However, the claims that follow are not limited to any specific mechanism unless such mechanism (if any) is expressly recited in such claims.
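The deposition and anneal passages above specify numeric process windows (PVD at about 20°C to about 500°C and about 1 mTorr to about 9 mTorr; the post-gate anneal at about 400°C to about 600°C for about 30 minutes to about one day). The following minimal Python sketch encodes those windows as data and checks a candidate recipe against them; the checking helper and the recipe values are hypothetical illustrations under those stated ranges, not a description of the disclosed methods themselves.

# Window values are taken from the description above; the helper function
# and the example recipes are hypothetical.

DEPOSITION_WINDOW = {"temp_C": (20.0, 500.0), "pressure_mTorr": (1.0, 9.0)}
ANNEAL_WINDOW = {"temp_C": (400.0, 600.0), "duration_min": (30.0, 24.0 * 60.0)}

def in_window(recipe, window):
    """True if every parameter of the recipe falls within its stated range."""
    return all(low <= recipe[name] <= high for name, (low, high) in window.items())

deposition = {"temp_C": 120.0, "pressure_mTorr": 5.0}  # within the narrower 80-150C example
anneal = {"temp_C": 450.0, "duration_min": 120.0}

print(in_window(deposition, DEPOSITION_WINDOW))  # True
print(in_window(anneal, ANNEAL_WINDOW))          # True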
It is believed that the vertically extending grain boundary 46a may be produced by recrystallization of the semiconductor oxide 22a, with such recrystallization propagating inwardly from the surfaces adjacent the insulating material 24 toward the center of the pillar 20a. The grain boundary 46a is clearly visible in cross-sections of structures formed in accordance with the processes described herein. Although other, smaller grain boundaries may also be present, they are far less dominant than the grain boundary 46a. In some embodiments, the grain boundary 46a may be referred to as a primary grain boundary to indicate that, to the extent other grain boundaries are present, such other grain boundaries are far less dominant than the grain boundary 46a.

Referring to FIG. 21, the conductive contact 37 is formed over the upper surface 65 of the pillar 20a to complete fabrication of the transistor 14a, with such transistor being identical to that described above with reference to FIG. 2. Any suitable processing may be used to form and pattern the conductive contact 37.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term "integrated circuit" meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chipsets, set-top boxes, games, lighting systems, vehicles, clocks, televisions, cellular phones, personal computers, automobiles, industrial control systems, airplanes, etc.

Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, ALD, CVD, PVD, etc.

The terms "dielectric" and "insulative" may be utilized to describe materials having insulative electrical properties. The terms are considered synonymous in this disclosure. The utilization of the term "dielectric" in some instances and the term "insulative" (or "electrically insulative") in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The particular orientations of the various embodiments in the drawings are for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings or are rotated relative to such orientation.

The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings.

When a structure is referred to above as being "on another structure," "adjacent to another structure," or "against another structure," it can be directly on the other structure, or intervening structures may also be present.
Conversely, when a structure is referred to as being "directly on another structure," "directly adjacent to another structure," or "directly abutting another structure," there are no intervening structures present. The terms "directly under," "directly over," etc. do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment.

A structure (e.g., a layer, a material, etc.) may be referred to as "extending vertically" to indicate that the structure generally extends upwardly from an underlying base (e.g., substrate). The vertically extending structure may or may not extend substantially orthogonally relative to an upper surface of the base.

Some embodiments include an integrated assembly having a gate material, an insulating material along the gate material, and a semiconductor oxide along (adjacent to) the insulating material. The semiconductor oxide has a channel region proximate the gate material and spaced from the gate material by at least the insulating material. Carrier flow within the channel region is induced in response to an electric field along the gate material, with the carrier flow being along a first direction. The semiconductor oxide is polycrystalline, with individual crystal grains of the polycrystalline semiconductor oxide being peripherally bounded by grain boundaries. At least one of the grain boundaries has a portion extending along a second direction, with the second direction crossing the first direction of the carrier flow.

Some embodiments include an integrated assembly having a gate material, an insulating material along (adjacent to) the gate material, and a semiconductor oxide along the insulating material. The semiconductor oxide has a channel region proximate the gate material and spaced from the gate material by at least the insulating material. Carrier flow within the channel region is induced in response to an electric field along the gate material, with the carrier flow being along a first direction. The semiconductor oxide has at least one grain boundary which extends along the first direction and which is offset from the insulating material through an intermediate portion of the semiconductor oxide. The carriers flow within the intermediate portion and substantially parallel to the at least one grain boundary.

Some embodiments include an integrated assembly having a semiconductor oxide extending along a vertical direction between a first conductive contact and a second conductive contact. The semiconductor oxide has first and second sidewall surfaces opposing one another along a cross-section. A first region of an insulating material is along the first sidewall surface, and a second region of the insulating material is along the second sidewall surface. A first region of a gate material is along the first region of the insulating material and is spaced from the first sidewall surface by at least the first region of the insulating material, and a second region of the gate material is along the second region of the insulating material and is spaced from the second sidewall surface by at least the second region of the insulating material. An electric field along the first and second regions of the gate material induces carrier flow within the semiconductor oxide, with the carrier flow being along a first direction corresponding to the vertical direction of the semiconductor oxide. The semiconductor oxide is polycrystalline.
Individual crystal grains of the polycrystalline semiconductor oxide are peripherally bounded by grain boundaries. At least one of the grain boundaries has a portion extending along a second direction, with the second direction crossing the first direction of the carrier flow.

Some embodiments include an integrated assembly having a semiconductor oxide extending along a vertical direction between a first conductive contact and a second conductive contact. The semiconductor oxide has first and second sidewall surfaces opposing one another along a cross-section. A first region of an insulating material is along the first sidewall surface, and a second region of the insulating material is along the second sidewall surface. A first region of a gate material is along the first region of the insulating material and is spaced from the first sidewall surface by at least the first region of the insulating material, and a second region of the gate material is along the second region of the insulating material and is spaced from the second sidewall surface by at least the second region of the insulating material. A grain boundary is within the semiconductor oxide and extends along the vertical direction. The grain boundary traverses an entire length of the semiconductor oxide, from the first contact to the second contact. The grain boundary is offset from the first region of the insulating material through a first intermediate portion of the semiconductor oxide, and is offset from the second region of the insulating material through a second intermediate portion of the semiconductor oxide. Carrier flow within the semiconductor oxide is induced in response to an electric field along the first and second regions of the gate material, with the carrier flow being along the vertical direction of the semiconductor oxide. The carriers within the semiconductor oxide flow within the intermediate portions and substantially parallel to the grain boundary.

Some embodiments include a method of forming an integrated assembly. A semiconductor oxide is deposited over a conductive material. The semiconductor oxide includes indium, gallium, and zinc. The deposition is physical vapor deposition, and is conducted within a chamber utilizing an environment within the chamber, with the environment having a temperature within a range of about 20°C to about 500°C and a pressure within a range of about 1 mTorr to about 9 mTorr. The deposited semiconductor oxide is polycrystalline. The deposited semiconductor oxide is patterned into a vertically extending structure. The vertically extending structure has first and second sidewall surfaces opposing one another along a cross-section. An insulating material is formed along the opposing first and second sidewall surfaces. A first region of the insulating material is along the first sidewall surface, and a second region of the insulating material is along the second sidewall surface. A gate material is formed along the insulating material. A first region of the gate material is along the first region of the insulating material, and a second region of the gate material is along the second region of the insulating material. The semiconductor oxide, the first and second regions of the insulating material, and the first and second regions of the gate material together form a transistor.
The transistor is configured such that an electric field along the first and second regions of the gate material induces carrier flow within the semiconductor oxide, with the carrier flow being along a first direction corresponding to the vertical direction of the semiconductor oxide. The individual crystal grains of the polycrystalline semiconductor oxide are peripherally delimited by grain boundaries. At least one of the grain boundaries has a portion extending along a second direction, where the second direction crosses the first direction of carrier flow.

Some embodiments include a method of forming an integrated assembly. A semiconductor oxide is deposited over a support material. The semiconductor oxide includes indium, gallium, and zinc. The deposited semiconductor oxide is patterned into a vertically extending structure. The vertically extending structure has first and second sidewall surfaces in opposing relation to one another along a cross-section. Insulating material is formed along the opposing first and second sidewall surfaces, with a first region of the insulating material being along the first sidewall surface and a second region of the insulating material being along the second sidewall surface. Gate material is formed along the insulating material, with a first region of the gate material being along the first region of the insulating material and a second region of the gate material being along the second region of the insulating material. After the insulating material is formed, the semiconductor oxide is annealed under conditions which maintain a temperature of the semiconductor oxide within a range of about 400° C. to about 600° C. for a duration within a range of about 30 minutes to about 1 day. After the annealing, a grain boundary is within the semiconductor oxide and extends along the vertical direction. The grain boundary traverses an entire length of the semiconductor oxide from an upper surface of the semiconductor oxide to a lower surface of the semiconductor oxide. The grain boundary is offset from the first region of the insulating material by a first intervening portion of the semiconductor oxide, and is offset from the second region of the insulating material by a second intervening portion of the semiconductor oxide. The semiconductor oxide, the first and second regions of the insulating material, and the first and second regions of the gate material together form a transistor. The transistor is configured such that an electric field along the first and second regions of the gate material induces carrier flow within the semiconductor oxide, with the carrier flow being along a first direction corresponding to the vertical direction of the semiconductor oxide. The carriers within the semiconductor oxide flow within the first and second intervening portions and substantially parallel to the grain boundary.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
In one embodiment, a processor includes at least one execution unit and Return Oriented Programming (ROP) detection logic. The ROP detection logic may determine a ROP metric based on a plurality of control transfer events. The ROP detection logic may also determine whether the ROP metric exceeds a threshold. The ROP detection logic may also, in response to a determination that the ROP metric exceeds the threshold, provide a ROP attack notification.
What is claimed is:
1. A processor comprising: at least one execution unit; and a Return Oriented Programming (ROP) detection logic to: determine a ROP metric based on a plurality of control transfer events, determine whether the ROP metric exceeds a threshold, and in response to a determination that the ROP metric exceeds the threshold, provide a ROP attack notification.
2. The processor of claim 1, wherein the ROP detection logic is to determine the ROP metric based on a count, wherein the count is based on the plurality of control transfer events.
3. The processor of claim 2, wherein the ROP detection logic is to increment the count based on an instance of a subroutine return instruction.
4. The processor of claim 3, wherein the ROP detection logic is to decrement the count based on an instance of a subroutine call instruction.
5. The processor of claim 2, wherein the ROP detection logic is to increment the count based on a return misprediction.
6. The processor of claim 2, wherein the ROP detection logic is to increment the count based on an instance of a control transfer instruction associated with a stack pop instruction.
7. The processor of claim 2, wherein the ROP detection logic is to increment the count based on an instance of a control transfer instruction associated with an increase in a stack pointer.
8. The processor of claim 1, wherein the ROP attack notification is to trigger a protection application to take one or more actions to address the ROP attack.
9. The processor of claim 1, wherein the ROP detection logic is further to freeze a branch instruction log in response to determining that the ROP metric exceeds the threshold.
10. A processor comprising: an instruction buffer; a branch prediction unit; a Return Oriented Programming (ROP) detection unit comprising: an accumulator to generate a count based on one or more control transfer events, and a comparator to provide a notification of a ROP attack when the count exceeds a threshold during a window.
11. The processor of claim 10, further comprising an instruction detector to detect the execution of a control transfer instruction, wherein the one or more control transfer events comprise the control transfer instruction.
12. The processor of claim 10, further comprising a return stack buffer to detect a return misprediction, wherein the one or more control transfer events comprise the return misprediction.
13. The processor of claim 10, wherein the one or more control transfer events comprise a branch misprediction, and wherein the branch misprediction is detected by the branch prediction unit.
14. The processor of claim 10, wherein the one or more control transfer events comprise pairs of associated instructions.
15. The processor of claim 10, the ROP detection unit further comprising bias logic to reduce at least one bias effect due to a natural imbalance.
16. The processor of claim 10, the ROP detection unit further comprising control logic to adjust the threshold based on a desired level of protection against ROP attacks.
17. A system comprising: a processor including Return Oriented Programming (ROP) detection logic, wherein the ROP detection logic is to determine whether a ROP metric exceeds a threshold during a window, wherein the ROP metric is based at least on one or more control transfer events; and a random access memory coupled to the processor, wherein the random access memory includes an anti-malware application.
18. The system of claim 17, wherein the ROP detection logic is further to, in response to a determination that the ROP metric exceeds the threshold during the window, provide a ROP attack notification to the anti-malware application.
19. The system of claim 18, wherein the anti-malware application is to, in response to the ROP attack notification, initiate one or more actions to halt the ROP attack.
20. The system of claim 17, wherein the window comprises a number of instructions.
21. A method, comprising: detecting, by instruction control logic of a hardware processor, at least one control transfer event; generating, by ROP detection logic of the hardware processor, a ROP metric based on the at least one control transfer event; and upon determining that the ROP metric exceeds a threshold during a window, notifying a protection application of a ROP attack.
22. The method of claim 21, wherein generating the ROP metric comprises incrementing a counter upon detecting a control transfer instruction.
23. The method of claim 21, wherein generating the ROP metric comprises incrementing a counter upon detecting a misprediction.
24. The method of claim 21, further comprising, upon determining that the ROP metric exceeds the threshold during the window, freezing the contents of a branch instruction log.
DETECTION OF RETURN ORIENTED PROGRAMMING ATTACKS

Background

[0001] Embodiments relate generally to computer security.

[0002] Computer exploits are techniques which may be used to compromise the security of a computer system or data. Such exploits may take advantage of a vulnerability of a computer system in order to cause unintended or unanticipated behavior to occur on the computer system. For example, Return Oriented Programming (ROP) exploits may involve identifying a series of snippets of code that are already available in executable memory (e.g., portions of existing library code), and which are followed by a return instruction (e.g., a RET instruction). Such snippets may be chained together into a desired execution sequence by pushing a series of pointer values onto the call stack and then tricking the code into executing the first pointer value. This chained execution sequence does not follow the execution order intended by the original program author, but may instead follow an alternative execution sequence. In this manner, an attacker may create a virtual program sequence without requiring injection of external code.

Brief Description Of The Drawings

[0003] FIGs. 1A-1B are block diagrams of systems in accordance with one or more embodiments.
[0004] FIG. 2 is a sequence in accordance with one or more embodiments.
[0005] FIGs. 3A-3E are sequences in accordance with one or more embodiments.
[0006] FIG. 4 is a block diagram of a processor in accordance with an embodiment of the present invention.
[0007] FIG. 5 is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention.
[0008] FIG. 6 is a block diagram of an embodiment of a processor including multiple cores.
[0009] FIG. 7 is a block diagram of a system in accordance with an embodiment of the present invention.

Detailed Description

[0010] In accordance with some embodiments, detection of Return Oriented Programming (ROP) attacks may be provided. In one or more embodiments, ROP attacks may be detected based on a ROP metric (e.g., a metric indicating the likelihood that a system is under a current ROP attack). In some embodiments, the ROP metric is generated based at least in part on control transfer events, meaning instances of instructions and/or states that are associated with ROP attacks. For example, in some embodiments, control transfer events may include instances of control transfer instructions such as subroutine call instructions, subroutine return instructions, branch or jump instructions, etc. Further, in some embodiments, control transfer events may include pairs of associated instructions (i.e., specific types of instructions executed within a given range of each other). Furthermore, in some embodiments, control transfer events may include branch or return mispredictions.

[0011] Referring to FIG. 1A, shown is a block diagram of a system 100 in accordance with one or more embodiments. As shown in FIG. 1A, the system 100 may include a processor 110 and a memory 120. In accordance with some embodiments, the system 100 may be all or a portion of any electronic device, such as a cellular telephone, a computer, a server, a media player, a network device, etc.

[0012] In accordance with some embodiments, the memory 120 may include an operating system (OS) 122 and protection software 124.
In some embodiments, the OS 122 and/or the protection software 124 may include functionality to protect the system 100 against computer exploits and attacks. For example, the protection software 124 may be an anti-virus application, an intrusion detector, a network firewall, etc.

[0013] As shown, in some embodiments, the processor 110 may include instruction control logic 130 and ROP detection logic 140. In one or more embodiments, the instruction control logic 130 may include functionality to manage and/or optimize the performance characteristics of the processor 110. For example, the instruction control logic 130 may include functionality for performance profiling, logging, prefetching, branch prediction, tuning, etc. Further, in some embodiments, the ROP detection logic 140 may include functionality to detect a ROP attack against the system 100. This functionality of the ROP detection logic 140 is described further below with reference to FIGs. 1B, 2, and 3A-3E.

[0014] In one or more embodiments, the ROP detection logic 140 may be implemented in any form of hardware, software, and/or firmware. For example, the ROP detection logic 140 may be implemented in microcode, programmable logic, hard-coded logic, control logic, instruction set architecture, processor abstraction layer, etc. Further, the ROP detection logic 140 may be implemented within the processor 110, and/or any other component accessible or medium readable by processor 110, such as memory 120. While shown as a particular implementation in the embodiment of FIG. 1A, the scope of the various embodiments discussed herein is not limited in this regard.

[0015] Referring now to FIG. 1B, shown are block diagrams of the instruction control logic 130 and the ROP detection logic 140 in accordance with one or more embodiments. As shown, in some embodiments, the instruction control logic 130 may include an instruction buffer 132, an instruction detector 134, a return stack buffer 136, a branch prediction unit 137, and a branch instruction log 138.

[0016] In one or more embodiments, the instruction buffer 132 may be a buffer including entries corresponding to instructions processed in the processor 110. For example, in some embodiments, the instruction buffer 132 may be an instruction retirement buffer. In such embodiments, as each instruction is completed, a corresponding entry is cleared from the instruction buffer 132. Such instructions may include, e.g., subroutine call instructions (e.g., CALL), subroutine return instructions (e.g., RET), branch or jump instructions (e.g., IF-THEN, JMP, GOTO), stack instructions (e.g., PUSH, POP), etc.

[0017] In one or more embodiments, the instruction detector 134 includes functionality to detect entries of the instruction buffer 132 that correspond to control transfer instructions associated with ROP attacks. For example, in some embodiments, the instruction detector 134 may detect entries corresponding to subroutine call instructions, subroutine return instructions, branch or jump instructions, stack instructions, etc. Further, in response to detecting such entries, the instruction detector 134 may send a detection signal to the ROP detection logic 140. In some embodiments, the detection signal may indicate the type of the detected instruction.

[0018] In one or more embodiments, the return stack buffer 136 stores return pointers for use in performance optimization of the processor 110 (i.e., return prediction).
For example, when a call to a subroutine is executed in a program, the instruction control logic 130 may predict that a corresponding return from the subroutine will be subsequently performed. Accordingly, in some embodiments, a return pointer may be pushed onto the return stack buffer 136 in anticipation of executing the predicted return instruction.

[0019] In accordance with some embodiments, the branch prediction unit 137 includes functionality to predict the future direction of a branch executed by the processor 110. In some embodiments, the predictions provided by the branch prediction unit 137 may be used to fetch instructions so that they can be readied for execution and/or speculatively executed, thereby saving some of the time that would be required to fetch the instructions when the branch is taken. In one or more embodiments, the branch prediction unit 137 may include functionality to identify correct and incorrect branch predictions. For example, the branch prediction unit 137 may identify all incorrectly predicted branches, and/or may identify specific types of branch mispredictions, such as mispredicted indirect branches (i.e., branches where the branch target is held in a register or memory), mispredicted far branch instructions (i.e., branches that perform a control transfer that also involves changing the code segment), etc.

[0020] In accordance with some embodiments, the branch instruction log 138 may store address information for a given number of the most recent branch-related instructions. For example, in some embodiments, the branch instruction log 138 may store source and destination addresses for the last sixteen branch-related instructions (including call and return instructions) processed in the processor 110. Such functionality of the branch instruction log 138 may be used for debugging purposes, and may be referred to as Last Branch Record (LBR) functionality.

[0021] In some embodiments, the branch instruction log 138 may include functionality to selectively freeze its contents, meaning to stop updating the stored address information at a specific point in time, regardless of whether any new branch-related instructions are executed subsequently. In one or more embodiments, the freeze function may be triggered based on a signal received from the ROP detection logic 140.

[0022] As shown, in some embodiments, the ROP detection logic 140 includes an accumulator 142, a comparator 144, bias logic 146, and control logic 148. In one or more embodiments, the accumulator 142 includes functionality to generate a count based on control transfer events occurring during a defined window. Specifically, in some embodiments, the accumulator 142 may increment or decrement a counter in response to detecting instances of control transfer instructions (e.g., call or return instructions, branch or jump instructions, etc.) and/or mispredictions of control transfer instructions. For example, in some embodiments, the accumulator 142 may increment the counter in response to detecting a control transfer instruction associated with popping (i.e., removing) an instruction return pointer value from the call stack (e.g., a return instruction). Further, in some embodiments, the accumulator 142 may decrement the counter in response to detecting a control transfer instruction associated with pushing (i.e., storing) an instruction return pointer value on the call stack (e.g., a call instruction).
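For illustration only, the counting behavior of paragraph [0022] can be modeled by the following minimal C sketch. The event type and function name (xfer_event_t, rop_accumulate) are hypothetical and not part of the disclosure, which contemplates a hardware accumulator rather than software:

    /* Hypothetical software model of the accumulator 142: the count rises on
     * return-like events (a pop of a return pointer from the call stack) and
     * falls on call-like events (a push of a return pointer). */
    typedef enum { EVENT_CALL, EVENT_RETURN } xfer_event_t;

    static int count = 0;

    void rop_accumulate(xfer_event_t e)
    {
        if (e == EVENT_RETURN)
            count++;    /* return instruction detected */
        else
            count--;    /* call instruction detected */
    }

In normal operation the increments and decrements roughly cancel, so the count hovers near zero; a sustained positive excursion is the return-call imbalance discussed in paragraph [0023] below.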
[0023] In one or more embodiments, the accumulator 142 may detect instances of control transfer instructions using the instruction detector 134. For example, the accumulator 142 may increment the counter in response to receiving a detection signal indicating the detection of a return instruction. Further, in some embodiments, the accumulator 142 may decrement the counter in response to receiving a detection signal indicating the detection of a call instruction. Note that, in normal operation (i.e., when not under a ROP attack), a call instruction is typically followed some instructions later by a corresponding return instruction. Accordingly, in normal operation, counter increases are generally balanced by counter decreases, and thus the counter will remain within a specific range around the zero value. However, in the event of a ROP attack, the number of return instructions may substantially exceed the number of call instructions (referred to as a return-call imbalance). Therefore, under a ROP attack, the counter may increase beyond the specific range around the zero value. Thus, in this example, the counter value may be used as a ROP metric.

[0024] In one or more embodiments, the accumulator 142 may interact with the return stack buffer 136 to detect return predictions and/or mispredictions, and to increment/decrement the count based on such detections. For example, in some embodiments, the accumulator 142 may increment a counter in response to detecting a mispredicted return instruction. Further, in some embodiments, the accumulator 142 may decrement the counter in response to detecting a correctly predicted return instruction.

[0025] In one or more embodiments, the accumulator 142 may interact with the branch prediction unit 137 to detect branch predictions and mispredictions, and to increment/decrement the count based on such detections. For example, in some embodiments, the accumulator 142 may increment a counter in response to detecting any mispredicted branch instruction. Further, in some embodiments, the accumulator 142 may decrement the counter in response to detecting any correctly predicted branch instruction. In another example, in some embodiments, the accumulator 142 may increment a counter in response to detecting a particular type of mispredicted branch (e.g., a mispredicted indirect branch, a mispredicted far branch, etc.).

[0026] In one or more embodiments, the accumulator 142 may also detect instances of stack pivots (e.g., a return instruction associated with an instruction moving the stack pointer to a new memory location). Further, in response to detecting a stack pivot, the accumulator 142 may increment the counter by some amount (e.g., 1, 2, 5, etc.).

[0027] In one or more embodiments, the accumulator 142 may adjust a single counter based on multiple types of control transfer events (e.g., call instructions, return instructions, mispredictions, etc.). Further, in some embodiments, the accumulator 142 may include separate counters, each corresponding to a different type of control transfer event. Alternatively, in some embodiments, the ROP detection logic 140 includes multiple accumulators 142, each corresponding to a different type of control transfer event.
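As a purely illustrative sketch of the separate-counters arrangement of paragraph [0027], one counter per class of control transfer event might be combined into a single metric as follows; the event classes, weights, and names are assumptions, not part of the disclosure:

    /* Hypothetical per-event-type counters combined into one ROP metric.
     * The heavier weights for mispredictions and stack pivots echo the
     * "1, 2, 5" example of paragraph [0026], but are illustrative only. */
    enum { EV_RETURN, EV_CALL, EV_RET_MISPRED, EV_STACK_PIVOT, EV_NTYPES };

    static int counts[EV_NTYPES];

    void note_event(int type)
    {
        counts[type]++;    /* each detector feeds its own counter */
    }

    int rop_metric(void)
    {
        return (counts[EV_RETURN] - counts[EV_CALL])  /* return-call imbalance */
             + 2 * counts[EV_RET_MISPRED]
             + 5 * counts[EV_STACK_PIVOT];
    }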
[0028] In one or more embodiments, the accumulator 142 may be limited to a predefined window. For example, the accumulator 142 may reset the count after a specific number of instructions (e.g., 10, 100, 1000, etc.) are processed in the processor 110. In another example, the accumulator 142 may be a circular buffer storing a given number of instructions. In yet another example, the accumulator 142 may reset the count after a given time period (e.g., 1 millisecond, 1 second, 1 minute, etc.) has expired. In such embodiments, the counter may reflect a return-call imbalance occurring within the predefined window (e.g., ten more return instructions than call instructions processed during a window of 1000 instructions). In some embodiments, the accumulator 142 may include a saturating mode to prevent the count from exceeding maximum and/or minimum limits. For example, in some embodiments, the accumulator 142 may clip the count to a maximum count limit (e.g., a hardware buffer capacity) in the case of a count increment, and/or may clip the count to a minimum count limit in the case of a count decrement.

[0029] In accordance with some embodiments, the comparator 144 includes functionality to provide an attack notification when the count of the accumulator 142 exceeds a predefined threshold. In some embodiments, the predefined threshold may be set to a count level or percentage that indicates a high probability that the system 100 is under a ROP attack. For example, assume that the count of the accumulator 142 is to indicate a return-call imbalance during a window of one hundred instructions. Assume further that, before completing the window of one hundred instructions, the comparator 144 determines that the count of the accumulator 142 reaches eleven, and thereby exceeds a predefined threshold of ten (i.e., a count of positive ten). Therefore, in this example, the comparator 144 may trigger an attack notification (e.g., an interrupt, an exception, etc.) to indicate that the system 100 is probably under a ROP attack. Further, in some embodiments, the attack notification may be sent to the OS 122 and/or the protection software 124. In response, in one or more embodiments, the OS 122 and/or the protection software 124 may undertake actions to prevent and/or interrupt the ROP attack (e.g., system or process stoppage, memory quarantine, event logging, user notification, etc.). For example, the OS 122 and/or protection software 124 may determine that the ROP detection is false based on the system process state, and may thus allow the process to continue execution. In some embodiments, the OS 122 and/or protection software 124 may examine the state of the branch instruction log 138 as part of such a determination.
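The windowing and saturation behavior of paragraphs [0028]-[0029] can be modeled in C roughly as follows; the window length, count limits, threshold, and function names are illustrative assumptions:

    #include <stdbool.h>

    #define WINDOW_INSNS 1000   /* window measured in retired instructions */
    #define COUNT_MAX     127   /* e.g., a hardware buffer capacity */
    #define COUNT_MIN   (-128)
    #define THRESHOLD      10

    static int count;
    static int insns_in_window;

    /* Saturating update: clip at the maximum on increment and at the
     * minimum on decrement, per paragraph [0028]. */
    static void saturating_add(int delta)
    {
        int c = count + delta;
        if (c > COUNT_MAX) c = COUNT_MAX;
        if (c < COUNT_MIN) c = COUNT_MIN;
        count = c;
    }

    /* Called once per retired instruction with a signed delta (e.g., +1 for
     * a return, -1 for a call); returns true when the comparator would raise
     * an attack notification within the current window. */
    bool rop_tick(int delta)
    {
        saturating_add(delta);
        if (count > THRESHOLD)
            return true;
        if (++insns_in_window >= WINDOW_INSNS) {
            insns_in_window = 0;
            count = 0;    /* reset at the window boundary */
        }
        return false;
    }

In the worked example of paragraph [0029], a count of eleven against a threshold of ten would make rop_tick return true before the one-hundred-instruction window completes.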
[0030] Note that, while the functionality of the accumulator 142 and/or the comparator 144 is described above in terms of the example of a return-call imbalance, embodiments are not limited in this regard. In particular, in some embodiments, the accumulator 142 and/or the comparator 144 may use any other type of control transfer event, or any combination of types of control transfer events. For example, in some embodiments, the accumulator 142 may increment the count in response to instances of control transfer instructions, without decrementing the count. In another example, in some embodiments, the accumulator 142 may increment and/or decrement the count in response to detecting control transfer instructions that are associated with stack-related instructions (e.g., a jump instruction using a pointer associated with a pop instruction, a jump instruction using a pointer associated with a push instruction, etc.). In yet another example, the accumulator 142 may increment and/or decrement the count in response to detecting control transfer instructions that are associated with instructions to change the stack pointer value (e.g., a return instruction associated with a move instruction). In still another example, in some embodiments, the accumulator 142 may increment the count in response to return mispredictions and/or branch mispredictions. This functionality of the accumulator 142 and/or the comparator 144 is described further below with reference to FIGs. 3A-3E.

[0031] In one or more embodiments, the ROP detection logic 140 may include multiple sets of components (e.g., accumulator 142, comparator 144, etc.), with each set corresponding to a different type of control transfer event. For example, in some embodiments, the ROP detection logic 140 may include a first accumulator 142 to generate a first count based on detections of return instructions, and may include a second accumulator 142 to generate a second count based on branch mispredictions. In some embodiments, the counts generated by such sets may be combined to generate a single ROP metric. Alternatively, in some embodiments, the count generated by each set may correspond to a different ROP metric which may be evaluated to detect ROP attacks. In such embodiments, the control logic 148 may evaluate each ROP metric separately using a different threshold and/or window, and may trigger an attack notification if any single ROP metric exceeds its corresponding threshold. Optionally, in some embodiments, the control logic 148 may only trigger an attack notification if at least a predefined number of ROP metrics exceed their associated thresholds. Further, in such embodiments, each ROP metric may be weighted by a respective weight or importance.

[0032] In one or more embodiments, the bias logic 146 includes functionality to bias or adjust the accumulator 142 to reduce any effects due to natural imbalances (i.e., an imbalance or bias that is not caused by ROP attacks). Specifically, in some embodiments, the bias logic 146 may periodically divide or reduce the count of the accumulator 142 over a given period. For example, in one or more embodiments, the bias logic 146 may shift the accumulator 142 to the right by one bit in order to divide the count by two. In such a manner, the bias logic 146 may offset or reduce any natural imbalances that may be inherent in the system 100 (e.g., imbalances due to program exits, error states, etc.).

[0033] In one or more embodiments, the control logic 148 includes functionality to manage the ROP detection logic 140. In some embodiments, such functionality may include adjusting the sensitivity of the ROP detection logic 140 based on an estimated threat level and/or desired level of protection against ROP attacks (e.g., low, medium, high, etc.). For example, the control logic 148 may increase the threshold used by the comparator 144 in response to a lowered threat or protection level, thereby requiring a greater imbalance to occur in the accumulator 142 before triggering an attack notification (i.e., decreasing sensitivity to a ROP attack). Similarly, the control logic 148 may lower the threshold used by the comparator 144 in response to a heightened threat or protection level, thereby requiring a smaller imbalance to occur in the accumulator 142 before triggering an attack notification (i.e., increasing sensitivity to a ROP attack). In another example, the control logic 148 may increase or decrease the length of the window used by the accumulator 142 to adjust sensitivity to a ROP attack. In yet another example, the control logic 148 may adjust the biasing effect of the bias logic 146 to compensate for system changes which may affect any natural imbalances.
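The bias and sensitivity adjustments of paragraphs [0032]-[0033] might be modeled as follows; the protection levels, threshold values, and function names are illustrative assumptions:

    enum protection_level { PROT_LOW, PROT_MEDIUM, PROT_HIGH };

    static int count;
    static int threshold = 10;

    /* Bias logic 146: periodically halve the count to bleed off natural
     * imbalance; hardware might implement this as a one-bit right shift
     * of the accumulator, per paragraph [0032]. */
    void bias_decay(void)
    {
        count /= 2;
    }

    /* Control logic 148: a higher protection level lowers the threshold,
     * so a smaller imbalance suffices to trigger a notification. */
    void set_protection(enum protection_level lvl)
    {
        switch (lvl) {
        case PROT_LOW:    threshold = 20; break;  /* less sensitive */
        case PROT_MEDIUM: threshold = 10; break;
        case PROT_HIGH:   threshold = 5;  break;  /* more sensitive */
        }
    }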
[0034] Referring now to FIG. 2, shown is a sequence 200 for detecting a ROP attack, in accordance with one or more embodiments. In one or more embodiments, the sequence 200 may be part of the ROP detection logic 140 shown in FIG. 1A. The sequence 200 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

[0035] At step 210, control transfer events may be detected. For example, referring to FIG. 1B, the instruction detector 134 may detect instances of control transfer instructions (e.g., call instructions, return instructions, branch or jump instructions, etc.). In another example, the branch prediction unit 137 may detect mispredictions of control transfer instructions.

[0036] At step 220, a ROP metric may be determined based on the control transfer events (detected at step 210). In one or more embodiments, the ROP metric may be a count value based on instances of control transfer instructions and/or mispredictions. Further, in some embodiments, the ROP metric may be limited to a predefined window. For example, referring to FIG. 1B, the accumulator 142 may increment and/or decrement a counter in response to a detection of a particular type of instruction (e.g., a return instruction, a call instruction, a branch instruction, a far branch instruction, etc.). In another example, the accumulator 142 may increment and/or decrement the counter in response to associated pairs of instructions (e.g., a push or pop instruction associated with a jump instruction, a return instruction associated with an instruction moving the stack pointer, etc.). In some embodiments, the accumulator 142 may detect the aforementioned instructions based on a signal from the instruction detector 134. In yet another example, the accumulator 142 may increment and/or decrement the counter in response to a branch misprediction (e.g., determined by interacting with the branch prediction unit 137). In still another example, the accumulator 142 may increment and/or decrement the counter in response to a return misprediction (e.g., determined by interacting with the return stack buffer 136). The steps involved in performing step 220 are discussed in greater detail below with reference to FIGs. 3A-3E.

[0037] At step 230, a determination is made as to whether the metric exceeds a predefined threshold. In one or more embodiments, the predefined threshold may correspond to a level of the metric that indicates that a system is probably under ROP attack. For example, referring to FIG. 1B, the comparator 144 may determine whether the count of the accumulator 142 exceeds a predefined threshold.

[0038] If it is determined at step 230 that the metric does not exceed the predefined threshold, then the sequence 200 ends. However, if it is determined at step 230 that the metric exceeds the predefined threshold, then at step 240, protection software (e.g., an anti-malware application) may be notified that a ROP attack has been detected. For example, referring to FIG. 1A, the ROP detection logic 140 may send a ROP attack notification (e.g., an interrupt, an exception, etc.) to the protection software 124 and/or the operating system 122 to indicate that a possible ROP attack has been detected. In some embodiments, the ROP attack notification may trigger the protection software to take one or more actions to address the ROP attack (e.g., monitor suspected code, quarantine suspected code, notify an administrator or a management system, halt execution, shut down a system, etc.).

[0039] At step 250, in some embodiments, a branch instruction log may be frozen in response to determining (at step 230) that the metric exceeds the predefined threshold. For example, referring to FIG. 1B, the contents of the branch instruction log 138 may be frozen (i.e., no longer updated) in response to a signal from the control logic 148. In some embodiments, the contents of the branch instruction log 138 may then be provided to the protection software 124 (shown in FIG. 1A) for use in analyzing and/or addressing the possible ROP attack. After step 250, the sequence 200 ends.
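Taken together, the steps of sequence 200 can be summarized in the following C sketch; the function names and the rearm behavior after a notification are illustrative assumptions rather than part of the disclosure:

    #include <stdio.h>

    #define THRESHOLD 10

    static int metric;

    static void notify_protection_sw(void) { puts("ROP attack notification"); } /* step 240 */
    static void freeze_branch_log(void)    { puts("branch log frozen"); }       /* step 250 */

    /* Steps 210/220: each detected control transfer event adjusts the
     * metric by a signed delta (e.g., +1 for a return, -1 for a call). */
    void on_control_transfer_event(int delta)
    {
        metric += delta;
        if (metric > THRESHOLD) {     /* step 230 */
            notify_protection_sw();
            freeze_branch_log();
            metric = 0;               /* illustrative: rearm after notifying */
        }
    }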
[0040] Referring now to FIG. 3A, shown is a sequence 310 for determining a metric, in accordance with one or more embodiments. In particular, the sequence 310 illustrates an exemplary expansion of the steps involved in performing step 220 (shown in FIG. 2). In one or more embodiments, the sequence 310 may be part of the ROP detection logic 140 shown in FIG. 1A. The sequence 310 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

[0041] At step 312, a count may be incremented for each instance of a return instruction. For example, referring to FIG. 1B, the accumulator 142 may receive a signal from the instruction detector 134 indicating the detection of a return instruction. In response to receiving this signal, the accumulator 142 may increment a counter by one.

[0042] At step 314, the count may be decremented for each instance of a call instruction. For example, referring to FIG. 1B, the accumulator 142 may receive a signal from the instruction detector 134 indicating the detection of a call instruction. In response to receiving this signal, the accumulator 142 may decrement the counter by one. In one or more embodiments, the counter value may correspond to a return-call imbalance metric.

[0043] At step 316, the count may optionally be adjusted by a bias factor. For example, referring to FIG. 1B, the bias logic 146 may determine a need to reduce the counter of the accumulator 142 to compensate for natural return-call imbalances in the system 100. Accordingly, the bias logic 146 may shift the accumulator 142 to the right by one bit in order to divide the counter value by two, thereby reducing the effect of the natural imbalances. After step 316, the sequence 310 continues at step 230 (shown in FIG. 2).

[0044] Optionally, in some embodiments, step 314 may be omitted from the sequence 310. For example, referring to FIG. 1B, the accumulator 142 may increment the count in response to instances of return instructions, without decrementing the count in response to instances of call instructions. Accordingly, in such embodiments, the counter value of the accumulator 142 may not correspond to a return-call imbalance metric, but may instead correspond to a metric of the number of return instructions executed during the predefined window of the accumulator 142.
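A sketch of the increment-only variant of paragraph [0044], in which the metric is simply the number of return instructions observed in the current window; the names and window length are assumed for illustration:

    #define WINDOW_INSNS 1000

    static int returns_in_window;   /* the metric of paragraph [0044] */
    static int insns;

    void on_return(void)
    {
        returns_in_window++;        /* step 312; step 314 is omitted */
    }

    void on_retired_insn(void)
    {
        if (++insns >= WINDOW_INSNS) {
            insns = 0;
            returns_in_window = 0;  /* start a new window */
        }
    }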
[0045] Referring now to FIG. 3B, shown is a sequence 320 for determining a metric, in accordance with one or more embodiments. In particular, the sequence 320 illustrates an exemplary expansion of the steps involved in performing step 220 (shown in FIG. 2). In one or more embodiments, the sequence 320 may be part of the ROP detection logic 140 shown in FIG. 1A. The sequence 320 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

[0046] At step 322, a count may be incremented for each instance of a control transfer instruction associated with a stack pop instruction. For example, referring to FIG. 1B, the accumulator 142 may determine whether a jump instruction and a stack pop instruction are executed within a predefined distance (i.e., a number of instructions) of each other. In some embodiments, the association between the jump instruction and a stack pop instruction may be determined based on whether the target value from the stack pop instruction is used by the jump instruction. Further, in some embodiments, such a pair of associated instructions may be functionally similar to a return instruction, and may thus be indicative of a possible ROP attack. Accordingly, if a pair of associated instructions is detected, the accumulator 142 may increment a counter by one.

[0047] At step 324, the count may be decremented for each instance of a control transfer instruction associated with a stack push instruction. In some embodiments, such a pair of associated instructions may be functionally similar to a call instruction. For example, referring to FIG. 1B, the accumulator 142 may determine whether a jump instruction and a stack push instruction are executed within a predefined distance, and if so, may decrement the counter by one.

[0048] At step 326, the count may optionally be adjusted by a bias factor. For example, referring to FIG. 1B, the bias logic 146 may divide or otherwise reduce the counter value of the accumulator 142 in order to reduce the effect of the natural imbalances. After step 326, the sequence 320 continues at step 230 (shown in FIG. 2).
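The pairing test of steps 322 and 324 might be modeled as follows; the pairing distance, the types, and the function names are illustrative assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    #define PAIR_DISTANCE 4   /* max instructions between the pop and the jump */

    static uint64_t pop_target;    /* value produced by the most recent pop */
    static int      pop_age = -1;  /* instructions since that pop; -1 = none */

    void on_stack_pop(uint64_t value)
    {
        pop_target = value;
        pop_age = 0;
    }

    void on_other_insn(void)
    {
        if (pop_age >= 0)
            pop_age++;
    }

    /* Returns true when the jump consumes a recently popped value, i.e., the
     * pair behaves like a return instruction (step 322); the caller would
     * then increment the count. A push/jump pair (step 324) could be tracked
     * symmetrically to decrement it. */
    bool on_jump(uint64_t target)
    {
        bool paired = (pop_age >= 0 && pop_age <= PAIR_DISTANCE &&
                       target == pop_target);
        pop_age = -1;              /* consume the pairing */
        return paired;
    }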
[0049] Referring now to FIG. 3C, shown is a sequence 330 for determining a metric, in accordance with one or more embodiments. In particular, the sequence 330 illustrates an exemplary expansion of the steps involved in performing step 220 (shown in FIG. 2). In one or more embodiments, the sequence 330 may be part of the ROP detection logic 140 shown in FIG. 1A. The sequence 330 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

[0050] At step 332, a count may be incremented for each instance of a control transfer instruction associated with an increase in the stack pointer. For example, referring to FIG. 1B, the accumulator 142 may determine whether a jump instruction is associated with an increase in the stack pointer. In response to such a determination, the accumulator 142 may increment a counter by one.

[0051] At step 334, the count may be decremented for each instance of a control transfer instruction associated with a decrease in the stack pointer. For example, referring to FIG. 1B, the accumulator 142 may determine whether a jump instruction is associated with a decrease in the stack pointer, and if so, may decrement the counter by one.

[0052] At step 336, the count may optionally be adjusted by a bias factor. For example, referring to FIG. 1B, the bias logic 146 may divide or otherwise reduce the counter value of the accumulator 142 in order to reduce the effect of the natural imbalances. After step 336, the sequence 330 continues at step 230 (shown in FIG. 2).

[0053] Referring now to FIG. 3D, shown is a sequence 340 for determining a metric, in accordance with one or more embodiments. In particular, the sequence 340 illustrates an exemplary expansion of the steps involved in performing step 220 (shown in FIG. 2). In one or more embodiments, the sequence 340 may be part of the ROP detection logic 140 shown in FIG. 1A. The sequence 340 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

[0054] At step 342, a count may be incremented for each return misprediction. For example, referring to FIG. 1B, the accumulator 142 may interact with the return stack buffer 136 to detect a return misprediction. In response to such a determination, the accumulator 142 may increment a counter by one.

[0055] At step 344, the count may optionally be decremented for each correct return prediction. For example, referring to FIG. 1B, the accumulator 142 may interact with the return stack buffer 136 to detect a correct return prediction. In response to such a determination, the accumulator 142 may decrement a counter by one. Alternatively, in some embodiments, the count may not be decremented for each correct return prediction. For example, in some embodiments, the count may be reset to zero for each correct return prediction. In another example, in some embodiments, the count may not be altered in response to a correct return prediction.

[0056] At step 346, the count may optionally be adjusted by a bias factor. For example, referring to FIG. 1B, the bias logic 146 may divide or otherwise reduce the counter value of the accumulator 142 in order to reduce the effect of the natural imbalances. After step 346, the sequence 340 continues at step 230 (shown in FIG. 2).
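The misprediction-driven counting of sequence 340, including the three step-344 variants (decrement, reset, or ignore a correct prediction), might be modeled as follows; the policy enumeration and function names are illustrative assumptions:

    enum on_correct_policy { POLICY_DECREMENT, POLICY_RESET, POLICY_IGNORE };

    static int count;
    static enum on_correct_policy policy = POLICY_DECREMENT;

    /* Called for each resolved return prediction (cf. the return stack
     * buffer 136 of FIG. 1B). */
    void on_return_prediction(int mispredicted)
    {
        if (mispredicted) {
            count++;                      /* step 342 */
            return;
        }
        switch (policy) {                 /* step 344 and its variants */
        case POLICY_DECREMENT: count--;   break;
        case POLICY_RESET:     count = 0; break;
        case POLICY_IGNORE:               break;
        }
    }

The same shape applies to the branch-misprediction counting of sequence 350, with the branch prediction unit 137 supplying the events instead.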
[0057] Referring now to FIG. 3E, shown is a sequence 350 for determining a metric, in accordance with one or more embodiments. In particular, the sequence 350 illustrates an exemplary expansion of the steps involved in performing step 220 (shown in FIG. 2). In one or more embodiments, the sequence 350 may be part of the ROP detection logic 140 shown in FIG. 1A. The sequence 350 may be implemented in hardware, software, and/or firmware. In firmware and software embodiments it may be implemented by computer executed instructions stored in a non-transitory computer readable medium, such as an optical, semiconductor, or magnetic storage device.

[0058] At step 352, a count may be incremented for each branch misprediction. For example, referring to FIG. 1B, the accumulator 142 may interact with the branch prediction unit 137 to detect any branch misprediction. In response to such a determination, the accumulator 142 may increment a counter by one. Alternatively, in some embodiments, the accumulator 142 may increment a counter only in response to detecting a particular type of branch misprediction (e.g., a mispredicted indirect branch).

[0059] At step 354, the count may optionally be decremented for each correct branch prediction. For example, referring to FIG. 1B, the accumulator 142 may interact with the branch prediction unit 137 to detect a correct branch prediction. In response to such a determination, the accumulator 142 may decrement a counter by one. Alternatively, in some embodiments, the count may not be decremented for each correct branch prediction. For example, in some embodiments, the count may be reset to zero for each correct branch prediction. In another example, in some embodiments, the count may not be altered in response to a correct branch prediction.

[0060] At step 356, the count may optionally be adjusted by a bias factor. For example, referring to FIG. 1B, the bias logic 146 may divide or otherwise reduce the counter value of the accumulator 142 in order to reduce the effect of the natural imbalances. After step 356, the sequence 350 continues at step 230 (shown in FIG. 2).

[0061] Note that the examples shown in FIGs. 1A-1B, 2, and 3A-3E are provided for the sake of illustration, and are not intended to limit any embodiments. For instance, while the above examples describe incrementing or decrementing the accumulator 142 by one, embodiments are not limited in this regard. For example, the accumulator 142 may be incremented by a first amount (e.g., two, four, five, etc.) in response to a first control transfer event, may be decremented by a second amount in response to a second control transfer event, etc. Further, it is contemplated that the ROP detection logic 140 may use any type of control transfer events, and/or any combination thereof.

[0062] Note also that, while embodiments may be shown in simplified form for the sake of clarity, embodiments may include any number and/or arrangement of processors, cores, and/or additional components (e.g., buses, storage media, connectors, power components, buffers, interfaces, etc.). In particular, it is contemplated that some embodiments may include any number of components (e.g., additional accumulators 142 and/or comparators 144) in addition to those shown, and that different arrangements of the components shown may occur in certain implementations. Further, it is contemplated that specifics in the examples shown in FIGs. 1A-1B, 2, and 3A-3E may be used anywhere in one or more embodiments.

[0063] Referring now to FIG. 4, shown is a block diagram of a processor in accordance with an embodiment of the present invention. As shown in FIG. 4, the processor 400 may be a multicore processor including a first die 405 having a plurality of cores 410a-410n of a core domain. The various cores 410a-410n may be coupled via an interconnect 415 to a system agent or uncore domain 420 that includes various components. As seen, the uncore domain 420 may include a shared cache 430 which may be a last level cache. In addition, the uncore may include an integrated memory controller 440 and various interfaces 450.

[0064] Although not shown for ease of illustration in FIG. 4, in some embodiments, each of the cores 410a-410n may include the ROP detection logic 140 shown in FIGs. 1A-1B.
Alternatively, in some embodiments, some or all of the ROP detection logic 140 may be included in the uncore domain 420, and may thus be shared across the cores 410a-410n.

[0065] With further reference to FIG. 4, the processor 400 may communicate with a system memory 445, e.g., via a memory bus. In addition, by interfaces 450, connection can be made to various off-package components such as peripheral devices, mass storage and so forth. While shown with this particular implementation in the embodiment of FIG. 4, the scope of the present invention is not limited in this regard.

[0066] Referring now to FIG. 5, shown is a block diagram of a multi-domain processor in accordance with another embodiment of the present invention. As shown in the embodiment of FIG. 5, processor 500 includes multiple domains. Specifically, a core domain 510 can include a plurality of cores 510a-510n, a graphics domain 520 can include one or more graphics engines, and a system agent domain 550 may further be present. Each of the cores 510a-510n can include the ROP detection logic 140 described above with reference to FIGs. 1A-1B. Note that while only shown with three domains, understand the scope of the present invention is not limited in this regard and additional domains can be present in other embodiments. For example, multiple core domains may be present, each including at least one core.

[0067] In general, each core 510 may further include low level caches in addition to various execution units and additional processing elements. In turn, the various cores may be coupled to each other and to a shared cache memory formed of a plurality of units of a last level cache (LLC) 540a-540n. In various embodiments, the LLC 540 may be shared amongst the cores and the graphics engine, as well as various media processing circuitry. As seen, a ring interconnect 530 thus couples the cores together, and provides interconnection between the cores, graphics domain 520 and system agent circuitry 550.

[0068] In the embodiment of FIG. 5, system agent domain 550 may include display controller 552 which may provide control of and an interface to an associated display. As further seen, system agent domain 550 may also include a power control unit 555 to allocate power to the CPU and non-CPU domains.

[0069] As further seen in FIG. 5, processor 500 can further include an integrated memory controller (IMC) 570 that can provide for an interface to a system memory, such as a dynamic random access memory (DRAM). Multiple interfaces 580a-580n may be present to enable interconnection between the processor and other circuitry. For example, in one embodiment at least one direct media interface (DMI) interface may be provided as well as one or more Peripheral Component Interconnect Express (PCI Express™ (PCIe™)) interfaces. Still further, to provide for communications between other agents such as additional processors or other circuitry, one or more interfaces in accordance with an Intel® Quick Path Interconnect (QPI) protocol may also be provided. As further seen, a peripheral controller hub (PCH) 590 may also be present within the processor, and can be implemented on a separate die, in some embodiments. Although shown at this high level in the embodiment of FIG. 5, understand the scope of the present invention is not limited in this regard.

[0070] Referring to FIG. 6, an embodiment of a processor including multiple cores is illustrated.
Processor 1100 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a coprocessor, a system on a chip (SOC), or other device to execute code. Processor 1100, in one embodiment, includes at least two cores, cores 1101 and 1102, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1100 may include any number of processing elements that may be symmetric or asymmetric.

[0071] In one embodiment, a processing element refers to hardware or logic to support a software thread. Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.

[0072] A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.

[0073] Physical processor 1100, as illustrated in FIG. 6, includes two cores, cores 1101 and 1102. Here, cores 1101 and 1102 are considered symmetric cores, i.e., cores with the same configurations, functional units, and/or logic. In another embodiment, core 1101 includes an out-of-order processor core, while core 1102 includes an in-order processor core. However, cores 1101 and 1102 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native instruction set architecture (ISA), a core adapted to execute a translated ISA, a co-designed core, or other known core. Yet to further the discussion, the functional units illustrated in core 1101 are described in further detail below, as the units in core 1102 operate in a similar manner.

[0074] As shown, core 1101 includes two hardware threads 1101a and 1101b, which may also be referred to as hardware thread slots 1101a and 1101b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 1100 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently.
As alluded to above, a first thread is associated with architecture state registers 1101a, a second thread is associated with architecture state registers 1101b, a third thread may be associated with architecture state registers 1102a, and a fourth thread may be associated with architecture state registers 1102b. Here, each of the architecture state registers (1101a, 1101b, 1102a, and 1102b) may be referred to as processing elements, thread slots, or thread units, as described above.

[0075] As illustrated, architecture state registers 1101a are replicated in architecture state registers 1101b, so individual architecture states/contexts are capable of being stored for logical processor 1101a and logical processor 1101b. In core 1101, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1130, may also be replicated for threads 1101a and 1101b. Some resources, such as re-order buffers in reorder/retirement unit 1135, I-TLB 1120, load/store buffers, and queues may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 1115, execution unit(s) 1140, and portions of out-of-order unit 1135 are potentially fully shared.

[0076] Processor 1100 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 6, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1101 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1120 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 1120 to store address translation entries for instructions.

[0077] Core 1101 further includes decode module 1125 coupled to fetch unit 1120 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 1101a, 1101b, respectively. Usually core 1101 is associated with a first ISA, which defines/specifies instructions executable on processor 1100. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 1125 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. As a result of the recognition by decoders 1125, the architecture or core 1101 takes specific, predefined actions to perform tasks associated with the appropriate instruction (e.g., the actions shown in FIGs. 2-3E). It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions; some of which may be new or old instructions.

[0078] In one example, allocator and renamer block 1130 includes an allocator to reserve resources, such as register files to store instruction processing results.
However, threads 1101a and 1101b are potentially capable of out-of-order execution, where allocator and renamer block 1130 also reserves other resources, such as reorder buffers to track instruction results. Unit 1130 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1100. Reorder/retirement unit 1135 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order. [0079] Scheduler and execution unit(s) block 1140, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units. [0080] Lower level data cache and data translation buffer (D-TLB) 1150 are coupled to execution unit(s) 1140. The data cache is to store recently used/operated-on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages. [0081] Here, cores 1101 and 1102 share access to higher-level or further-out cache 1110, which is to cache recently fetched elements. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache 1110 is a last-level data cache (the last cache in the memory hierarchy on processor 1100), such as a second or third level data cache. However, higher-level cache 1110 is not so limited, as it may be associated with or include an instruction cache. A trace cache, a type of instruction cache, may instead be coupled after decoder 1125 to store recently decoded traces. [0082] In the depicted configuration, processor 1100 also includes bus interface module 1105 and a power controller 1160, which may perform power sharing control in accordance with an embodiment of the present invention. Historically, controller 1170 has been included in a computing system external to processor 1100. In this scenario, bus interface 1105 is to communicate with devices external to processor 1100, such as system memory 1175, a chipset (often including a memory controller hub to connect to memory 1175 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus 1105 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g., cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus. [0083] Memory 1175 may be dedicated to processor 1100 or shared with other devices in a system. Common examples of types of memory 1175 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices. 
Note that device 1180 may include a graphics accelerator, processor or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device. [0084] Note, however, that in the depicted embodiment, the controller 1170 is illustrated as part of processor 1100. Recently, as more logic and devices are being integrated on a single die, such as an SOC, each of these devices may be incorporated on processor 1100. For example, in one embodiment, memory controller hub 1170 is on the same package and/or die with processor 1100. Here, a portion of the core (an on-core portion) includes one or more controller(s) 1170 for interfacing with other devices such as memory 1175 or a graphics device 1180. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, bus interface 1105 includes a ring interconnect with a memory controller for interfacing with memory 1175 and a graphics controller for interfacing with graphics processor 1180. Yet, in the SOC environment, even more devices, such as the network interface, coprocessors, memory 1175, graphics processor 1180, and any other known computer devices/interface may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption. [0085] Embodiments may be implemented in many different system types. Referring now to FIG. 7, shown is a block diagram of a system in accordance with an embodiment of the present invention. As shown in FIG. 7, multiprocessor system 600 is a point-to-point interconnect system, and includes a first processor 670 and a second processor 680 coupled via a point-to-point interconnect 650. As shown in FIG. 7, each of processors 670 and 680 may be multicore processors, including first and second processor cores (i.e., processor cores 674a and 674b and processor cores 684a and 684b), although potentially many more cores may be present in the processors. Each of the processors can include the ROP detection logic 140 described above with reference to FIGs. 1A-1B. [0086] Still referring to FIG. 7, first processor 670 further includes a memory controller hub (MCH) 672 and point-to-point (P-P) interfaces 676 and 678. Similarly, second processor 680 includes a MCH 682 and P-P interfaces 686 and 688. As shown in FIG. 7, MCHs 672 and 682 couple the processors to respective memories, namely a memory 632 and a memory 634, which may be portions of system memory (e.g., DRAM) locally attached to the respective processors. First processor 670 and second processor 680 may be coupled to a chipset 690 via P-P interconnects 652 and 654, respectively. As shown in FIG. 7, chipset 690 includes P-P interfaces 694 and 698. [0087] Furthermore, chipset 690 includes an interface 692 to couple chipset 690 with a high performance graphics engine 638, by a P-P interconnect 639. In turn, chipset 690 may be coupled to a first bus 616 via an interface 696. As shown in FIG. 7, various input/output (I/O) devices 614 may be coupled to first bus 616, along with a bus bridge 618 which couples first bus 616 to a second bus 620. Various devices may be coupled to second bus 620 including, for example, a keyboard/mouse 622, communication devices 626 and a data storage unit 628 such as a disk drive or other mass storage device which may include code 630, in one embodiment. 
Further, an audio I/O 624 may be coupled to second bus 620. Embodiments can be incorporated into other types of systems including mobile devices such as a smart cellular telephone, tablet computer, netbook, Ultrabook™, or so forth. [0088] It should be understood that a processor core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology). [0089] Any processor described herein may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which are available from Intel Corporation, of Santa Clara, Calif. Alternatively, the processor may be from another company, such as ARM Holdings, Ltd., MIPS, etc. The processor may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The processor may be implemented on one or more chips. The processor may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. [0090] It is contemplated that the processors described herein are not limited to any system or device. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, microcontrollers, cell phones, portable media players, handheld devices, and various other electronic devices are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable. [0091] Embodiments may be implemented in code and may be stored on a non-transitory storage medium having stored thereon instructions which can be used to program a system to perform the instructions. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, solid state drives (SSDs), compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. [0092] The following clauses and/or examples pertain to further embodiments. One example embodiment may be a processor including at least one execution unit, and a Return Oriented Programming (ROP) detection logic. 
The ROP detection logic may be to determine a ROP metric based on a plurality of control transfer events, determine whether the ROP metric exceeds a threshold, and in response to a determination that the ROP metric exceeds the threshold, provide a ROP attack notification. The ROP detection logic may be to determine the ROP metric based on a count, where the count is based on the plurality of control transfer events. The ROP detection logic may be to increment the count based on an instance of a subroutine return instruction. The ROP detection logic may be to decrement the count based on an instance of a subroutine call instruction. The ROP detection logic may be to increment the count based on a return misprediction. The ROP detection logic may be to increment the count based on an instance of a control transfer instruction associated with a stack pop instruction. The ROP detection logic may be to increment the count based on an instance of a control transfer instruction associated with an increase in a stack pointer. The ROP attack notification is to trigger a protection application to take one or more actions to address the ROP attack. The ROP detection logic may be to freeze a branch instruction log in response to determining that the ROP metric exceeds the threshold. [0093] Another example embodiment may be a processor including: an instruction buffer; a branch prediction unit; and a Return Oriented Programming (ROP) detection unit. The ROP detection unit may include an accumulator to generate a count based on one or more control transfer events, and a comparator to provide a notification of a ROP attack when the count exceeds a threshold during a window. The processor may also include an instruction detector to detect the execution of a control transfer instruction, where the one or more control transfer events comprise the control transfer instruction. The processor may also include a return stack buffer to detect a return misprediction, wherein the one or more control transfer events comprise the return misprediction. The one or more control transfer events may include a branch misprediction, where the branch misprediction is detected by the branch prediction unit. The one or more control transfer events may include pairs of associated instructions. The ROP detection unit may also include bias logic to reduce at least one bias effect due to a natural imbalance. The ROP detection unit may also include control logic to adjust the threshold based on a desired level of protection against ROP attacks. [0094] Yet another example embodiment may be a system including a processor including Return Oriented Programming (ROP) detection logic, where the ROP detection logic is to determine whether a ROP metric exceeds a threshold during a window, and where the ROP metric is based at least on one or more control transfer events. The system may also include a random access memory coupled to the processor, where the random access memory includes an anti-malware application. The ROP detection logic may be to, in response to a determination that the ROP metric exceeds the threshold during the window, provide a ROP attack notification to the anti-malware application. The anti-malware application may be to, in response to the ROP attack notification, initiate one or more actions to halt the ROP attack. The window may include a number of instructions. 
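As a rough illustration of the counting heuristic described above, the following C sketch models the detection logic in software. It is only a behavioral approximation of what the specification describes as hardware; the names (rop_detector, rop_observe, rop_tick), the clamp-at-zero bias rule, and the window-reset policy are hypothetical choices rather than details taken from the embodiments.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical control transfer events visible to the detector. */
    typedef enum { EV_CALL, EV_RETURN, EV_RETURN_MISPREDICT } rop_event;

    typedef struct {
        int32_t  count;      /* ROP metric: call/return imbalance      */
        int32_t  threshold;  /* notification threshold                 */
        uint64_t window;     /* window length, in retired instructions */
        uint64_t retired;    /* instructions retired in this window    */
    } rop_detector;

    /* Update the metric for one event; return true when the metric
       exceeds the threshold, i.e., when a ROP attack notification
       should be raised (and, e.g., a branch instruction log frozen). */
    static bool rop_observe(rop_detector *d, rop_event ev)
    {
        if (ev == EV_RETURN || ev == EV_RETURN_MISPREDICT)
            d->count++;   /* returns and return mispredictions raise the metric */
        else if (ev == EV_CALL)
            d->count--;   /* matched calls offset legitimate returns */
        if (d->count < 0)
            d->count = 0; /* crude bias logic: clamp the natural imbalance */
        return d->count > d->threshold;
    }

    /* Called once per retired instruction: restart the metric at each
       window boundary so the count reflects recent behavior only. */
    static void rop_tick(rop_detector *d)
    {
        if (++d->retired >= d->window) {
            d->retired = 0;
            d->count = 0;
        }
    }

During ordinary execution, calls and returns arrive in matched pairs and the clamped count hovers near zero; a ROP gadget chain executes many returns with few or no calls, so the count climbs toward the threshold within a single window.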
[0095] Still another example embodiment may be a method, including: detecting, by instruction control of a hardware processor, at least one control transfer event; generating, by ROP detection logic of the hardware processor, a ROP metric based on the at least one control transfer event; and upon determining that the ROP metric exceeds a threshold during a window, notifying a protection application of a ROP attack. Generating the ROP metric may include incrementing a counter upon detecting a control transfer instruction. Generating the ROP metric may include incrementing a counter upon detecting a misprediction. The method may also include, upon determining that the ROP metric exceeds the threshold during the window, freezing the contents of a branch instruction log. [0096] References throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present invention. Thus, appearances of the phrase "one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be instituted in suitable forms other than the particular embodiment illustrated, and all such forms may be encompassed within the claims of the present application. [0097] While the present invention has been described with respect to a limited number of embodiments for the sake of illustration, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
The invention encompasses a method of removing at least some of a material from a semiconductor substrate. A feed gas is fed through an ozone generator to generate ozone. The feed gas comprises at least 99.999% O2 (by volume). The ozone, or a fragment of the ozone, is contacted with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate. The invention also encompasses another method of removing at least some of a material from a semiconductor substrate. A mixture of ozone and organic solvent vapors is formed in a reaction chamber. At least some of the ozone and solvent vapors are contacted with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate.
We claim: 1. A method of removing at least some of a material from a semiconductor substrate, comprising:providing a feed gas comprising at least 99.999% O2 (by volume); in an absence of additionally added gases, feeding the feed gas through an ozone generator to generate ozone from the feed gas; and contacting the ozone or a fragment of the ozone with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate. 2. The method of claim 1 further comprising irradiating at least some of the ozone with ultraviolet light prior to the contacting.3. The method of claim 1 further comprising irradiating at least some of the ozone with ultraviolet light proximate the material.4. The method of claim 1 wherein the material on the semiconductor substrate is photoresist.5. The method of claim 1 further comprising mixing the ozone with water vapor prior to the contacting.6. The method of claim 1 further comprising mixing the ozone with an organic solvent vapor prior to the contacting.7. A method of removing at least some of a material from a semiconductor substrate, comprising:providing a feed gas comprising 99.999% O2 and less than or equal to 0.001% N2 (by volume); in an absence of additionally added gases, feeding the feed gas through an ozone generator to generate ozone from the feed gas; forming a mixture of ozone and organic solvent vapors in a reaction chamber; and contacting at least some of the ozone and solvent vapors with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate. 8. The method of claim 7 wherein the material on the semiconductor substrate is photoresist.9. The method of claim 7 wherein the material on the semiconductor substrate is photoresist; wherein the semiconductor substrate comprises Al2O3; and further comprising exposing at least some of the Al2O3 to the ozone during the contacting.10. The method of claim 7 wherein the material on the semiconductor substrate is photoresist; wherein the semiconductor substrate comprises platinum; and further comprising exposing at least some of the platinum to the ozone during the contacting.11. The method of claim 7 further comprising providing a reservoir of volatile organic solvent within the reaction chamber and forming the solvent vapors from the volatile organic solvent.12. The method of claim 11 wherein the volatile organic solvent is a liquid.13. The method of claim 11 wherein the volatile organic solvent comprises acetone.14. The method of claim 11 wherein the volatile organic solvent consists essentially of acetone.15. The method of claim 11 wherein the volatile organic solvent comprises cyclohexanone.16. The method of claim 11 wherein the volatile organic solvent consists essentially of cyclohexanone.17. The method of claim 11 wherein the volatile organic solvent comprises a mixture of cyclohexanone and PGMEA.18. The method of claim 11 wherein the volatile organic solvent comprises propylene glycol.19. The method of claim 7 further comprising providing a reservoir of volatile organic solvent within the reaction chamber and heating the volatile organic solvent to form the solvent vapors from the volatile organic solvent.20. 
A method of removing at least some of a material from a semiconductor substrate, comprising:providing a feed gas comprising 99.999% O2 and less than or equal to 0.001% N2 (by volume); in an absence of additionally added gases, feeding the feed gas through an ozone generator to generate ozone from the feed gas; forming a mixture of ozone and organic solvent vapors in a reaction chamber; irradiating at least some of the ozone with ultraviolet light to form ozone fragments from the ozone; and contacting at least some of the ozone fragments and solvent vapors with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate. 21. The method of claim 20 wherein the material on the semiconductor substrate is photoresist.22. The method of claim 20 further comprising providing a reservoir of volatile organic solvent within the reaction chamber and forming the solvent vapors from the volatile organic solvent.23. The method of claim 22 wherein the volatile organic solvent is a liquid.24. The method of claim 22 wherein the volatile organic solvent comprises acetone.25. The method of claim 22 wherein the volatile organic solvent comprises cyclohexanone.26. The method of claim 22 wherein the volatile organic solvent comprises a mixture of cyclohexanone and PGMEA.27. The method of claim 22 wherein the volatile organic solvent comprises propylene glycol.28. The method of claim 20 further comprising providing a reservoir of volatile organic solvent within the reaction chamber and heating the volatile organic solvent to form the solvent vapors from the volatile organic solvent.29. The method of claim 20 wherein the material on the semiconductor substrate is photoresist; wherein the semiconductor substrate comprises Al2O3; and further comprising exposing at least some of the Al2O3 to the ozone fragments during the contacting.30. The method of claim 20 wherein the material on the semiconductor substrate is photoresist; wherein the semiconductor substrate comprises platinum; and further comprising exposing at least some of the platinum to the ozone fragments during the contacting.
TECHNICAL FIELD
The invention pertains to methods of forming and utilizing ozone to remove at least some of a material from a semiconductor substrate. In particular applications, the invention pertains to methods of utilizing organic material vapors in combination with ozone to remove materials from semiconductor substrates.
BACKGROUND OF THE INVENTION
It is common to utilize ozone for removing materials from over semiconductor substrates during semiconductor device fabrication. For instance, ozone can be utilized for removing photoresist and other organic materials. The ozone is typically generated proximate to, or within, a reaction chamber. The semiconductor substrate is provided within the reaction chamber, and the ozone is contacted with the material which is to be removed. Ozone can be utilized for removing organic materials, such as, for example, photoresist, in that the ozone can oxidize the organic material and thereby convert the organic material to a form which is more readily removed from over a semiconductor substrate than was the organic material prior to oxidation. A method of forming ozone is to feed a diatomic oxygen (O2) containing feed gas into an ozone generator. The feed gas is generally about 99.9% O2 (by volume), with the remaining 0.1% of the feed gas comprising mostly nitrogen (N2). Occasionally, additional nitrogen may be spiked into the feed gas to raise a concentration of nitrogen up to about 5%. A reason for utilizing the relatively low purity oxygen as a feed gas for generating ozone is that it can be cheaper than higher purity oxygen. Another reason is that there can be a reduced risk of flame or explosion in utilizing a lower purity oxygen, relative to that which would exist in utilizing a higher purity oxygen. The invention encompasses new methods of forming and utilizing ozone in removing materials from over semiconductor substrates.
SUMMARY OF THE INVENTION
The invention encompasses a method of removing at least some of a material from a semiconductor substrate. A feed gas is fed through an ozone generator to generate ozone. The feed gas comprises at least 99.999% O2 (by volume). The ozone, or a fragment of the ozone, is contacted with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate. In another aspect, the invention encompasses another method of removing at least some of a material from a semiconductor substrate. A mixture of ozone and organic solvent vapors is formed in a reaction chamber. At least some of the ozone and solvent vapors are contacted with a material on a semiconductor substrate to remove at least some of the material from the semiconductor substrate.
BRIEF DESCRIPTION OF THE DRAWINGS
Preferred embodiments of the invention are described below with reference to the following accompanying drawings. FIG. 1 is a diagrammatic, cross-sectional view of a reaction chamber utilized in accordance with a method of the present invention. FIG. 2 is a diagrammatic, cross-sectional, fragmentary view of a semiconductor wafer fragment at a preliminary processing step of a method of the present invention. FIG. 3 is a view of the FIG. 2 wafer fragment shown at a processing step subsequent to that of FIG. 2.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
This disclosure of the invention is submitted in furtherance of the constitutional purposes of the U.S. 
Patent Laws "to promote the progress of science and useful arts" (Article 1, Section 8).Methodology encompassed by the present invention is described with reference to FIGS. 1-3. Referring initially to FIG. 1, an apparatus 10 is diagrammatically illustrated. Apparatus 10 comprises a reaction chamber 12 having a semiconductor wafer support 14 therein. A semiconductor wafer 16 is shown on support 14.An ozone generator 18 is shown mounted relative to chamber 12 such that ozone 20 formed within generator 18 is expelled into chamber 12. An exemplary ozone generator is an ASTEX(TM) 8200, which is manufactured by Applied Science and Technology, of 3500 Cabot Rd, Woburn, Mass. It is to be understood that ozone generator 18 can be mounted outside of chamber 12, and ozone flowed from generator 18 into chamber 12. It is also to be understood that ozone generator 18 could be mounted such that it is fully enclosed within chamber 12. Further, it is to be understood that ozone generator 18 can be mounted above wafer 16, as shown, or can be mounted in other orientations relative to wafer 16.A feed gas source 22 is provided externally of chamber 12, and a feed gas 24 is flowed from source 22 to ozone generator 18. Feed gas 24 comprises O2, and in contrast to the prior art preferably comprises at least 99.999% O2 (by volume). Feed gas 24 is flowed into ozone generator 18 t o form ozone 20. An advantage of utilizing a feed gas with a higher purity of oxygen than the prior art is that such reduces a concentration of nitrogen within the feed gas. In accordance with one aspect of the invention, it is recognized that nitrogen can be converted to various nitrous oxides (NOx) upon being passed with oxygen through an ozone generator. The nitrous oxides can be corrosive and otherwise damaging to integrated circuitry exposed to the nitrous oxides. Further, the nitrous oxides can form various acids (such as, for example, HNO3) which can be corrosive to various integrated circuitry materials, such as, for example, aluminum oxide (Al2O3). Accordingly, one aspect of the invention encompasses utilization of a higher purity oxygen in an ozone-generating feed gas than that which is utilized in the prior art. A related aspect of the invention is that such utilizes an ozone-generating feed gas having less nitrogen than prior art feed gases. Preferably, the ozone-generating feed gas 24 comprises less than or equal to 0.001% N2 (by volume).Semiconductor substrate 16 comprises an upper layer 17. Semiconductor substrate 16 can comprise, for example, monocrystalline silicon lightly-doped with a background p-type dopant. To aid in interpretation of the claims that follow, the terms "semiconductive substrate" and "semiconductor substrate" are defined to mean any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). The term "substrate" refers to any supporting structure, including, but not limited to, the semiconductive substrates described above. Upper layer 17 can comprise, for example, aluminum oxide (Al2O3), platinum, or other materials associated with fabrication of integrated circuitry.A layer 19 is over upper layer 17 of semiconductor substrate 16, and comprises a material which is to be removed. 
Layer 19 can comprise, for example, photoresist, such as, for example, a so-called I-line photoresist (typically a novolac resin), or a deep ultraviolet resist. Alternatively, layer 19 can comprise other organic materials.Ozone 20 is utilized to remove at least some of layer 19 from over semiconductor substrate 16. In other words, the ozone is utilized to remove a material defined by layer 19 from over the upper layer 17 of semiconductor substrate 16.In one aspect, ozone 20 flows to material 19 to react with the material and form a product which can be removed from over semiconductor substrate 16. For instance, ozone 20 can oxidize an organic material 19 to form a relatively volatile material which can be swept from over layer 17 by flow of gases through reaction chamber 12.In another aspect, ozone 20 can be broken into reactive fragments which contact material 19 and react with the material to form a product which can be removed from over layer 17. In the shown embodiment, an ultraviolet light source 30 is provided proximate reaction chamber 12 and adjacent a window 32 which extends through a wall of reaction chamber 12. Ultraviolet light generated by source 30 passes through window 32 into chamber 12. The ultraviolet light can then impact ozone 20 within chamber 12 to cause the ozone to break into reactive fragments. Such reactive fragments can comprise, for example, atomic oxygen. The fragments formed from the ozone can also comprise O2. In embodiments in which ozone 20 is exposed to ultraviolet light prior to contact of the ozone or fragments thereof with material 19, such exposure preferably occurs proximate layer 19. In such context, the term "proximate" means that the exposure occurs within one foot of layer 19. Such can alleviate losses of the reactive species formed by the exposure prior to interaction of the reactive species with layer 19. In particular aspects of the invention, the ultraviolet light can be shined onto a surface of layer 19 while the surface is exposed to ozone.The apparatus 10 of FIG. 1 further comprises a reservoir 50 comprising a volatile material 52. Reservoir 50 is on a reservoir holder 54 which can comprise a heater. In operation, material 52 is volatilized from reservoir 50 to form vapor within reaction chamber 12 which can enhance removal of material 19 by ozone 20. Volatile material 52 can comprise, for example, water, and accordingly the vapor formed within the reaction chamber 12 will be water vapor. Alternatively, volatile material 52 could comprise an organic solvent such as, for example, one or more of cyclohexanone, acetone, or propylene glycol methylether acetate (PGMEA). In particular embodiments, the solvent can consist essentially of, or consist of, acetone. In other embodiments, the solvent can consist essentially of, or consist of, cyclohexanone. In yet other embodiments, the solvent can consist essentially of, or consist of, a mixture of cyclohexanone and PGMEA. A particular solvent can comprise a mixture of 60% cyclohexanone and 40% PGMEA. An alternative solvent is propylene glycol. Although the solvents described above would be liquid materials, it is to be understood that reservoir 50 could also comprise a volatile solid material.If the material 52 is volatile at a temperature within reactor 12, vapors will be formed from material 52 without additional heating of the material. 
Alternatively, if material 52 is not volatile at the temperature of reaction chamber 12, or if it is desired to enhance volatilization of material 52, the material can be heated by, for example, a heater within support 54.If material 52 comprises a volatile organic material, then the vapors formed from material 52 will be volatile organic solvent vapors. It is to be understood that within the context of this document the term "solvent vapor" refers to a vapor formed from a volatile organic material, and not to any volatile organic materials formed by degradation of layer 19 within chamber 12. If volatile solvent vapors are utilized in conjunction with the very pure oxygen described above, it is preferred that flames and sparks be kept out of the reaction chamber to alleviate a risk of fire or explosion.Although reservoir 50 is shown provided within chamber 12, it is to be understood that the invention encompasses other embodiments wherein reservoir 50 is provided outside of chamber 12, and wherein solvent vapors are flowed into chamber 12 from the external reservoir. Also, the invention encompasses embodiments wherein vapors are provided in a gas source external of chamber 12 (such as, for example, a tank of gas), and piped into chamber 12.Organic solvent vapors are found to assist in removal of organic materials 19 (such as, for example, photoresist) from over semiconductor substrates. A possible mechanism is that the vapors may "wet" or otherwise improve susceptibility of an organic material 19 to ozone or reactive fragments formed from ozone. Such mechanism is provided to assist in understanding the present invention, and is not to limit the claims except to the extent that the mechanism is expressly recited within a claim.FIGS. 2 and 3 illustrate enlarged views of the semiconductor substrate 16 at processing steps of a method of the present invention. FIG. 2 illustrates semiconductor substrate 16 having material 19 thereover. Specifically, material 19 is over a layer 17. As discussed above, layer 19 can comprise an organic material such as, for example, photoresist. Layer 17 can comprise an inorganic material such as, for example, aluminum oxide or platinum.Referring to FIG. 3, semiconductor substrate 16 is illustrated after material 19 has been removed from over layer 17. Such removal can be accomplished by the processing described above with reference to FIG. 1, wherein ozone (or a reactive fragment generated from ozone) is contacted with material 19 to remove material 19. It is noted that some of layer 17 can be exposed to the ozone, or ozone fragments, during removal of material 19. In accordance with an embodiment of the present invention, the ozone preferably will be formed from an oxygen feed material that comprised less than 0.001% nitrogen. Accordingly, any concentration of nitrous oxides or reactive products formed from nitrous oxides will be lower in methods of the present invention than in prior art processes. Accordingly, if layer 17 comprises aluminum oxide, platinum, or other materials which can be etched or otherwise corroded by nitrous oxide or products thereof, methods of the present invention can alleviate such corrosion relative to prior art methods.In compliance with the statute, the invention has been described in language more or less specific as to structural and methodical features. 
It is to be understood, however, that the invention is not limited to the specific features shown and described, since the means herein disclosed comprise preferred forms of putting the invention into effect. The invention is, therefore, claimed in any of its forms or modifications within the proper scope of the appended claims appropriately interpreted in accordance with the doctrine of equivalents.
A processing system includes a cache and a prefetcher to prefetch lines from a memory into the cache. The prefetcher receives a memory access request to a first address in the memory and sets a stride length associated with an instruction that issued the memory access request to a length of a line in the cache. The stride length indicates a number of bytes between addresses of lines that are prefetched from the memory into the cache.
WHAT IS CLAIMED IS:1. A method comprising:receiving, at a processor, a memory access request to a first memory address issued during execution of an instruction at the processor; and setting, at the processor, a stride length associated with the instruction to a length of a line in a cache, wherein the stride length indicates a number of bytes between addresses of lines that are prefetched from the memory into the cache.2. The method of claim 1, wherein setting the stride length comprises:determining a difference between the first memory address and a second memory address of a previous memory request issued by the instruction; and setting the stride length to the length of the line such that an absolute value of the difference is less than an absolute value of the length of the line.3. The method of claim 2, wherein setting the stride length further comprises:setting the stride length equal to the length of the line in response to the difference being less than or equal to the length of the line.4. The method of claim 2, wherein setting the stride length further comprises:setting the stride length equal to a negative value of the length of the line in response to the difference being negative and the absolute value of the difference being less than or equal to the length of the line.5. The method of claim 2, wherein setting the stride length further comprises:setting the stride length equal to twice the length of the line in response to the difference being greater than the length and less than or equal to twice the length; and setting the stride length equal to a negative value of twice the length of the line in response to the difference being negative and the absolute value of the difference being greater than the length and less than or equal to twice the length.6. The method according to any of the preceding claims, further comprising:prefetching a line into the cache from a prefetch address in the memory, wherein the prefetch address is equal to the first memory address plus a product of the stride length and a stride distance that indicates a number of strides that are prefetched ahead of the current address.7. The method of claim 6, wherein the prefetching is performed in response to a stride confidence associated with the stride length being greater than a threshold.8. The method of claim 7, further comprising:determining a previous stride length in response to a previous memory access request being directed to a line with a memory address different than the first memory address; and incrementing the stride confidence in response to the stride length being equal to the previous stride length.9. An apparatus comprising:a cache; and a prefetcher to prefetch lines from a memory into the cache, wherein the prefetcher is to receive a memory access request to a first memory address issued during execution of an instruction and set a stride length associated with an instruction that issued the memory access request to a length of a line in the cache, wherein the stride length indicates a number of bytes between addresses of lines that are prefetched from the memory into the cache.10. The apparatus of claim 9, wherein the prefetcher is to determine a difference between the first memory address and a second memory address in a previous memory request issued by the instruction and set the stride length to the length such that an absolute value of the difference is less than an absolute value of the length.11. 
The apparatus of claim 10, wherein the prefetcher is to set the stride length equal to the length of the line in response to the difference being less than or equal to the length of the line.12. The apparatus of claim 10, wherein the prefetcher is to set the stride length equal to a negative value of the length of the line in response to the difference being negative and the absolute value of the difference being less than or equal to the length of the line. 13. The apparatus according to any of the preceding claims, wherein the prefetcher is to prefetch a line into the cache from a prefetch address in the memory, wherein the prefetch address is equal to the first memory address plus a product of the stride length and a stride distance that indicates a number of strides that are prefetched ahead of the current address. 14. The apparatus of claim 13, wherein the prefetcher is to prefetch the line into the cache in response to a stride confidence associated with the stride length being greater than a threshold.15. The apparatus of claim 14, wherein the prefetcher is to increment the stride confidence in response to the stride length being equal to a previous stride length determined in response to a previous memory access request and the first memory address being to a different line than a previous address in the previous memory access request.
STRIDE PREFETCHER FOR INCONSISTENT STRIDES
BACKGROUND
Description of the Related Art
Processing systems typically use speculation to improve performance by prefetching data from a memory into a cache in the expectation that a processor in the processing system will subsequently request the prefetched data from the cache. Stride prefetchers identify patterns in memory addresses that are accessed by instructions being executed by a processor. For example, the stride prefetcher stores the memory address accessed by an instruction and determines a stride length that is equal to the difference between the current memory address and a memory address that was previously accessed by the instruction. The stride prefetcher counts the number of consecutive memory accesses that have the same stride length. The number is typically referred to as the stride confidence. If the stride confidence for a stride length is above a threshold, the stride prefetcher identifies a memory access pattern indicating that the instruction has issued a sequence of memory accesses whose addresses differ by the stride length. The stride prefetcher then predicts that the instruction will subsequently request information from memory addresses that follow the memory access pattern, i.e., memory addresses that are separated by the stride length. Information is prefetched from the predicted memory addresses into the cache so that the processor executing the instruction can retrieve the information from the cache.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is better understood, and its numerous features and advantages made apparent to those skilled in the art, by referencing the accompanying drawings. The use of the same reference symbols in different drawings indicates similar or identical items. FIG. 1 is a block diagram of a processing device, according to some embodiments. FIG. 2 is a block diagram of a prefetcher according to some embodiments. FIG. 3 is a flow diagram of a method for setting stride lengths based on a length of a line in a cache and prefetching data from the cache based on the stride length according to some embodiments.
DETAILED DESCRIPTION
Prefetched information is retrieved from a memory and added to a cache in quantized blocks that have a predetermined size, which can be determined based on a bandwidth or structure of an interface between the memory and the cache or the size of a line in the cache. For example, if lines in the cache store 64 bytes, every prefetch request causes 64 bytes of data to be retrieved from the memory and stored in a line of the cache. A sequence of instructions that each request eight bytes of data will produce a memory access pattern with a stride length of eight. However, each prefetch based on the memory access pattern will retrieve 64 bytes of data. Consequently, the stride prefetcher performs as many as seven redundant prefetches to retrieve information that was already retrieved from the memory and stored in the line of the cache in response to the first prefetch request for the first eight bytes of data. In addition to accessing memory in blocks that are smaller than a cache line, some instructions access memory with a stride length that is not constant, e.g., with a sequence of strides equal to 8 bytes, 16 bytes, 16 bytes, 8 bytes, etc. A conventional stride prefetcher that receives this sequence would not detect any stride length with a significant stride confidence. 
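To make the limitation concrete, the following C sketch models a conventional stride detector of the kind just described; the structure, names, and the confidence threshold are hypothetical choices for illustration, not details from any embodiment.

    #include <stdint.h>

    #define CONFIDENCE_THRESHOLD 2u

    /* Hypothetical per-instruction tracking entry for a conventional
       stride prefetcher: confidence counts consecutive equal strides. */
    typedef struct {
        uint64_t last_addr;   /* address of the previous access    */
        int64_t  stride;      /* last observed stride, in bytes    */
        uint32_t confidence;  /* consecutive repetitions of stride */
    } stride_entry;

    /* Returns the stride to prefetch with once confidence passes the
       threshold, or 0 when no pattern has been established. */
    static int64_t conventional_observe(stride_entry *e, uint64_t addr)
    {
        int64_t stride = (int64_t)(addr - e->last_addr);
        if (stride == e->stride)
            e->confidence++;   /* same stride again: pattern strengthens   */
        else
            e->confidence = 0; /* any change in stride resets the pattern  */
        e->stride = stride;
        e->last_addr = addr;
        return (e->confidence >= CONFIDENCE_THRESHOLD) ? stride : 0;
    }

Fed the inconsistent sequence of strides 8, 16, 16, 8 described above, the confidence field is reset before it can reach the threshold, so no prefetches are issued even though every access advances predictably through the same region of memory.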
Furthermore, in some cases a memory access request is dropped or skipped, which can cause the stride length to change even though the underlying memory access pattern associated with the instruction remains the same. For example, if the stride length for a sequence of instructions is 128 bytes, dropping one memory access request will cause the stride prefetcher to calculate a stride length of 256 bytes, which can cause the stride prefetcher to reduce the stride confidence to zero so that information is not prefetched until a new sequence of memory access requests separated by 128 bytes has been received. Missing one or more prefetch opportunities because of a variable stride length, a skipped memory access, or a dropped memory access can reduce the performance of the processing system. The performance of a stride prefetcher can be improved by setting a stride length associated with an instruction that issues a memory access request equal to a length of a line in a cache. The stride length is set to the length of the cache line based on a difference between a first address indicated by the memory access request and a second address in a previous memory access request by the instruction. For example, the stride prefetcher sets the stride length associated with the memory access request to the length of the line in the cache if the difference between the first address and the second address is less than or equal to the length of the line in the cache. The stride can be positive or negative. In cases where the stride is negative, the stride length is set to a negative value of the length. For example, the stride length can be set to a negative value of the cache line length if the difference between the first address and the second address is negative and the absolute value of the difference is less than or equal to the length. In some variations, the stride length is set to twice the length of the line in the cache if the difference is greater than the length and less than or equal to twice the length. The stride prefetcher is able to detect a memory access pattern for the instruction on the basis of the stride lengths that have been set to the cache line length and, in some cases, the stride prefetcher prefetches information from a memory to the cache based on the memory access pattern. For example, the stride prefetcher can increment a stride confidence if the first address is in a different line than the second address and the stride length associated with the first address is the same as the stride length associated with the second address. The stride confidence is not incremented if the first address is in the same line as the second address. The stride prefetcher then prefetches information from the memory to the cache when the stride confidence exceeds a threshold. FIG. 1 is a block diagram of a processing device 100, according to some embodiments. The processing device 100 includes a processing unit 105 (or "processor") that is configured to access instructions or data that are stored in a main memory 110 via an interface 115. In some variations, the processing unit 105 is used to implement a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other type of processing unit. 
The processing unit 105 shown in FIG. 1 includes four processor cores 111, 112, 113, 114 (collectively referred to herein as "the processor cores 111-114") that are used to execute the instructions or manipulate the data. The processor cores 111-114 can also be referred to as compute units. The processing unit 105 shown in FIG. 1 also implements a hierarchical (or multilevel) cache complex that is used to speed access to the instructions or data by storing selected instructions or data in the caches. However, some embodiments of the device 100 implement different configurations of the processing unit 105, such as configurations that use external caches, different numbers of processor cores 111-114, and the like. Moreover, some embodiments associate different numbers or types of caches with the processor cores 111-114. The cache complex depicted in FIG. 1 includes a level 2 (L2) cache 120 for storing copies of instructions or data that are stored in the main memory 110. Some embodiments of the L2 cache 120 are implemented using an associativity such as 2-way associativity, 8-way associativity, 16-way associativity, direct mapping, fully associative caches, and the like. Relative to the main memory 110, the L2 cache 120 is implemented using faster memory elements. The L2 cache 120 can also be deployed logically or physically closer to the processor cores 111-114 (relative to the main memory 110) so that information can be exchanged between the L2 cache 120 and the processor cores 111-114 more rapidly or with less latency. The illustrated cache complex also includes L1 caches 121, 122, 123, 124 (collectively referred to as "the L1 caches 121-124") for storing copies of instructions or data that are stored in the main memory 110 or the L2 cache 120. Each of the L1 caches 121-124 is associated with a corresponding processor core 111-114. The L1 caches 121-124 can be implemented in the corresponding processor cores 111-114 or the L1 caches 121-124 can be implemented outside the corresponding processor cores 111-114. Relative to the L2 cache 120, the L1 caches 121-124 are implemented using faster memory elements so that information stored in the lines of the L1 caches 121-124 can be retrieved quickly by the corresponding processor cores 111-114. The L1 caches 121-124 can also be deployed logically or physically closer to the processor cores 111-114 (relative to the main memory 110 or the L2 cache 120) so that information can be exchanged between the processor cores 111-114 and the L1 caches 121-124 more rapidly or with less latency (relative to communication with the main memory 110 or the L2 cache 120). Some embodiments of the L1 caches 121-124 are separated into caches for storing instructions and data, which are referred to as the L1-I caches 125, 126, 127, 128 (collectively referred to herein as "the L1-I caches 125-128") and the L1-D caches 131, 132, 133, 134 (collectively referred to herein as "the L1-D caches 131-134"). Separating or partitioning the L1 caches 121-124 into the L1-I caches 125-128 for storing instructions and the L1-D caches 131-134 for storing data allows these caches to be deployed closer to the entities that are likely to request instructions or data, respectively. Consequently, this arrangement can reduce contention and wire delays, and generally decrease latency associated with instructions and data. 
A replacement policy dictates that the lines in the L1-I caches 125-128 are replaced with instructions from the L2 cache 120 and the lines in the L1-D caches 131-134 are replaced with data from the L2 cache 120. However, some embodiments of the L1 caches 121-124 are partitioned into different numbers or types of caches that operate according to different replacement policies. Furthermore, some programming or configuration techniques allow the L1-I caches 125-128 to store data or the L1-D caches 131-134 to store instructions, at least on a temporary basis. The L2 cache 120 illustrated in FIG. 1 is inclusive, so that cache lines resident in the L1 caches 121-124, 125-128, 131-134 are also resident in the L2 cache 120. The L1 caches 121-124 and the L2 cache 120 represent one example of a multi-level hierarchical cache memory system. However, some embodiments of the processing device 100 use different multilevel caches including elements such as L0 caches, L1 caches, L2 caches, L3 caches, and the like, some of which may or may not be inclusive of the others. Each of the caches 120, 121-124, 125-128, 131-134 includes a plurality of lines for storing copies of the information from the memory 110. The lines have a predetermined length. For example, the length of a cache line can be set to a value of 64 bytes, although the length of the cache line is a matter of design choice. Furthermore, in some variations, the length of the cache lines can differ in the different caches 120, 121-124, 125-128, 131-134. Information is retrieved from the main memory (e.g., in response to a cache miss or a prefetch request) in blocks that have a length that is equal to the length of the cache line. The size of the retrieved block is therefore independent of the amount of information requested in the memory access request. For example, as discussed herein, a memory access request for eight bytes of data beginning at an address in the memory 110 results in 64 bytes of data (beginning at the address in the memory access request) being retrieved from the memory 110 and stored in a cache line of one or more of the caches 120, 121-124, 125-128, 131-134. The processing unit 105 includes one or more prefetchers 135 for prefetching instructions or data from the memory 110 into one or more of the caches 120, 121-124, 125-128, 131-134 before one of the processor cores 111-114 has generated a memory access request for the instructions or data. The prefetchers 135 are able to detect patterns in addresses in memory access requests issued by instructions that are executing on the processor cores 111-114. The patterns are detected based on numbers of bytes that separate addresses in successive memory access requests, which are referred to herein as "strides." The numbers of bytes between the addresses are referred to herein as "stride lengths." Thus, a memory access pattern can be represented by an initial address of a memory location and a sequence of subsequent addresses that correspond to the initial address plus integer multiples of the stride length of the memory access pattern. The number of integer multiples of the stride length that are used to prefetch information from the memory 110 into one or more of the caches 120, 121-124, 125-128, 131-134 is referred to herein as the "stride distance." 
For example, if the stride distance for a memory access pattern is three, the prefetchers 135 can prefetch information indicated by three successive addresses that are separated from each other by a stride length. Some embodiments of the prefetchers 135 can track strides on subgroups of addresses. For example, the data address streams generated by the processor cores 111-114 can be partitioned based on an instruction pointer (IP), a program counter (PC), a physical page that includes the address, or other criteria. Each prefetcher 135 can then track addresses in the data stream associated with one or more of the partitions. Tracking strides on subgroups of addresses can improve the accuracy of the tracking algorithm. As discussed herein, some instructions issue memory access requests to access the memory 110 in blocks that are smaller than a cache line and some instructions access memory with a stride length that is not constant, e.g., with a sequence of strides that have stride lengths equal to 8 bytes, 16 bytes, 16 bytes, 8 bytes, etc. Furthermore, in some cases a memory access request is dropped or skipped, which can cause the stride length detected by the prefetcher 135 to change even though the underlying memory access pattern associated with the instruction remains the same. The prefetchers 135 are therefore configured to set a stride length associated with an instruction that issued a memory access request to a length of a line in the caches 120, 121-124, 125-128, 131-134. In some variations, the prefetchers 135 determine a difference between a first address referenced by a current memory access request issued by an instruction and a second address in a previous memory access request issued by the instruction. The prefetchers 135 can then set the stride length to the cache line length such that an absolute value of the difference is less than the cache line length. For example, if the difference between the first and second addresses is eight bytes and the cache line length is 64 bytes, the stride length is set to 64 bytes. For another example, if the difference between the first and second addresses is negative because the first address is smaller than the second address, the stride length is set to -64 bytes if the absolute value of the difference is less than 64 bytes. Setting a stride length to the cache line length can be referred to as "snapping" the stride length to the cache line length. The prefetchers 135 detect memory access patterns and prefetch information into the caches 120, 121-124, 125-128, 131-134 based on the stride lengths that have been set or "snapped" to corresponding cache line lengths, as discussed herein. For example, the prefetchers 135 can prefetch a line from the memory 110 if a current value of the stride length associated with an instruction is the same as a previous value of the stride length, which indicates that the memory access requests are continuing to follow the memory access pattern. The prefetchers 135 also check whether the address in the current memory access request is in the same line as the address of the previous memory access request. If so, the prefetchers 135 have already prefetched the data that would have been prefetched in response to the current memory access request, and so the prefetchers 135 bypass prefetching any data in response to the current memory access request. 
In some variations, the prefetchers 135 can implement other features such as snapping the stride length to twice the cache line length, prefetching information based on a stride confidence of the stride length, issuing more than one prefetch request in response to a memory access request, and the like, as discussed herein.

FIG. 2 is a block diagram of a prefetcher 200 according to some embodiments. The prefetcher 200 is used to implement some embodiments of the prefetchers 135 shown in FIG. 1. The prefetcher 200 receives signals indicating events related to memory access requests, such as hits or misses associated with a load instruction, hits or misses associated with a store instruction, and the like. Miss address buffer (MAB) events, such as hit or miss events for loads or stores, are received or accessed by an event selector block 205, which is used to select events that are to be passed to other stages of the prefetcher 200. For example, the highest-priority events can be stored in the registers 210 until they are passed to one or more stream engines 215 and a stream allocation unit 220, e.g., during a subsequent clock cycle. The priority of events can be determined using a hierarchy such as giving the highest priority to load misses and then assigning successively lower priorities to store misses, load hits, and store hits.

The prefetcher 200 includes one or more stream engines 215 that can be used to manage separate prefetch streams. The stream engines 215 provide a signal to the stream allocation unit 220 to indicate that the current event either hit or missed the stream managed by the stream engine 215. If none of the existing streams indicates a hit for the MAB miss event, then the stream allocation unit 220 can allocate a new stream to a different stream engine 215 using the current event information. When a stream is first allocated, the stream engine 215 sets a page address and an offset value to the current event cache line address. The stream engine 215 can then monitor further MAB events to detect events at addresses adjacent to the current event cache line address in either direction.

As discussed herein, the prefetcher 200 can be configured to set a stride length associated with an instruction that issued a memory access request to a cache line length of the cache that receives the prefetched cache lines. For example, if the current event cache line address is set to A, then the stream engine 215 compares addresses of events to the current event cache line address to determine a difference between the addresses, e.g., addresses A+8 bytes or A-8 bytes would have differences of +8 bytes and -8 bytes, respectively. The stream engine 215 determines a stride length for the event by snapping the differences to a cache line length. For example, a difference of +8 bytes is snapped to a stride length of 64 bytes and a difference of -8 bytes is snapped to a stride length of -64 bytes. For another example, a difference of +80 bytes is snapped to a stride length of 128 bytes and a difference of -80 bytes is snapped to a stride length of -128 bytes.

The snapped stride lengths are used to train the prefetch streams. If the stream engine 215 sees a stream of addresses associated with an instruction that have a snapped stride length of 64 bytes, the stream engine 215 defines a stream in the appropriate direction (positive for A+64 bytes and negative for A-64 bytes) and trains a new prefetch stream.
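A hypothetical C sketch of the stream-engine bookkeeping described above follows; the structure layout and function names are invented for illustration, and the snapping follows the examples in the text (with 64-byte lines, ±8 bytes snaps to ±64 bytes and ±80 bytes snaps to ±128 bytes).

    #include <stdint.h>

    /* Invented stream-engine state for this sketch: a newly allocated stream
     * records the cache line address of the triggering event and then watches
     * later events for a repeated snapped stride in either direction. */
    struct stream_entry {
        uint64_t line_addr;  /* cache line address of the current event      */
        int64_t  stride;     /* last snapped stride in bytes (signed)        */
        int      trained;    /* nonzero once the same snapped stride repeats */
    };

    /* Snap an address difference: within one line -> +/-line; between one and
     * two lines -> +/-2*line (e.g., +80 bytes -> +128 bytes for 64-byte lines). */
    static int64_t snap_diff(int64_t diff, int64_t line)
    {
        int64_t mag = diff < 0 ? -diff : diff;
        int64_t sgn = diff < 0 ? -1 : 1;
        if (mag == 0)
            return 0;
        if (mag <= line)
            return sgn * line;
        if (mag <= 2 * line)
            return sgn * 2 * line;
        return diff;
    }

    /* Observe a new event address: a repeated snapped stride trains the
     * stream in the corresponding direction. */
    static void stream_observe(struct stream_entry *s, uint64_t addr,
                               int64_t line)
    {
        int64_t snapped = snap_diff((int64_t)(addr - s->line_addr), line);
        if (snapped != 0 && snapped == s->stride)
            s->trained = 1;
        s->stride = snapped;
        s->line_addr = addr;
    }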
Some embodiments of the stream engine 215 can also implement additional features, as discussed herein.

The prefetcher 200 includes a request arbiter 225 that is used to arbitrate prefetch requests from the stream engines 215. The request arbiter 225 can be a rotating priority arbiter, but other types of request arbiter 225 can alternatively be implemented in the prefetcher 200. Requests are transferred from the request arbiter 225 to a register 230 so that the request information can be provided to a prefetch request interface 235, e.g., during a subsequent clock cycle. The prefetch request interface 235 provides feedback to the request arbiter 225, which can be used to select or arbitrate between pending requests from the stream engines 215.

FIG. 3 is a flow diagram of a method 300 for setting stride lengths based on a length of a line in a cache and prefetching data from the cache based on the stride length according to some embodiments. The method 300 is implemented by some embodiments of the prefetchers 135 shown in FIG. 1 or the prefetcher 200 shown in FIG. 2.

At block 305, a memory access request is received. The memory access request references a memory address. For example, the memory access request is generated by an instruction such as a read instruction or a write instruction. In those cases, the memory address indicates a memory location that is to be read or written by the instruction.

At block 310, the prefetcher determines a difference between the current memory address referenced by the memory access request and a previous memory address accessed by the previous memory access. As discussed herein, the difference indicates a number of bytes between the current memory address and the previous memory address, which is either positive or negative depending on the relative positions of the current and previous memory addresses in the memory.

At block 315, the prefetcher sets (or snaps) the absolute value of a stride length associated with the memory access request to a length of a line in a cache that receives the prefetched information. For example, if the cache line length is 64 bytes, absolute values of address differences of ±8 bytes (or any other differences with absolute values that are less than or equal to 64 bytes) are snapped to 64 bytes. The stride length associated with the memory access request is then determined by attaching appropriate signs to the absolute values depending on the sign of the address difference. For example, the stride length for an address difference of 8 bytes is snapped to 64 bytes and the stride length for an address difference of -8 bytes is snapped to -64 bytes. In some variations, larger differences are snapped to a value equal to twice the cache line length. For example, the stride length for an address difference of 96 bytes is snapped to 128 bytes (i.e., twice the cache line length) and the stride length for an address difference of -96 bytes is snapped to -128 bytes.

At decision block 320, the prefetcher determines whether to prefetch one or more lines from the memory into the cache based on a comparison of current and previous stride lengths. For example, the prefetcher determines whether the current stride length is equal to the previous stride length, which indicates whether the memory access requests are continuing to follow the same access pattern. The prefetcher also determines whether to prefetch the one or more lines from the memory into the cache based on a comparison of current and previous memory addresses.
For example, the prefetcher determines whether the current address is in the same line or a different line than the previous address, which indicates whether information in the line that would be prefetched in response to the current memory access request has already been prefetched into the cache in response to the previous memory access request. If the current stride length is equal to the previous stride length and the current address is in a different line than the previous address, the method 300 flows to block 325. If the current stride length is not equal to the previous stride length or the current address is in the same line as the previous address, the method flows to block 330.

At block 325, the prefetcher prefetches at least one line from an address indicated by a stride length and a stride distance associated with the memory access request. For example, if the stride length is 64 bytes and the stride distance is three, the prefetcher prefetches a line from an address in the memory that is separated from the current address by a distance (measured in bytes) equal to a product of the stride length and the stride distance (i.e., 192 bytes). As discussed herein, in some variations the prefetcher prefetches one or more additional lines that are one or more stride lengths from the previously prefetched line.

At block 330, the prefetcher bypasses prefetching a line because at least a subset of the information in the line that would be prefetched at this stage has already been prefetched into the cache in response to the previous memory access request.

Some variations of the prefetchers, such as the prefetchers 135 shown in FIG. 1 or the prefetcher 200 shown in FIG. 2, implement one or more additional features to modify the prefetching algorithm, as discussed herein. These features can further enhance the effectiveness of the prefetching algorithm.

The following pseudocode is an example of a stride prefetcher algorithm that does not snap stride lengths to a length of a line in a cache that receives the prefetched information. In the pseudocode, the memory address referenced by the current memory access request is indicated by strideAddr, the memory address referenced by the previous memory access request is indicated by lastAddr, the current and previous stride lengths are indicated by strideLength and last strideLength, respectively, and the stride distance is indicated by strideDistance. Instructions are indicated by an instruction pointer (IP) and values of the stride parameters associated with each instruction are stored in a stride table.
For each instruction which references memory:
    if (the IP of the current instruction matches one of the IPs stored in a stride table) {
        update strideAddr
        if (current strideLength is the same as last strideLength -and- currentAddr is to a different line than lastAddr) {
            increment strideConfidence (if it is below some maximum confidence)
            if (strideConfidence is greater than some threshold) {
                issue prefetch to currentAddr + (strideDistance * strideLength)
                if (strideDistance is less than some maximum distance) {
                    issue prefetch to currentAddr + ((strideDistance+1) * strideLength)
                    increment strideDistance
                }
            }
        } else if (strideConfidence is non-zero) {
            decrement strideConfidence
            if (current strideLength is double last strideLength) {
                decrement strideDistance
                issue prefetch to currentAddr + (strideDistance * strideLength)
            } else {
                set strideDistance to zero
            }
        } else {
            update strideLength
            set strideDistance to zero
        }
    } else if (the current memory reference missed in the local cache) {
        replace a stride prefetcher entry:
            update IP & address with the values from the current memory reference
            set strideConfidence and strideDistance to zero
    }

Prefetchers that implement the preceding pseudocode maintain a value of a stride confidence (strideConfidence) to indicate the likelihood that the current instruction is issuing memory access requests that follow an access pattern indicated by the stride parameters. Thus, the prefetcher only prefetches information if the stride confidence is above a threshold value. The stride confidence is incremented each time the same stride length is repeated and the current access is in a different line than the previous access. In some variations, the stride confidence is only incremented up to a maximum stride confidence. Otherwise, the stride confidence is decremented until the stride confidence reaches zero.

Table 1 is an example of a stride table that includes information indicating a current address for a current memory access request (CurrentAddr), a previous (or last) address for a previous memory access request (LastAddr), a previous stride length (LastStride), a previous stride confidence (LastConf), an indication of whether the current stride length matches the previous stride length (StrideMatch), the current stride length (NewStride), and a current stride confidence (NewConf).

Table 1

As indicated in the above table, the stride lengths never match and so the stride confidence does not increase. Thus, the prefetcher does not issue any prefetches, even though the stride lengths follow an alternating pattern of 0x20, 0x30, 0x20, 0x30, etc.

The following pseudocode is an example of a stride prefetcher algorithm that snaps stride lengths to a length of a line in a cache that receives the prefetched information, as discussed herein. The pseudocode also snaps the stride length to a value equal to twice the length of the line in the cache if the stride length is between the length of a cache line and twice the length of the cache line.
The cache line length is indicated by lineSize in the pseudocode.

For each instruction which references memory:
    if (the IP of the current instruction matches one of the IPs stored in the stride table) {
        update strideAddr
        if (current strideLength is the "same" (after snapping address differences to -lineSize or lineSize) as last strideLength -and- currentAddr is to a different line than lastAddr) {
            increment strideConfidence (if it is below some maximum confidence)
            if (strideConfidence is greater than some threshold) {
                issue prefetch to currentAddr + (strideDistance * strideLength)
                if (strideDistance is less than some maximum distance) {
                    issue prefetch to currentAddr + ((strideDistance+1) * strideLength)
                    increment strideDistance
                }
            }
        } else if (strideConfidence is non-zero) {
            decrement strideConfidence
            if (current strideLength is "double" (for subline strides, between lineSize and 2*lineSize or between -lineSize and -2*lineSize) last strideLength) {
                decrement strideDistance
                issue prefetch to currentAddr + (strideDistance * strideLength)
            } else {
                set strideDistance to zero
            }
        } else {
            update strideLength (snapping subline strides to -lineSize or lineSize)
            set strideDistance to zero
        }
    } else if (the current memory reference missed in the local cache) {
        replace a stride prefetcher entry:
            update IP & address with the values from the current memory reference
            set strideConfidence and strideDistance to zero
    }

In the above example pseudocode, the stride distance can be decremented in response to the current stride length being different than the previous stride length. The stride distance can also be decremented in response to the current address being in the same line as the previous address. These features allow prefetchers that implement the pseudocode to adjust to strides that are either dropped or missing in the memory access pattern. Decrementing or reducing the stride prefetch distance in this manner can avoid gaps in the prefetch pattern introduced by the dropped or missed strides.

Table 2 is an example of a stride table that is produced by applying the above example pseudocode to snap memory address differences to a cache line length before detecting memory access patterns or deciding whether to prefetch cache lines from memory to a cache, as discussed herein. The sequence of memory addresses indicated in the memory access requests shown in Table 2 is the same as the sequence of memory addresses shown in Table 1.

Table 2

The address differences between the addresses in the sequence shown in Table 2 are all less than or equal to 64 bytes and so they are snapped to a value of 64 bytes, as indicated by "<=64B" in the LastStride column. The StrideMatch column indicates that the current stride matches the previous stride beginning with the memory access request to the address 0x70. However, the current address 0x70 of this memory access request is in the same line as the previous address 0x50, so the stride length is not updated and the stride confidence is not updated. The stride length associated with the memory access request to the subsequent address 0xa0 matches the stride length of the previous memory access request and is not in the same line as the address 0x50 of the previous memory access request. Consequently, the stride length is updated to "<=64B" as indicated in the NewStride column and the stride confidence is incremented as indicated in the NewConf column.
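For readers who prefer working code to pseudocode, the following self-contained C sketch implements the snapped-stride decision logic above under stated assumptions: the entry layout, the thresholds (CONF_THRESHOLD, DIST_MAX), and the issue_prefetch stub are invented for this example and are not taken from the embodiments.

    #include <stdint.h>
    #include <stdio.h>

    enum { LINE = 64, CONF_MAX = 3, CONF_THRESHOLD = 2, DIST_MAX = 4 };

    /* One stride-table entry; in the text, entries are indexed by the IP of
     * the instruction issuing the memory access requests. */
    struct stride_entry {
        uint64_t last_addr;
        int64_t  last_stride;
        int      confidence;
        int      distance;
    };

    static void issue_prefetch(uint64_t addr)
    {
        printf("prefetch 0x%llx\n", (unsigned long long)addr);
    }

    /* Snap subline differences to +/-LINE and differences between one and
     * two lines to +/-2*LINE, as in the pseudocode above. */
    static int64_t snap(int64_t d)
    {
        int64_t m = d < 0 ? -d : d, s = d < 0 ? -1 : 1;
        if (m == 0) return 0;
        if (m <= LINE) return s * LINE;
        if (m <= 2 * LINE) return s * 2 * LINE;
        return d;
    }

    static void on_access(struct stride_entry *e, uint64_t addr)
    {
        int64_t stride = snap((int64_t)(addr - e->last_addr));
        int same_line = (addr / LINE) == (e->last_addr / LINE);

        if (stride == e->last_stride && !same_line) {
            if (e->confidence < CONF_MAX)
                e->confidence++;
            if (e->confidence > CONF_THRESHOLD) {
                issue_prefetch(addr + (uint64_t)(e->distance * stride));
                if (e->distance < DIST_MAX) {
                    issue_prefetch(addr + (uint64_t)((e->distance + 1) * stride));
                    e->distance++;
                }
            }
        } else if (e->confidence > 0) {
            e->confidence--;
            if (stride == 2 * e->last_stride) { /* dropped access doubled the stride */
                if (e->distance > 0)
                    e->distance--;
                issue_prefetch(addr + (uint64_t)(e->distance * stride));
            } else {
                e->distance = 0;
            }
        } else {
            e->last_stride = stride;            /* retrain on the new stride */
            e->distance = 0;
        }
        e->last_addr = addr;
    }

Feeding on_access the address sequence of Table 2 would raise the confidence on each repeated snapped stride and begin issuing prefetches once the threshold is crossed, mirroring the behavior indicated in the NewConf column.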
Successive memory access requests lead to increases in the stride confidence until the stride confidence exceeds the threshold value (e.g., a threshold value of two) following the memory access request to the memory address 0x110. A prefetch request is issued in response to this memory access request, as well as the subsequent memory access requests, as indicated by the asterisk (*) on the stride confidence values.

In some embodiments, the apparatus and techniques described above are implemented in a system including one or more integrated circuit (IC) devices (also referred to as integrated circuit packages or microchips), such as the processing system described above with reference to FIGs. 1-3. Electronic design automation (EDA) and computer aided design (CAD) software tools are used in the design and fabrication of these IC devices. These design tools typically are represented as one or more software programs. The one or more software programs include code executable by a computer system to manipulate the computer system to operate on code representative of circuitry of one or more IC devices so as to perform at least a portion of a process to design or adapt a manufacturing system to fabricate the circuitry. This code can include instructions, data, or a combination of instructions and data. The software instructions representing a design tool or fabrication tool typically are stored in a computer readable storage medium accessible to the computing system. Likewise, the code representative of one or more phases of the design or fabrication of an IC device can be stored in and accessed from the same computer readable storage medium or a different computer readable storage medium.

A computer readable storage medium can include any non-transitory storage medium, or combination of non-transitory storage media, accessible by a computer system during use to provide instructions and/or data to the computer system. Such storage media can include, but are not limited to, optical media (e.g., compact disc (CD), digital versatile disc (DVD), Blu-Ray disc), magnetic media (e.g., floppy disc, magnetic tape, or magnetic hard drive), volatile memory (e.g., random access memory (RAM) or cache), non-volatile memory (e.g., read-only memory (ROM) or Flash memory), or microelectromechanical systems (MEMS)-based storage media. The computer readable storage medium can be embedded in the computing system (e.g., system RAM or ROM), fixedly attached to the computing system (e.g., a magnetic hard drive), removably attached to the computing system (e.g., an optical disc or Universal Serial Bus (USB)-based Flash memory), or coupled to the computer system via a wired or wireless network (e.g., network accessible storage (NAS)).

In some embodiments, certain aspects of the techniques described above can be implemented by one or more processors of a processing system executing software. The software comprises one or more sets of executable instructions stored or otherwise tangibly embodied on a non-transitory computer readable storage medium. The software can include the instructions and certain data that, when executed by the one or more processors, manipulate the one or more processors to perform one or more aspects of the techniques described above. The non-transitory computer readable storage medium can include, for example, a magnetic or optical disk storage device, solid state storage devices such as Flash memory, a cache, random access memory (RAM) or other non-volatile memory device or devices, and the like.
The executable instructions stored on the non-transitory computer readable storage medium can be in source code, assembly language code, object code, or other instruction format that is interpreted or otherwise executable by one or more processors.

Note that not all of the activities or elements described above in the general description are required, that a portion of a specific activity or device may not be required, and that one or more further activities may be performed, or elements included, in addition to those described. Still further, the order in which activities are listed is not necessarily the order in which they are performed. Also, the concepts have been described with reference to specific embodiments. However, one of ordinary skill in the art appreciates that various modifications and changes can be made without departing from the scope of the present disclosure as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present disclosure.

Benefits, other advantages, and solutions to problems have been described above with regard to specific embodiments. However, the benefits, advantages, solutions to problems, and any feature(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature of any or all the claims. Moreover, the particular embodiments disclosed above are illustrative only, as the disclosed subject matter may be modified and practiced in different but equivalent manners apparent to those skilled in the art having the benefit of the teachings herein. No limitations are intended to the details of construction or design herein shown, other than as described in the claims below. It is therefore evident that the particular embodiments disclosed above may be altered or modified and all such variations are considered within the scope of the disclosed subject matter. Accordingly, the protection sought herein is as set forth in the claims below.
In described examples, an integrated circuit (100) includes a field-plated FET (110) and is formed by forming a first opening in a layer of oxide mask, exposing an area for a drift region (116). Dopants are implanted into the substrate (102) under the first opening. Subsequently, dielectric sidewalls are formed along a lateral boundary of the first opening. A field relief oxide (122) is formed by thermal oxidation in the area of the first opening exposed by the dielectric sidewalls. The implanted dopants are diffused into the substrate (102) to form the drift region (116), extending laterally past the layer of field relief oxide (122). The dielectric sidewalls and layer of oxide mask are removed after the layer of field relief oxide (122) is formed. A gate (130) is formed over a body (120) of the field-plated FET (110) and over the adjacent drift region (116). A field plate (132) is formed immediately over the field relief oxide (122) adjacent to the gate (130).
CLAIMS
What is claimed is:
1. An integrated circuit, comprising:
a substrate comprising a semiconductor material;
field oxide disposed at a top surface of the substrate; and
a field-plated field effect transistor (FET), comprising:
a field relief oxide of silicon dioxide disposed at the top surface of the substrate, the field relief oxide having a bird's beak structure at lateral edges of the field relief oxide, the field relief oxide being thinner than the field oxide;
a drift region disposed in the substrate under the field relief oxide, the drift region having a first conductivity type, the drift region extending laterally past the field relief oxide by equal lateral distances on opposite sides of the field relief oxide, the drift region being free of the field oxide;
a body disposed in the substrate, the body having a second, opposite, conductivity type, the body abutting the drift region at the top surface of the substrate;
a gate dielectric layer disposed at the top surface of the substrate adjacent to the field relief oxide, the field relief oxide being at least twice as thick as the gate dielectric layer;
a gate disposed over the gate dielectric layer, the gate extending over a portion of the body, and over a portion of the drift region between the body and the field relief oxide; and
a field plate disposed immediately over the field relief oxide.
2. The integrated circuit of claim 1, the field-plated FET comprising a charge adjustment region disposed in the substrate immediately under the field relief oxide, the charge adjustment region being laterally recessed from a boundary of the drift region, the charge adjustment region having dopants of the second conductivity type.
3. The integrated circuit of claim 1, the field-plated FET comprising a charge adjustment region disposed in the substrate immediately under the field relief oxide, the charge adjustment region being laterally recessed from a boundary of the drift region, the charge adjustment region having dopants of the first conductivity type.
4. The integrated circuit of claim 1, wherein the gate extends partway over the field relief oxide to provide the field plate.
5. The integrated circuit of claim 1, wherein the field plate is electrically isolated from the gate.
6. The integrated circuit of claim 1, the drift region being n-type, the drift region comprising an arsenic diffused region immediately below the field relief oxide, a majority of n-type dopants in the arsenic diffused region being arsenic, and a phosphorus diffused region below the arsenic diffused region, a majority of n-type dopants in the phosphorus diffused region being phosphorus.
7. The integrated circuit of claim 1, wherein the drift region extends under the gate laterally past the field relief oxide by a distance of 100 nanometers to 200 nanometers.
8. The integrated circuit of claim 1, comprising a planar FET comprising a drift region disposed in the substrate, the drift region of the planar FET having the first conductivity type, the drift region of the planar FET having a substantially equal distribution of dopants as the drift region of the field-plated FET, the planar FET being free of a field relief oxide.
9. A method of forming an integrated circuit, the method comprising:
providing a substrate comprising a semiconductor material;
forming elements of field oxide at a top surface of the substrate;
forming a layer of oxide mask over the top surface of the substrate in an area for a field-plated FET;
removing the layer of oxide mask to form a first opening in an area for the field-plated FET;
implanting dopants of a first polarity into the substrate under the first opening while the layer of oxide mask is in place;
subsequently forming dielectric sidewalls in the first opening on lateral edges of the layer of oxide mask;
forming a field relief oxide by thermal oxidation in the first opening at the top surface of the substrate while the dielectric sidewalls are in place, the field relief oxide being thinner than the field oxide;
activating the dopants of the first polarity in the substrate under the first opening to form a drift region of the field-plated FET, the drift region having a first conductivity type, the drift region extending laterally past the field relief oxide;
removing the dielectric sidewalls and the layer of oxide mask;
forming a body in the substrate, the body having a second, opposite, conductivity type, the body abutting the drift region at the top surface of the substrate;
forming a gate dielectric layer at the top surface of the substrate adjacent to the field relief oxide, the field relief oxide being at least twice as thick as the gate dielectric layer;
forming a gate disposed over the gate dielectric layer, the gate extending over a portion of the body, and over a portion of the drift region between the body and the field relief oxide; and
forming a field plate immediately over the field relief oxide.
10. The method of claim 9, wherein the layer of oxide mask comprises silicon nitride.
11. The method of claim 9, comprising forming a layer of pad oxide at the top surface of the substrate, before forming the layer of oxide mask, so that the layer of oxide mask is formed over the layer of pad oxide.
12. The method of claim 9, wherein implanting the dopants of the first polarity comprises:
implanting phosphorus at a dose of 1×10^12 cm^-2 to 4×10^12 cm^-2 at an energy of 150 kilo-electron volts (keV) to 225 keV; and
implanting arsenic at a dose of 2×10^12 cm^-2 to 6×10^12 cm^-2 at an energy of 100 keV to 150 keV.
13. The method of claim 9, comprising performing a thermal drive operation after implanting the dopants of the first polarity and before forming the dielectric sidewalls, the thermal drive operation comprising a furnace anneal at about 900 °C to 1050 °C for 30 minutes to 60 minutes.
14. The method of claim 9, wherein the dielectric sidewalls comprise silicon nitride.
15. The method of claim 9, wherein the dielectric sidewalls comprise silicon dioxide.
16. The method of claim 9, comprising implanting charge adjustment dopants into the substrate under the first opening, after forming the dielectric sidewalls and before forming the field relief oxide.
17. The method of claim 9, wherein forming the gate comprises:
forming a layer of gate material over the gate dielectric layer and the field relief oxide; and
patterning the layer of gate material to extend over the portion of the body, over the portion of the drift region between the body and the field relief oxide, and over a portion of the field relief oxide, to provide the gate and to provide the field plate as an extension of the gate.
18. The method of claim 9, comprising forming the field plate to be electrically isolated from the gate.
19. The method of claim 9, comprising forming a planar FET free of a field relief oxide, by a process comprising:
removing the layer of oxide mask to form a second opening in an area for the planar FET concurrently with removing the layer of oxide mask to form a first opening in an area for the field-plated FET, wherein a width of the second opening is less than 2.5 times a thickness of a conformal dielectric layer formed to provide the dielectric sidewalls in the first opening;
implanting dopants of the first polarity into the substrate under the second opening for a drift region of the planar FET, while the layer of oxide mask is in place, concurrently with implanting the dopants of the first polarity under the first opening while the layer of oxide mask is in place;
subsequently forming the conformal dielectric layer over the layer of oxide mask and in the first opening and the second opening; and
performing an anisotropic etch which removes the conformal dielectric layer from over the layer of oxide mask and from a central portion of the first opening, to leave the dielectric sidewalls in the first opening and to leave the dielectric material of the conformal dielectric layer in the second opening so as to block the second opening.
20. The method of claim 9, wherein the layer of oxide mask is a chemical mechanical polish (CMP) stop layer formed as part of forming the elements of field oxide.
DRIFT REGION IMPLANT SELF-ALIGNED TO FIELD RELIEF OXIDE WITH SIDEWALL DIELECTRIC

[0001] This relates generally to integrated circuits, and more particularly to field effect transistors in integrated circuits.

BACKGROUND

[0002] Some integrated circuits contain field effect transistors (FETs) with drift regions to enable higher voltage operation. As these integrated circuits are scaled to the next generation of products, a desire exists to increase the switching frequency of these FETs to reduce the sizes of the external passive components such as inductors while maintaining a low power dissipation in these FETs. This requires simultaneously reducing the switching parasitics and the on-state specific resistances (the area-normalized on-state resistances) of the FETs.

[0003] To enable operation at elevated drain voltage, the FETs employ drift regions that deplete under high drain voltage conditions, allowing the FETs to block the voltage while supporting conduction during the on-state. A higher voltage FET tends to be formed with the gate extending over field oxide in order to act as a field plate for the drift region. Unfortunately, field oxide in advanced fabrication nodes such as the 250 nanometer node and beyond is commonly formed by shallow trench isolation (STI) processes, and is generally too thick for optimal use as a field relief oxide under a gate extension field plate in such a FET.

SUMMARY

[0004] In described examples, an integrated circuit includes a field-plated FET and is formed by forming a layer of oxide mask over a top surface of a substrate of the integrated circuit, covering an area for the field-plated FET. A first opening is formed in the layer of oxide mask, exposing an area for a drift region of the field-plated FET. Dopants are implanted into the substrate under the first opening. Subsequently, dielectric sidewalls are formed on the layer of oxide mask along a lateral boundary of the first opening. A layer of field relief oxide is formed at the top surface of the substrate in the area of the first opening which is exposed by the dielectric sidewalls. The implanted dopants are diffused into the substrate to form the drift region, extending laterally past the layer of field relief oxide. The dielectric sidewalls and layer of oxide mask are removed after the layer of field relief oxide is formed. A gate of the field-plated FET is formed over a body of the field-plated FET, extending over the adjacent drift region. A field plate is formed immediately over the field relief oxide adjacent to the gate.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 is a cross section of an example integrated circuit including a field-plated FET.

[0006] FIG. 2A through FIG. 2K are cross sections of the integrated circuit of FIG. 1, depicting successive stages of an example method of formation.

[0007] FIG. 3A through FIG. 3F are cross sections of another example integrated circuit containing a field-plated FET, depicted in successive stages of an example method of formation.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0008] The figures are not drawn to scale, and they are provided to illustrate the description. Example embodiments are not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with example embodiments.

[0009] FIG. 1 is a cross section of an example integrated circuit including a field-plated FET.
In this example, an n-channel field-plated FET is described. An analogous p-channel field-plated FET may be described with appropriate changes in polarities of dopants. The integrated circuit 100 includes a substrate 102, possibly with a heavily doped n-type buried layer 104 and a p-type layer 106 over the n-type buried layer 104. The p-type layer 106 extends to a top surface 108 of the substrate 102. The integrated circuit 100 includes the n-channel field-plated FET 110. The integrated circuit 100 may also optionally include a planar FET 112. Components of the integrated circuit 100, such as the field-plated FET 110 and the planar FET 112, may be laterally separated by field oxide 114. The field oxide 114 may have an STI structure as depicted in FIG. 1, or may have a localized oxidation of silicon (LOCOS) structure.

[0010] The field-plated FET 110 includes an n-type drift region 116 disposed in the substrate 102. The drift region 116 extends from an n-type drain contact region 118 to a p-type body 120 of the field-plated FET 110. For example, an average dopant density of the drift region 116 may be 1×10^16 cm^-3 to 1×10^16 cm^-3. The drift region 116 may have a heavier-doped top portion and a lighter-doped bottom portion, to provide desired values of breakdown voltage and specific resistance for the field-plated FET 110. A layer of field relief oxide 122 is disposed over the drift region 116. The field relief oxide 122 has a tapered profile at lateral edges of the field relief oxide 122, commonly referred to as a bird's beak. The field relief oxide 122 is thinner than the field oxide 114. The drift region 116 extends past the field relief oxide 122 by a lateral distance 124 adjacent to the body 120. For example, the lateral distance 124 may be 100 nanometers to 200 nanometers, which may advantageously provide desired low values of specific resistance and gate-drain capacitance of the field-plated FET 110. A gate dielectric layer 126 of the field-plated FET 110 is disposed at the top surface 108 of the substrate 102, extending from the field relief oxide 122 to an n-type source 128 of the field-plated FET 110 abutting the body 120 opposite from the drift region 116. The gate dielectric layer 126 is disposed over a portion of the drift region 116 which extends past the field relief oxide 122, and over a portion of the body 120 between the drift region 116 and the source 128. The field relief oxide 122 is at least twice as thick as the gate dielectric layer 126. The field-plated FET 110 includes a gate 130 disposed over the gate dielectric layer 126, extending from the source 128, over the portion of the body 120 between the drift region 116 and the source 128, and over the portion of the drift region 116 which extends past the field relief oxide 122. In this example, the gate 130 extends partway over the field relief oxide 122 to provide a field plate 132 over a portion of the drift region 116. In an alternative version of this example, the field plate may be provided by a separate structural element from the gate 130. The thickness of the field relief oxide 122 may be selected to provide a desired maximum value of electric field in the drift region 116 during operation of the field-plated FET 110.

[0011] The field-plated FET 110 may possibly include an optional charge adjustment region 134 disposed in the substrate immediately under the field relief oxide 122. The charge adjustment region 134 is substantially aligned with the field relief oxide 122.
In one version of this example, dopants in the charge adjustment region 134 may be n-type, such as phosphorus and/or arsenic, so that a net dopant density in the charge adjustment region 134 is higher than in the drift region 116 below the charge adjustment region 134. In this version of this example, the charge adjustment region 134 may be considered to be a part of the drift region 116. In another version of this example, dopants in the charge adjustment region 134 may be p-type, such as boron, gallium and/or indium, which compensate, but do not counterdope, the n-type dopants of the drift region 116, so that a net dopant density in the charge adjustment region 134 is lower than in the drift region 116 below the charge adjustment region 134, but remains n-type. In this version of this example, the charge adjustment region 134 may also be considered to be a part of the drift region 116. In a further version of this example, the dopants in the charge adjustment region 134 may be p-type, which counterdope the n-type dopants of the drift region 116, so that a net dopant density in the charge adjustment region 134 is converted to p-type. In this version of this example, the charge adjustment region 134 may be considered to be separate from the drift region 116. Dopant polarity and density in the charge adjustment region 134 may be selected to provide desired values of breakdown voltage and specific resistance for the field-plated FET 110.

[0012] The field-plated FET 110 may also include a p-type body contact region 136 disposed in the substrate 102 in the body 120. Gate sidewall spacers 138 may be disposed on side surfaces of the gate 130. Metal silicide 140 may be disposed on the drain contact region 118, the source 128, and the body contact region 136. The field-plated FET 110 may have a drain-centered configuration in which the drain contact region 118 is surrounded by the field relief oxide 122, which is surrounded by the body 120 and source 128. Other configurations of the field-plated FET 110 are within the scope of this example.

[0013] The planar FET 112 includes an n-type drift region 142 disposed in the substrate 102. The drift region 142 extends from an n-type drain contact region 144 to a p-type body 146 of the planar FET 112. The planar FET 112 is free of a layer of field relief oxide similar to the field relief oxide 122 of the field-plated FET 110. The planar FET 112 is also free of charge adjustment regions similar to the charge adjustment region 134 of the field-plated FET 110. The drift region 142 of the planar FET 112 has a similar distribution and species of dopants as the drift region 116 of the field-plated FET 110, as a result of being formed concurrently.

[0014] A gate dielectric layer 148 of the planar FET 112 is disposed at the top surface 108 of the substrate 102, extending from the drain contact region 144 to an n-type source 150 of the planar FET 112 abutting the body 146 opposite from the drift region 142. The gate dielectric layer 148 is disposed over a portion of the drift region 142 between the drain contact region 144 and the body 146, and over a portion of the body 146 between the drift region 142 and the source 150. The planar FET 112 includes a gate 152 disposed over the gate dielectric layer 148, extending from the source 150 to a position proximate to the drain contact region 144.

[0015] The planar FET 112 may also include a p-type body contact region 154 disposed in the substrate 102 in the body 146.
Gate sidewall spacers 156 may be disposed on side surfaces of the gate 152. The metal silicide 140, if present on the field-plated FET 110, may be disposed on the drain contact region 144, the source 150, and the body contact region 154. The planar FET 112 may have a drain-centered configuration or other configuration.

[0016] FIG. 2A through FIG. 2K are cross sections of the integrated circuit of FIG. 1, depicting successive stages of an example method of formation. Referring to FIG. 2A, the substrate 102 may be formed by starting with a p-type silicon wafer, possibly with an epitaxial layer on a top surface, and forming the n-type buried layer 104 by implanting n-type dopants such as antimony at a dose of 1×10^15 cm^-2 to 1×10^16 cm^-2. A thermal drive process heats the wafer to activate and diffuse the implanted n-type dopants. The p-type layer 106 is formed on the wafer by an epitaxial process with in-situ p-type doping. For example, the epitaxially formed material may be 4 microns to 6 microns thick, advantageously enabled by the relatively shallow drift region 116 of FIG. 1, which is made possible by the self-aligned nature of the field relief oxide 122 of FIG. 1 relative to the drift region 116. The n-type dopants diffuse partway into the epitaxially grown material, so that the n-type buried layer 104 overlaps a boundary between the original silicon wafer and the epitaxially grown material. For example, an average bulk resistivity of the p-type layer 106 may be 1 ohm-cm to 10 ohm-cm. An optional p-type buried layer may be formed in the p-type layer 106 by implanting boron at an energy of, for example, 2 mega-electron volts (MeV) to 3 MeV.

[0017] The field oxide 114 is formed at the top surface 108 of the substrate 102, such as by an STI process or a LOCOS process. An example STI process includes forming a chemical mechanical polish (CMP) stop layer of silicon nitride and a layer of STI pad oxide over the substrate 102. Isolation trenches are etched through the CMP stop layer and the STI pad oxide and into the substrate 102. The isolation trenches are filled with silicon dioxide using a plasma enhanced chemical vapor deposition (PECVD) process using tetraethyl orthosilicate (TEOS), a high density plasma (HDP) process, a high aspect ratio process (HARP) using TEOS and ozone, an atmospheric pressure chemical vapor deposition (APCVD) process using silane, or a sub-atmospheric chemical vapor deposition (SACVD) process using dichlorosilane. Excess silicon dioxide is removed from over the CMP stop layer by an oxide CMP process. The CMP stop layer is subsequently removed, leaving the field oxide 114. An example LOCOS process includes forming a silicon nitride mask layer over a layer of LOCOS pad oxide over the substrate 102. The silicon nitride mask layer is removed in areas for the field oxide 114, exposing the LOCOS pad oxide. Silicon dioxide is formed in the areas exposed by the silicon nitride mask layer by thermal oxidation, to form the field oxide 114. The silicon nitride mask layer is subsequently removed, leaving the field oxide 114 in place.

[0018] A layer of pad oxide 158 is formed at the top surface 108 of the substrate 102. For example, the pad oxide 158 may be 5 nanometers to 25 nanometers thick, and may be formed by thermal oxidation or by any of several chemical vapor deposition (CVD) processes. A layer of oxide mask 160 is formed over the layer of pad oxide 158.
For example, the layer of oxide mask 160 may include silicon nitride, formed by a low pressure chemical vapor deposition (LPCVD) process using dichlorosilane and ammonia. Alternatively, silicon nitride in the layer of oxide mask 160 may be formed by decomposition of bis(tertiary-butyl-amino)silane (BTBAS). Other processes to form the layer of oxide mask 160 are within the scope of this example. For example, the layer of oxide mask 160 may be around 1 to 2 times the thickness of the field relief oxide 122 of FIG. 1.

[0019] An etch mask 162 is formed over the layer of oxide mask 160 which exposes an area for the field relief oxide 122 of FIG. 1 in the area for the field-plated FET 110, and exposes an area for implanting the drift region 142 of FIG. 1 in the area for the planar FET 112. The etch mask 162 may include photoresist formed by a photolithographic process, and may include hard mask material such as amorphous carbon, and may include an anti-reflection layer such as an organic bottom anti-reflection coat (BARC). The exposed area for the field relief oxide 122 in the area for the field-plated FET 110 has lateral dimensions that are sufficiently wide so that, after etching the layer of oxide mask 160, a central portion of the etched area remains clear after formation of dielectric sidewalls. The exposed area for implanting the drift region 142 in the area for the planar FET 112 has a width sufficiently narrow so that, after etching the layer of oxide mask 160, the exposed area for implanting the drift region 142 remains blocked by the dielectric material used to form the dielectric sidewalls.

[0020] Referring to FIG. 2B, the layer of oxide mask 160 is removed in the areas exposed by the etch mask 162, exposing the layer of pad oxide 158. A portion of the pad oxide 158 may also be removed in the areas exposed by the etch mask 162. Removing the layer of oxide mask 160 in the area for the field-plated FET 110 forms a first opening 164 in the layer of oxide mask 160. Removing the layer of oxide mask 160 in the area for the planar FET 112 forms a second opening 166 in the layer of oxide mask 160. Lateral dimensions 168 of the first opening 164 are sufficiently wide so that a central portion of the first opening 164 remains clear after formation of dielectric sidewalls. For example, in a version of this example in which the dielectric sidewalls are formed by deposition of a conformal layer that is 80 nanometers to 100 nanometers thick, the lateral dimensions 168 are greater than about 350 nanometers. A width 170 of the second opening 166 is sufficiently narrow so that the second opening 166 remains blocked by the dielectric material used to form the dielectric sidewalls. To attain a desired amount of dielectric material in the second opening, the width 170 of the second opening 166 may be less than 2.5 times a thickness of the dielectric layer subsequently formed to provide the dielectric sidewalls in the first opening 164. For example, in the version of this example described hereinabove in which the dielectric sidewalls are formed by deposition of a conformal layer that is about 80 nanometers thick, the width 170 is less than about 200 nanometers. The layer of oxide mask 160 may be removed by a wet etch, such as an aqueous solution of phosphoric acid, which undercuts the etch mask 162 as depicted in FIG. 2B. Alternatively, the layer of oxide mask 160 may be removed by a plasma etch using fluorine radicals, which may produce less undercut.
The etch mask 162 may optionally be removed after etching the layer of oxide mask 160, or may be left in place to provide additional stopping material in a subsequent ion implant step.

[0021] Referring to FIG. 2C, n-type dopants 172 are implanted into the substrate 102 in the areas exposed by removing the layer of oxide mask 160, including the first opening 164 in the area for the field-plated FET 110 and the second opening 166 in the area for the planar FET 112, advantageously self-aligning the subsequently-formed drift region 116 of FIG. 1 to the subsequently-formed field relief oxide 122 of FIG. 1. For example, the n-type dopants 172 may include phosphorus 174, which may be implanted at a dose of 1×10^12 cm^-2 to 4×10^12 cm^-2 at an energy of 150 kilo-electron volts (keV) to 225 keV, and arsenic 176, which may be implanted at a dose of 2×10^12 cm^-2 to 6×10^12 cm^-2 at an energy of 100 keV to 150 keV. The implanted phosphorus 174 forms a first phosphorus implanted region 178 under the first opening 164 and a second phosphorus implanted region 180 under the second opening 166. Similarly, the implanted arsenic 176 forms a first arsenic implanted region 182 under the first opening 164 and a second arsenic implanted region 184 under the second opening 166. The first phosphorus implanted region 178 and the second phosphorus implanted region 180 are advantageously deeper than the first arsenic implanted region 182 and the second arsenic implanted region 184, to provide graded junctions in the drift region 116 of FIG. 1 in the field-plated FET 110 and the drift region 142 of FIG. 1 in the planar FET 112. Optionally, the phosphorus dopants 174 of the n-type dopants 172 may also include a deep dose of phosphorus which forms a first deep compensating implanted region 186 in the substrate 102 below the first phosphorus implanted region 178 and forms a second deep compensating implanted region 188 in the substrate 102 below the second phosphorus implanted region 180. The deep dose of phosphorus is intended to compensate the p-type layer 106 so as to reduce the net dopant density without counterdoping the p-type layer 106 to n-type. Any remaining portion of the etch mask 162 is removed after the n-type dopants 172 are implanted.

[0022] Referring to FIG. 2D, an optional thermal drive operation may be performed, which activates and diffuses the implanted n-type dopants 172 of FIG. 2C. For example, the thermal drive operation may include a ramped furnace anneal at about 900 °C to 1050 °C for 30 minutes to 60 minutes. The phosphorus dopants in the first phosphorus implanted region 178 of FIG. 2C form a first phosphorus diffused region 190 under the first opening 164, and the phosphorus dopants in the second phosphorus implanted region 180 of FIG. 2C form a second phosphorus diffused region 192 under the second opening 166. Similarly, the arsenic dopants in the first arsenic implanted region 182 of FIG. 2C form a first arsenic diffused region 194 under the first opening 164, and the arsenic dopants in the second arsenic implanted region 184 of FIG. 2C form a second arsenic diffused region 196 under the second opening 166. The first phosphorus diffused region 190 and the second phosphorus diffused region 192 are advantageously deeper than the first arsenic diffused region 194 and the second arsenic diffused region 196. If the first deep compensating implanted region 186 and the second deep compensating implanted region 188 are formed as described in reference to FIG.
2C, the optional thermal drive operation diffuses and activates the phosphorus dopants in the first deep compensating implanted region 186 of FIG. 2C to form a first compensated region 198 in the substrate 102 under and around the first phosphorus diffused region 190, and diffuses and activates the phosphorus dopants in the second deep compensating implanted region 188 of FIG. 2C to form a second compensated region 200 in the substrate 102 under and around the second phosphorus diffused region 192. In lieu of the optional thermal drive operation, the implanted n-type dopants 172 may be activated and diffused during a subsequent thermal oxidation operation to form the field relief oxide 122 of FIG. 1.

[0023] Referring to FIG. 2E, a conformal dielectric layer 202 is formed over the layer of oxide mask 160 and in the first opening 164 in the area for the field-plated FET 110 and in the second opening 166 in the area for the planar FET 112. The conformal dielectric layer 202 may comprise a single layer of dielectric material, or may comprise two or more sub-layers. The conformal dielectric layer 202 may include silicon nitride, silicon dioxide and/or other dielectric material. In the version of this example depicted in FIG. 2E, the conformal dielectric layer 202 may include a thin layer of silicon dioxide 204 formed on the layer of oxide mask 160 and on the pad oxide 158, and a layer of silicon nitride 206 formed on the thin layer of silicon dioxide 204. A thickness of the conformal dielectric layer 202 is selected to provide a desired width of subsequently-formed dielectric sidewalls in the first opening 164 on lateral edges of the layer of oxide mask 160, and to block the second opening 166. For example, the thickness of the conformal dielectric layer 202 may be 80 nanometers to 100 nanometers to provide dielectric sidewalls that are 75 nanometers to 90 nanometers wide. The conformal dielectric layer 202 in a center of the second opening 166 is thicker than the conformal dielectric layer 202 in a center of the first opening 164, as a result of the limited width 170 of the second opening 166. Silicon nitride in the conformal dielectric layer 202 may be formed by an LPCVD process or decomposition of BTBAS. Silicon dioxide in the conformal dielectric layer 202 may be formed by decomposition of TEOS.

[0024] Referring to FIG. 2F, an anisotropic etch process is performed which removes the conformal dielectric layer 202 from a central portion of the first opening 164, leaving dielectric material of the conformal dielectric layer 202 to form dielectric sidewalls 208 in the first opening 164 on lateral edges of the layer of oxide mask 160. For example, a width of the dielectric sidewalls 208 may be 50 percent to 90 percent of the thickness of the conformal dielectric layer 202 as formed in the center of the first opening 164. The anisotropic etch does not remove all of the dielectric material of the conformal dielectric layer 202 from the second opening 166, so that a continuous portion of the dielectric material covers the pad oxide 158 in the second opening 166.

[0025] Referring to FIG. 2G, an optional charge adjustment implant operation may be performed which implants charge adjustment dopants 210 into the substrate 102, using the dielectric sidewalls 208 and the layer of oxide mask 160 as an implant mask.
The implanted charge adjustment dopants 210 form a charge adjustment implanted region 212 in the substrate 102 immediately under the first opening 164; lateral extents of the charge adjustment implanted region 212 are defined by the dielectric sidewalls 208, advantageously self-aligning the subsequently-formed charge adjustment region 134 of FIG. 1 to the subsequently-formed field relief oxide 122 of FIG. 1. The dielectric material of the conformal dielectric layer 202 remaining in the second opening 166 blocks the charge adjustment dopants 210 from the substrate 102 below the second opening 166. In one version of this example, the charge adjustment dopants 210 may be n-type dopants such as phosphorus and/or arsenic. In another version of this example, the charge adjustment dopants 210 may be p-type dopants, such as boron, gallium and/or indium. For example, a dose of the charge adjustment dopants 210 may be 1×10^10 cm^-2 to 1×10^12 cm^-2. The charge adjustment dopants 210 may be implanted at an energy sufficient to place a peak of the implanted dopants 25 nanometers to 100 nanometers into the substrate 102 below the pad oxide 158.

[0026] Referring to FIG. 2H, the field relief oxide 122 is formed by thermal oxidation in the first opening 164 in the area for the field-plated FET 110. Properties of the dielectric sidewalls 208 and the layer of oxide mask 160 affect a length and shape of the tapered profile, that is, the bird's beak, at lateral edges of the field relief oxide 122. Thermal oxide does not form in the second opening 166 in the area for the planar FET 112, because the dielectric material of the conformal dielectric layer 202 remaining in the second opening 166 blocks an oxidizing ambient of the thermal oxidation process. An example furnace thermal oxidation process may include ramping a temperature of the furnace to about 1000 °C in a time period of 45 minutes to 90 minutes with an ambient of 2 percent to 10 percent oxygen, maintaining the temperature of the furnace at about 1000 °C for a time period of 10 minutes to 20 minutes while increasing the oxygen in the ambient to 80 percent to 95 percent oxygen, maintaining the temperature of the furnace at about 1000 °C for a time period of 60 minutes to 120 minutes while maintaining the oxygen in the ambient at 80 percent to 95 percent oxygen and adding hydrogen chloride gas to the ambient, maintaining the temperature of the furnace at about 1000 °C for a time period of 30 minutes to 90 minutes while maintaining the oxygen in the ambient at 80 percent to 95 percent oxygen with no hydrogen chloride, and ramping the temperature of the furnace down in a nitrogen ambient. The temperature profile of the thermal oxidation process diffuses and activates the implanted dopants in the charge adjustment implanted region 212 of FIG. 2G to form the charge adjustment region 134. The temperature profile of the thermal oxidation process also further diffuses the n-type dopants of the first phosphorus diffused region 190, the second phosphorus diffused region 192, the first arsenic diffused region 194 and the second arsenic diffused region 196, and the first compensated region 198 and the second compensated region 200, if present. A majority of the n-type dopants in the first arsenic diffused region 194 are arsenic, and a majority of the n-type dopants in the first phosphorus diffused region 190 are phosphorus.
[0027] Referring to FIG. 2I, the p-type body 120 of the field-plated FET 110 and the p-type body 146 of the planar FET 112 are formed, possibly concurrently. The body 120 and the body 146 may be formed by implanting p-type dopants such as boron at one or more energies, to provide a desired distribution of the p-type dopants. An example implant operation may include a first implant of boron at a dose of 1 x 10^14 cm^-2 to 3 x 10^14 cm^-2 at an energy of 80 keV to 150 keV, and a second implant of boron at a dose of 1 x 10^13 cm^-2 to 3 x 10^13 cm^-2 at an energy of 30 keV to 450 keV. A subsequent anneal process, such as a rapid thermal anneal at 1000 °C for 30 seconds, activates and diffuses the implanted boron.
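The two-step body implant amounts to a short parameter table, captured below as plain data with the total dose window computed for reference. The dictionary keys and the use of (low, high) range tuples are conventions chosen here for illustration, not anything specified in this description.

```python
# Hedged sketch: the example two-step boron body implant and its activation
# anneal recorded as data; dose and energy entries are (low, high) windows.

body_implants = [
    {"species": "B", "dose_cm2": (1e14, 3e14), "energy_keV": (80, 150)},
    {"species": "B", "dose_cm2": (1e13, 3e13), "energy_keV": (30, 450)},
]
anneal = {"type": "rapid thermal anneal", "temp_C": 1000, "time_s": 30}

dose_lo = sum(d["dose_cm2"][0] for d in body_implants)
dose_hi = sum(d["dose_cm2"][1] for d in body_implants)
print(f"total body implant dose window: {dose_lo:.1e} to {dose_hi:.1e} cm^-2")
```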
[0028] A layer of gate dielectric material 214 is formed on exposed semiconductor material at the top surface 108 of the substrate 102, including in the areas for the field-plated FET 110 and the planar FET 112. The layer of gate dielectric material 214 may include silicon dioxide, formed by thermal oxidation, and/or hafnium oxide or zirconium oxide, formed by CVD processes, and may include nitrogen atoms introduced by exposure to a nitrogen-containing plasma. A thickness of the layer of gate dielectric material 214 is selected according to the operating voltages of the field-plated FET 110 and the planar FET 112. A layer of gate material 216 is formed over the layer of gate dielectric material 214 and the field relief oxide 122. For example, the layer of gate material 216 may include polycrystalline silicon, referred to herein as polysilicon, possibly doped with n-type dopants. Other gate materials, such as titanium nitride, in the layer of gate material 216 are within the scope of this example. For example, polysilicon in the layer of gate material 216 may be 300 nanometers to 800 nanometers thick.[0029] A gate mask 218 is formed over the layer of gate material 216 to cover areas for the gate 130 of FIG. 1 of the field-plated FET 110 and the gate 152 of FIG. 1 of the planar FET 112. In this example, the gate mask 218 extends partway over the field relief oxide 122 to cover an area for the field plate 132 of FIG. 1. The gate mask 218 may include photoresist formed by a photolithographic process. The gate mask 218 may also include a layer of hard mask material such as silicon nitride and/or amorphous carbon. Further, the gate mask 218 may include a layer of anti-reflection material, such as a layer of BARC.[0030] Referring to FIG. 2J, a gate etch process is performed which removes the layer of gate material 216 of FIG. 2I where exposed by the gate mask 218, to form the gate 130 of the field-plated FET 110 and to form the gate 152 of the planar FET 112. For example, the gate etch process may be a reactive ion etch (RIE) process using fluorine radicals. The gate mask 218 may be eroded by the gate etch process. After the gates 130 and 152 are formed, the remaining gate mask 218 is removed.[0031] Referring to FIG. 2K, the layer of gate dielectric material 214 of FIG. 2J provides the gate dielectric layer 126 of the field-plated FET 110 and the gate dielectric layer 148 of the planar FET 112. The gate sidewall spacers 138 may be formed on side surfaces of the gate 130 of the field-plated FET 110 by forming a conformal layer of sidewall material, possibly comprising more than one sub-layer of silicon nitride and/or silicon dioxide, over the gate 130 and the top surface 108 of the substrate 102. Subsequently, an anisotropic etch such as an RIE process removes the layer of sidewall material from top surfaces of the gate 130 and the substrate 102, leaving the gate sidewall spacers 138 in place. The gate sidewall spacers 156 on the gate 152 of the planar FET 112 may be formed similarly to, and possibly concurrently with, the gate sidewall spacers 138 of the field-plated FET 110.[0032] The n-type source 128 and n-type drain contact region 118 of the field-plated FET 110 may be formed by implanting n-type dopants such as phosphorus and arsenic, such as at a dose of 1 x 10^14 cm^-2 to 5 x 10^15 cm^-2, into the substrate 102 adjacent to the gate 130 and the field relief oxide 122, followed by an anneal operation, such as a spike anneal or a flash anneal, to activate the implanted dopants. An n-type drain extension portion of the source 128 which extends partway under the gate 130 may be formed before forming the gate sidewall spacers 138 by implanting n-type dopants into the substrate adjacent to the gate 130. The n-type source 150 and n-type drain contact region 144 of the planar FET 112 may be formed similarly to, and possibly concurrently with, the source 128 and drain contact region 118 of the field-plated FET 110.[0033] The p-type body contact region 136 in the body 120 of the field-plated FET 110 may be formed by implanting p-type dopants (e.g., boron), such as at a dose of 1 x 10^14 cm^-2 to 5 x 10^15 cm^-2, into the substrate 102, followed by an anneal operation, such as a spike anneal or a flash anneal, to activate the implanted dopants. The p-type body contact region 136 in the body 146 of the planar FET 112 may be formed similarly to, and possibly concurrently with, the body contact region 136 in the body 120 of the field-plated FET 110.[0034] Forming the drift region 116 to be self-aligned with the field relief oxide 122 may provide a desired low value of the lateral distance 124 by which the gate 130 overlaps the drift region 116, advantageously providing a low gate-drain capacitance. Further, the self-aligned configuration may make the lateral distance 124 controllable from device to device without undesired variability due to unavoidable photolithographic alignment variations, sometimes referred to as alignment errors.[0035] FIG. 3A through FIG. 3F are cross sections of another example integrated circuit containing a field-plated FET, depicted in successive stages of an example method of formation.
In this example, an n-channel field-plated FET is described. An analogous p-channel field-plated FET may be described with appropriate changes in polarities of dopants. Referring to FIG. 3A, the integrated circuit 300 includes a substrate 302 with a p-type layer 306 extending to a top surface 308 of the substrate 302. The p-type layer 306 may be an epitaxial layer on a semiconductor wafer, or may be a top portion of a bulk silicon wafer. The integrated circuit 300 includes the n-channel field-plated FET 310, which in this example has a symmetric drain-centered configuration. The integrated circuit 300 may also optionally include a planar FET, not shown in FIG. 3A through FIG. 3F. In this example, the integrated circuit 300 includes field oxide 314 around an area for the field-plated FET 310. The field oxide 314 is formed by an STI process, as described in reference to FIG. 2A. The STI process uses a layer of STI pad oxide 420 over the top surface 308 of the substrate 302, and a CMP stop layer 422 of silicon nitride over the layer of STI pad oxide 420. In this example, the layer of STI pad oxide 420 and the CMP stop layer 422 are not removed after forming the field oxide 314, and are used to form the field-plated FET 310.[0036] The layer of STI pad oxide 420 and the CMP stop layer 422 extend across the area for the field-plated FET 310. An etch mask 362 is formed over the CMP stop layer 422 which exposes areas for a subsequently-formed field relief oxide in the area for the field-plated FET 310. The etch mask 362 may be formed as described in reference to FIG. 2A. The exposed areas for the field relief oxide have lateral dimensions that are sufficiently wide so that, after etching the CMP stop layer 422, central portions of the etched areas remain clear after formation of dielectric sidewalls.[0037] Referring to FIG. 3B, the CMP stop layer 422 is removed in the areas exposed by the etch mask 362, exposing the layer of STI pad oxide 420, forming openings 364 in the CMP stop layer 422. Lateral dimensions 368 of the openings 364 are sufficiently wide so that central portions of the openings 364 remain clear after formation of dielectric sidewalls. The CMP stop layer 422 may be removed by a plasma etch using fluorine radicals, which may produce very little undercut, as depicted in FIG. 3B. Alternatively, the CMP stop layer 422 may be removed by a wet etch, as described in reference to FIG. 2B.[0038] N-type dopants 372 are implanted into the substrate 302 in the areas exposed by removing the CMP stop layer 422, including the openings 364 in the area for the field-plated FET 310, advantageously self-aligning a subsequently-formed drift region to the subsequently-formed field relief oxide. For example, the n-type dopants 372 may include phosphorus and arsenic as described in reference to FIG. 2C. The implanted n-type dopants 372 form drift implanted regions 424 under the openings 364. Any remaining portion of the etch mask 362 is removed after the n-type dopants 372 are implanted.[0039] Referring to FIG. 3C, dielectric sidewalls 408 are formed in the openings 364 on lateral edges of the CMP stop layer 422, such as described in reference to FIG. 2E and FIG. 2F. Additional sidewalls 426 may be formed over the field oxide 314 on lateral edges of the CMP stop layer 422, concurrently with the dielectric sidewalls 408 in the openings 364. Central portions of the openings 364 are clear after forming the dielectric sidewalls 408. [0040] Referring to FIG.
3D, the field relief oxide 322 is formed by thermal oxidation in the openings 364 in the area for the field-plated FET 310. Properties of the dielectric sidewalls 408 and the CMP stop layer 422 affect a length and shape of lateral edges of the field relief oxide 322. The field relief oxide 322 may be formed by a furnace thermal oxidation process as described in reference to FIG. 2H. The temperature profile of the thermal oxidation process diffuses and activates the implanted n-type dopants in the drift implanted region 424 of FIG. 3C to form a drift region 316 of the field-plated FET 310. The CMP stop layer 422, the dielectric sidewalls 408 and the additional sidewalls 426 are subsequently removed.[0041] Referring to FIG. 3E, an n-type well 428 may optionally be formed in the substrate 302 under the drift region 316, centrally located with respect to the field relief oxide 322. The n-type well 428 may advantageously reduce a drain resistance of the field-plated FET 310 and spread current flow through a central portion of the drain of the field-plated FET 310, providing improved reliability. The n-type well 428 may be formed concurrently with other n-type wells under p-channel metal oxide semiconductor (PMOS) transistors in logic circuits of the integrated circuit 300. A p-type body 320 of the field-plated FET 310 is formed in the substrate 302 abutting the drift region 316. The body 320 may be formed by implanting p-type dopants such as boron, as described in reference to FIG. 2I. A subsequent anneal process activates and diffuses the implanted boron.[0042] The layer of STI pad oxide 420 of FIG. 3D is removed. A gate dielectric layer 326 is formed at the top surface 308 of the substrate 302 adjacent to the field relief oxide 322. The gate dielectric layer 326 may be formed as described in reference to FIG. 2I. A gate 330 of the field-plated FET 310 is formed over the gate dielectric layer 326, extending from proximate the field relief oxide 322 to partway over the body 320. The gate 330 extends over a portion of the drift region 316 between the field relief oxide 322 and the body 320. The gate 330 may be formed as described in reference to FIG. 2I and FIG. 2J.[0043] Gate sidewall spacers 338 are formed on side surfaces of the gate 330, such as described in reference to FIG. 2K. In this example, a gate cap 430 of dielectric material is formed over a top surface of the gate 330. The gate cap 430 and the gate sidewall spacers 338 electrically isolate the top surface and lateral surfaces of the gate 330. The gate cap 430 may be formed by forming a dielectric layer over the layer of gate material before forming the gate mask and performing the gate etch. [0044] Referring to FIG. 3F, an n-type drain contact region 318 is formed in the substrate 302 in the drift region 316 between two opposing portions of the field relief oxide 322. An n-type source 328 is formed in the substrate 302 adjacent to the gate 330 opposite from the drain contact region 318. The drain contact region 318 and the source 328 may be formed as described in reference to FIG. 2K, and may be formed concurrently. An n-type drain extension portion of the source 328 which extends partway under the gate 330 may be formed before forming the gate sidewall spacers 338.[0045] In this example, a field plate 432 is formed immediately over a portion of the field relief oxide 322, extending to the gate 330. The field plate 432 is electrically isolated from the gate 330.
The field plate 432 may be formed by forming a layer of conductive material, such as polysilicon or titanium nitride, over the gate 330 and field relief oxide 322, forming an etch mask over the layer of conductive material to cover an area for the field plate 432, and performing an etch process to define the field plate 432. The integrated circuit 300 may be configured to apply separate bias voltages to the gate 330 and the field plate 432. Forming the field plate 432 to be electrically isolated and separately biasable from the gate 330 may advantageously enable reduction of an electric field in the drift region 316 during operation of the field-plated FET 310 compared to an analogous field-plated FET with a gate overlapping field relief oxide to provide a field plate.[0046] The drift region 316 extends past the field relief oxide 322 a first lateral distance 434 on a first side of the field-plated FET 310, and extends past the field relief oxide 322 a second lateral distance 436 on a second side opposite from the first side. As a result of the drift region 316 being formed in a self-aligned manner with the field relief oxide 322, the first lateral distance 434 is substantially equal to the second lateral distance 436, which may advantageously provide for uniform current distribution through the field-plated FET 310. Forming the drift region 316 to be self-aligned with the field relief oxide 322 may also advantageously provide a desired narrow range of values for the first lateral distance 434 and the second lateral distance 436 which is controllable from device to device without undesired variability due to unavoidable photolithographic alignment variations, sometimes referred to as alignment errors.[0047] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
A device stand for a portable device, comprising a foldable extension leg which supports the portable device at a cable connection instead of directly supporting the portable device itself. In one or more embodiments, the device stand can be connected to a storage device such as a flash drive, or can directly incorporate a storage device into its form.
WHAT IS CLAIMED IS:1. A stand for a portable device, comprising:a body having a top side opposite a bottom side, the bottom side configured to engage with a plug housing; andfirst and second segments coupled to the top side of the body, the first segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along an end of the top side of the body, wherein the second segment is rotatively coupled around a further hinged joint positioned along the second end of the first segment.2. The stand of claim 1, wherein the first and second segments are foldably stored within the top side of the body in response to being rotated around the first and second hinged joints, respectively.3. The stand of claim 2, further comprising:a protrusion extending from the second end of the first segment; and a recess through a portion of the top side of the body, wherein the recess is configured to engage with the protrusion in response to the first and second segments being foldably stored within the top side of the body.4. The stand of claim 3, wherein the folded first and second segments are locked within the top side of the body in response to the protrusion being engaged with the recess.5. The stand of claim 4, wherein the locked first and second segments within the top side of the body are released in response to the protrusion being disengaged from the recess.6. The stand of claim 5, wherein the protrusion is disengaged from the recess in response to the body flexing by applying at least a mechanical pull force to the second end of the first segment to cause the first segment to rotate around the first hinged joint.7. The stand of claim 1, wherein the first and second segments are unfolded around the first and second hinged joints, respectively, to provide balance to the portable device.8. The stand of claim 7, wherein the plug housing encapsulates at least a portion of one of a connection cable or portable storage device.9. The stand of claim 8, wherein the at least portion of one of the connection cable or the portable storage device is engaged to an edge of the portable device such that the unfolded first and second segments provide balance at the edge of the portable device.10. The stand of claim 9, wherein the connection cable comprises at least one of a charging cable or a USB cable and the portable storage device comprises a memory stick.11. A stand for a portable device, comprising:a body having a top side opposite a bottom side, the bottom side configured to attachably engage with a plug housing;a segment coupled to the top side of the body, the segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along the top side of the body;a protrusion extending from the second end of the segment; and a recess through a portion of the top side of the body, wherein the protrusion engages with the recess in response to the segment being foldably stored within the top side of the body.12. The stand of claim 11, wherein the segment is foldably stored within the top side of the body in response to the segment rotating about the hinged joint in a first rotational direction.13. The stand of claim 12, wherein the segment foldably stored within the top side of the body is unfolded in response to the protrusion being disengaged from the recess.14. 
The stand of claim 13, wherein the protrusion is disengaged from the recess in response to the segment rotating about the hinged joint in a second rotational direction opposite the first rotational direction.15. The stand of claim 14, wherein the segment is unfolded around the hinged joint to provide balance to the portable device.16. The stand of claim 15, wherein the plug housing encapsulates at least a portion of one of a connection cable or portable storage device, and the at least portion of one of the connection cable or the portable storage device is engaged to an edge of the portable device such that the unfolded segment provides balance at the edge of the portable device.17. An integrated stand for a portable device, comprising:a storage device configured to store data therein, the storage device having a front end opposite a back end, each of the front end and the back end configured to have at least one connector extruded therethrough;an actuator coupled to the storage device to cause the at least one of the connectors to extrude through at least one of the back end or front end of the storage device; andfirst and second segments coupled to a side of the storage device, the first segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along an end of the side of the storage device, wherein the second segment is rotatively coupled around a further hinged joint positioned along the second end of the first segment.18. The integrated stand of claim 17, wherein the first and second segments are foldably stored proximate the side of the storage device in response to being rotated around the first and second hinged joints, respectively, in a first direction.19. The integrated stand of claim 18, wherein the first and second segments are extended from the side of the storage device in response to being rotated around the first and second hinged joints, respectively, in a second direction opposite the first direction.20. The integrated stand of claim 19, wherein the first and second segments are rotated about the first and second hinged joints, respectively, in the second direction to provide balance to the portable device.21. The integrated stand of claim 20, wherein the at least one extruded connector is engaged at an edge of the portable device such that the extended first and second segments provide balance at the edge of the portable device.22. The integrated stand of claim 17, further comprising:a protrusion extending from the second end of the first segment; and a recess through a portion of the side of the storage device, wherein the recess is configured to engage with the protrusion in response to the first and second segments being foldably stored proximate the side of the storage device.23. The integrated stand of claim 22, wherein the foldably stored first and second segments are locked proximate the side of the storage device in response to the protrusion being engaged with the recess.24. The integrated stand of claim 23, wherein the locked first and second segments within the side of the storage device are released in response to the protrusion being disengaged from the recess.25. The integrated stand of claim 24, wherein the protrusion is disengaged from the recess in response to application of a mechanical pull force to the second end of the first segment to cause the first segment to rotate about the first hinged joint in the second direction.26. 
The integrated stand of claim 17, wherein the actuator takes the form of a mechanical slider positioned proximate the storage device.27. The integrated stand of claim 17, wherein the at least one connector is configured to allow data transmission between the storage device and the portable device.28. The integrated stand of claim 17, wherein the at least one connector is a USB connector or a lightning connector.29. An integrated stand for a portable device, comprising: a storage device configured to store data therein, the storage device having a front end opposite a back end, the front end and the back end configured to have at least one connector extruded therethrough;an actuator coupled to the storage device to cause one of the connectors to extrude through at least one of the back end or front end of the storage device;a segment coupled to a side of the storage device, the segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along the side of the storage device;a protrusion extending from the second end of the segment; and a recess through a portion of the side of the storage device, wherein the recess is configured to engage with the protrusion in response to the segment being folded proximate the side of the storage device.30. The integrated stand of claim 29, wherein the segment is folded proximate the side of the storage device in response to the segment rotating about the hinged joint in a first rotational direction.31. The integrated stand of claim 29, wherein the segment folded proximate the side of the storage device is extended in response to the protrusion being disengaged from the recess.32. The integrated stand of claim 31, wherein the protrusion is disengaged from the recess in response to the segment rotating about the hinged joint in a second rotational direction opposite the first rotational direction.33. The integrated stand of claim 32, wherein the segment is extended to provide balance to the portable device.34. The integrated stand of claim 33, wherein the at least one extruded connector is engaged at an edge of the portable device such that the extended segment provides balance at the edge of the portable device.
FOLDING DEVICE STAND FOR PORTABLE DEVICES
CROSS REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims the benefit under 35 U.S.C. § 119(e) of the earlier filing date of U.S. Provisional Application 62/308,136 entitled "Folding Device Stand for Portable Devices," filed March 14, 2016, which provisional application is hereby incorporated by reference in its entirety for any purpose.
BACKGROUND
[0002] Many portable devices may be used for viewing content. To facilitate viewing, it is often desirable to position the portable device at a viewing angle that provides easy viewing of the portable device. However, it is not easy to balance the portable device for viewing without supporting the center of the device. For example, a device stand may fall over easily when the user touches the screen to play or pause media while using the portable device. Savvy users viewing their portable devices often have to use additional bulky kickstands or hold the device by hand; however, bulky stands may not be suitable for small and confined spaces, and holding the portable device may be tiring.
SUMMARY
[0003] According to one aspect, a stand for a portable device comprises: a body having a top side opposite a bottom side, the bottom side configured to engage with a plug housing; and first and second segments coupled to the top side of the body, the first segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along an end of the top side of the body, wherein the second segment is rotatively coupled around a further hinged joint positioned along the second end of the first segment.[0004] According to another aspect, a stand for a portable device comprises: a body having a top side opposite a bottom side, the bottom side configured to attachably engage with a plug housing; a segment coupled to the top side of the body, the segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along the top side of the body; a protrusion extending from the second end of the segment; and a recess through a portion of the top side of the body, wherein the protrusion engages with the recess in response to the segment being foldably stored within the top side of the body. 
[0005] According to yet another aspect, an integrated stand for a portable device comprises: a storage device configured to store data therein, the storage device having a front end opposite a back end, each of the front end and the back end configured to have at least one connector extruded therethrough; an actuator coupled to the storage device to cause the at least one of the connectors to extrude through at least one of the back end or front end of the storage device; and first and second segments coupled to a side of the storage device, the first segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along an end of the side of the storage device, wherein the second segment is rotatively coupled around a further hinged joint positioned along the second end of the first segment.[0006] According to a further aspect, an integrated stand for a portable device comprises: a storage device configured to store data therein, the storage device having a front end opposite a back end, the front end and the back end configured to have at least one connector extruded therethrough; an actuator coupled to the storage device to cause one of the connectors to extrude through at least one of the back end or front end of the storage device; a segment coupled to a side of the storage device, the segment having a first end opposite a second end, the first end rotatively coupled around a hinged joint positioned along the side of the storage device; a protrusion extending from the second end of the segment; and a recess through a portion of the side of the storage device, wherein the recess is configured to engage with the protrusion in response to the segment being folded proximate the side of the storage device.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.[0008] Figure 1 is a perspective view of a folding device stand, according to one illustrated embodiment.[0009] Figure 2 provides six views of the folding device stand of Figure 1, according to one illustrated embodiment. 
[0010] Figure 3 shows a folding device stand in accordance with an embodiment of the present disclosure in use after one segment of the stand has been unfolded, according to one illustrated embodiment.[0011] Figure 4 shows the folding device stand of Figure 3 in use after two segments of the stand have been unfolded, according to one illustrated embodiment.[0012] Figure 5 shows the folding device stand of Figure 3 in use after three segments of the stand have been unfolded, according to one illustrated embodiment.[0013] Figure 6 shows a perspective view of a folding device stand in accordance with an embodiment of the present disclosure after one segment of the stand has been unfolded, according to one illustrated embodiment.[0014] Figure 7 shows a perspective view of the folding device stand of Figure 6 after two segments of the stand have been unfolded, according to one illustrated embodiment.[0015] Figure 8 shows a perspective view of the folding device stand of Figure 6 after three segments of the stand have been unfolded, according to one illustrated embodiment.[0016] Figure 9 shows the folding device stand of Figure 6 after three segments of the stand have been unfolded, with the third leg segment at a 90 degree angle to the first two unfolded segments, according to one illustrated embodiment.[0017] Figure 10 illustrates how the folding device stand of Figure 6 can be used from a left-handed user's position, according to one illustrated embodiment.[0018] Figure 11 illustrates how the folding device stand of Figure 6 can be used from a right-handed user's position, according to one illustrated embodiment.[0019] Figure 12 is a perspective view of an alternate embodiment of a folding device stand, according to one illustrated embodiment.[0020] Figure 13 provides five different views of the alternate embodiment of the folding device stand of Figure 12, according to one illustrated embodiment.[0021] Figure 14 shows a cross-sectional view of a folding device stand in accordance with an embodiment of the present disclosure, according to one illustrated embodiment.[0022] Figure 15 shows a folding device stand in accordance with an embodiment of the present disclosure attached to a storage device in a folded configuration, according to one illustrated embodiment.[0023] Figure 16 shows the folding device stand of Figure 15 attached to a storage device in an unfolded configuration, according to one illustrated embodiment. 
[0024] Figure 17 shows a folding device stand in accordance with an embodiment of the present disclosure holding a portable device for viewing at different viewing angles, according to one illustrated embodiment.[0025] Figures 18A-18D are illustrated views of a stand to balance a portable device, where the stand is coupleable to at least a portion of a housing of a connection cable, according to one illustrated embodiment.[0026] Figures 19A-19C are three dimensional graphical representations of the stand, including a locking mechanism, according to one illustrated embodiment.[0027] Figure 20A is a top view of an integrated stand, according to one illustrated embodiment.[0028] Figure 20B is a side view of the integrated stand, according to one illustrated embodiment.[0029] Figure 20C is a front view of the integrated stand, according to one illustrated embodiment.[0030] Figure 20D is a rear view of the integrated stand, according to one illustrated embodiment.[0031] Figure 20E is a bottom view of the integrated stand, according to one illustrated embodiment.[0032] Figures 21A-21B are illustrations of an actuator operable to cause at least one connector to protrude from a storage device, according to one embodiment.[0033] Figures 22-23 are three dimensional illustrations of the integrated stand having an extension unit in an unfolded position, according to one illustrated embodiment.
DETAILED DESCRIPTION
[0034] Various examples of embodiments of the invention will now be described. The following description provides specific details for a thorough understanding and enabling description of these examples. One skilled in the relevant art will understand, however, that embodiments of the invention may be practiced without many of these details. Likewise, one skilled in the relevant art will also understand that embodiments may incorporate many other obvious features not described in detail herein. Additionally, some well-known structures or functions may not be shown or described in detail below, so as to avoid unnecessarily obscuring the relevant description.[0035] The terminology used herein is to be interpreted in its broadest reasonable manner, even though it is being used in conjunction with a detailed description of certain specific examples of embodiments of the invention. Indeed, certain terms may even be emphasized below; any terminology intended to be interpreted in any restricted manner will, however, be overtly and specifically defined as such in this Detailed Description section.[0036] Disclosed embodiments are directed to a folding device stand that attaches to a plug housing of a connection cable or a portable storage device, such as a flash drive.[0037] In some embodiments, a device stand includes a foldable extension leg which allows for a small storage size when the device stand is not in use, and which allows for an adjustment to a viewing angle of a portable device.[0038] The folding device stand may be designed to attach to the plug housing of a connection cable for a portable device, such as a charging cord or a USB cord. The folding device stand may also be designed to attach to the outer enclosure of a storage device, such as a thumb drive or memory stick. 
In an alternate embodiment, the folding device stand may be designed to incorporate a storage device directly as an integral component of the device stand.[0039] The folding device stand allows users to view the screen of a portable device, such as a smart phone, tablet computer, or handheld device, without holding the device. This folding device stand may be designed such that it can be used when the portable device is used in a right-handed orientation or a left-handed orientation.
Embodiment One
[0040] Presented herein is a folding device stand or "kickstand" structural concept for use with storage devices and portable device cables. The device may include foldable extension legs, and provides an "A" stand by supporting the cable's connector, instead of balancing or supporting the device itself.[0041] It is not easy to balance a portable device from the connector area instead of supporting the portable device's center area, which is how most device stands currently operate.[0042] There are many portable storage devices and instant charging cables for smart phones, portable devices such as the iPad mini, and other medium size touch pads. Savvy users are watching movies and TV from their portable devices and often have to use additional bulky kickstands or hold the device by hand. Embodiments of the present disclosure may provide a solution to support the device from the cable's connector area. The user can use the folding device stand in a small area without carrying an additional bulky device stand which needs to support the center of the device. [0043] Embodiments of the disclosure include a kickstand that provides balance from the connector side (that is, from a connector built in to an edge of a portable device).[0044] Embodiments of the disclosure are distinguishable from existing device stands in the market today. As previously mentioned, it is not easy to balance the devices without supporting the center of the device. For example, the device stand may fall over easily when the user touches the screen to play or pause media while using the portable devices.[0045] Embodiments of the disclosure include kickstands for any size of device, using a foldable concept which can adjust the angle at which the portable device is positioned in relation to a horizontal surface such as a desk, preventing the need for several different "kickstands" per device.[0046] The folding legs allow the size of the device to be minimized to keep the smallest form factor, instead of carrying a bulky kickstand. The folding legs also allow one folding device stand to be adjusted to work with older phones with smaller screens as well as bigger screen devices such as the iPad mini. As a result, a small kickstand may be used for many sizes of portable devices, or may easily be used in an airplane or on a desk with limited space.[0047] In an alternative embodiment, the folding device stand concept can be extended for USB storage devices. For example, as will be described in greater detail below, in some embodiments, the folding device stand may be used with particular USB storage devices, such as the c20i flash drive device from Lexar, which provides an example folding device stand with a built-in storage device. The folding display stand structure may be small and allows the creation of a built-in USB thumb drive so that the user can store files, such as movies, in the storage device and connect it to a portable device without the need to transfer the files. 
The built-in device stand is convenient, allowing users to watch the screen without holding their device.[0048] The folding display stand works left or right, both ways, and it is not necessary to remove the folding display stand from the device or rotate it to switch back and forth. Left-handed users may prefer to locate their device (and its controls) on the right side to watch the movie, while right-handed users may prefer to locate their device on the left side instead.[0049] The features of the folding device stand may include:
1) A unique concept for balancing a device stand from the connector area.
2) A small form factor compared to similar existing products in the market.
3) An adjustable viewing angle by folding legs, allowing a user to use one kickstand for many sizes of devices.
4) Support for balancing the device from both sides (left or right) without switching or detaching from the device.
5) As will be described in more detail below, a storage device product including a built-in device stand.
[0050] Turning to Figs. 1-11, embodiments of a folding device stand shall be described in additional detail.[0051] Figure 1 is a perspective view of a folding device stand in accordance with an embodiment of the present disclosure as seen in its unfolded configuration. In one embodiment, the folding device stand has a main body, a first leg segment, a second leg segment, and a third leg segment, attached to each other by hinged joints. The first leg segment attaches to the main body by a hinged joint, the second leg segment attaches to the first leg segment by a second hinged joint, and the third leg segment attaches to the second leg segment by a third hinged joint. It would be obvious to one skilled in the art that the number of leg segments could vary without deviating from the inventive concept captured herein.[0052] In one embodiment, the hinged joints may be constructed from roll pins that snap into c-shaped channels between the main body of the folding device stand and the first leg segment, or between successive leg segments. The hinged joints may include features such as slots, detents, or friction pads to hold an unfolded leg in a deployed position. The hinged joints may be made from any appropriate material and method, and the examples discussed herein are intended as examples only.[0053] In one embodiment, the main body of the folding device stand may be designed to attach to or encapsulate the plug housing and/or strain relief collar of a connection cable for a portable device, such as a charging cable or a USB cable.[0054] Figure 2 provides six different views of the folding device stand of Figure 1 in its fully folded (storage) configuration, including a top view, side view, bottom view, front view, back view, and perspective view.[0055] Figure 3 shows a side view of a folding device stand in accordance with an embodiment of the present disclosure in use on a portable device after the first leg segment of the stand has been unfolded. In the example shown in Fig. 3, the portable device is held at a 51 degree angle to the surface on which the portable device is resting when one leg segment is unfolded. However, it would be obvious to one skilled in the art that the exact angle would change based on the size and type of the portable device, the length of the leg segments, and the angle at which the leg segment is held in comparison to the angle of the main body when it is attached to the portable device.[0056] Figure 4 shows a side view of the folding device stand of Figure 3 in use on a portable device after the first and second leg segments of the stand have been unfolded. In the example shown in Fig. 4, the portable device is held at a 64 degree angle to the surface on which the portable device is resting when two leg segments are unfolded. These figures and angles are examples only and not meant to be limiting in any way.[0057] Figure 5 shows a side view of the folding device stand of Figure 3 in use on a portable device after the first, second, and third leg segments of the stand have been unfolded. In the example shown in Fig. 5, the portable device is held at a 70 degree angle to the surface on which the portable device is resting when three leg segments are unfolded. These figures and angles are examples only and not meant to be limiting in any way.
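The trend behind the quoted angles (more unfolded segments give a steeper viewing angle) can be reproduced with simple triangle geometry. The sketch below is entirely illustrative: the rigid A-frame model, the stand body length, the segment length, and the hinge opening angle are all assumptions made here rather than dimensions from this disclosure, so the printed angles only approximate the 51, 64, and 70 degree examples above.

```python
# Illustrative geometry sketch: the stand body (in the plane of the device)
# and the unfolded leg form two sides of a triangle whose third side lies on
# the table, so the device tilt can be estimated with the law of cosines and
# the law of sines. All dimensions below are assumed, not from the text.
import math

BODY_MM = 25.0       # assumed floor-contact-to-hinge length of the stand body
SEGMENT_MM = 30.0    # assumed length of each leg segment
OPENING_DEG = 100.0  # assumed hinge opening angle between body and leg

def tilt_angle_deg(num_segments: int) -> float:
    """Device tilt from horizontal, assuming unfolded segments are collinear."""
    leg = num_segments * SEGMENT_MM
    opening = math.radians(OPENING_DEG)
    # third (floor) side of the body-leg triangle, by the law of cosines
    floor = math.sqrt(BODY_MM**2 + leg**2 - 2 * BODY_MM * leg * math.cos(opening))
    # angle between the stand body (device plane) and the floor, law of sines
    return math.degrees(math.asin(leg * math.sin(opening) / floor))

for n in (1, 2, 3):
    print(f"{n} segment(s) unfolded: ~{tilt_angle_deg(n):.0f} degrees")
```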
[0058] Figure 6 shows a perspective view of a folding device stand in accordance with an embodiment of the present disclosure in use on a portable device after one leg segment of the stand has been unfolded.[0059] Figure 7 shows a perspective view of the folding device stand of Figure 6 in use on a portable device after two leg segments of the stand have been unfolded.[0060] Figure 8 shows a perspective view of the folding device stand of Figure 6 in use on a portable device after three leg segments of the stand have been unfolded.[0061] Figure 9 shows a perspective view of the folding device stand of Figure 6 after three segments of the stand have been unfolded, with the third leg segment at a 90 degree angle to the first two unfolded segments. This configuration allows for additional stability.[0062] Figure 10 illustrates how the folding device stand of Figure 6 can be used from a left-handed user's position. Figure 11 illustrates how the folding device stand of Figure 6 can be used from a right-handed user's position.[0063] Additional information on an alternate embodiment of the folding device stand for use with a storage device (such as a flash drive) is provided in the following section.
Alternate Embodiment with Storage Device
[0064] Presented herein is a folding device stand or "kickstand" structural design concept incorporating a storage device which extends the memory of a smartphone device and charging functional cable. The examples and drawings provided herein are based on the c20i flash drive product by Lexar. However, any type of storage device, cable, or other product that plugs into a connector on the edge of a portable device could be used without departing from the scope of the present disclosure.[0065] This alternate embodiment is a small form factor foldable extension leg which provides an "A" device stand by supporting a flash memory drive plugged into a portable device, instead of balancing or supporting the portable device itself. The balancing point is on the memory device's connector, which is unique compared to other "kickstand" concepts.[0066] The folding device stand provides a solution to support the portable device from the cable's connector area of a flash drive plugged into the portable device. 
The user can use the folding device stand in an airplane or small area without carrying an additional bulky kickstand which needs to support the center of the device.[0067] Additional detail on the embodiments described above is provided in the discussion of the drawings in the following paragraphs. Figures 12-17 describe an embodiment based on a storage device for a portable device.[0068] In an alternate embodiment, the main body of the folding device stand may be designed to attach to or encapsulate the enclosure of a separate portable storage device, such as a thumb drive. In yet another alternate embodiment, the main body of the folding device stand may directly incorporate a storage device as an integral component of the folding device stand.[0069] Turning to Figs. 12-17, an alternate embodiment of the folding device stand utilizing additional storage shall be described in additional detail.[0070] Figure 12 is a perspective view of an alternate embodiment of the folding device stand designed to work with a separate storage device. In this embodiment, a split in the device creates a channel through which a cord or cable can be fed to facilitate installation of the folding device stand on a storage device. The openings on either end of the folding device stand may be different sizes to accommodate both the device connector (for example, a lightning connector for an iPhone) and a standard serial connector (for example, a USB or similar connector).[0071] Figure 13 provides five different views of the alternate embodiment of the folding device stand of Figure 12. Structural ribs 100 are incorporated into the main body to provide structural strength. Inside the USB connector (also shown as 103), side ribs 102 and bottom rib 101 are used to guide the insertion of a USB connector from a storage device when the folding device stand with integrated storage is folded for travel or storage. Protrusions 104 along an edge of a channel 105 may be provided to assist with containing a cable in the cable guide. A roll pin 106 may be used to provide a hinge for the legs of the folding device stand. The device connector opening on the folding device stand may offer additional features such as small nubs 107 which put additional force and friction on the device connector to provide an interference fit when the device connector is in place, preventing the folding device stand from slipping out from the device connector.[0072] In some embodiments, the cord of a storage device (or any cable with a connector capable of plugging into a portable device) could be pushed into the channel such that the cable's device connector extends out through and past the device connector opening in the folding device stand. Then, a user can pull on the end of the cord that is extending out of the USB connector opening in the folding device stand until the device connector on the cable is pulled back such that the housing of the device connector seats itself just inside the device connector opening on the folding device stand. The inside of the device connector opening may have a built-in feature such as a physical stop to prevent the device connector from being pulled inside and possibly through the body of the folding device stand. An example of a physical stop is shown in Figure 13.[0073] Figure 14 shows a cross-sectional view of a folding device stand in accordance with an embodiment of the present disclosure in its folded configuration, with no storage device present. 
A hinge may be formed from a c-shaped channel 108 that receives a pin or axle to allow rotation of a leg of the folding device stand. The c-shaped channel 108 may be configured to provide rotational resistance to the pin when the leg is rotated. For example, arms of the c-shaped channel may have portions that decrease the radius of the channel at the ends of the arms, along with the spring force of resilient arms, to apply a friction force to the pin to provide rotational resistance. In some embodiments, the pin and channel may be configured to form detents to provide positions along the rotation of the arm at which the arm is held open.[0074] Figure 15 shows a folding device stand in accordance with an embodiment of the present disclosure attached to a storage device (for example, the c20i flash drive by Lexar) in a folded configuration. The USB end 109 of the storage device can be inserted into the USB connector of the folding device stand for storage. The USB end 109 may include a semiconductor device for storing data. The device connector end 110 is inserted into the folding device stand and sticks out through the device connector opening on the folding device stand. The device connector end 110 sticks out far enough to allow users to use the folding device stand without removing their smartphone case, because the offset distance allows for various phone case thicknesses. [0075] Figure 16 shows the folding device stand of Figure 15 attached to a storage device in an unfolded configuration. A cable 111 connects the device connector to the USB connector.[0076] Figure 17 shows a folding device stand in accordance with an embodiment of the present disclosure holding two types and sizes of portable devices, showing how the size of the portable device can affect the viewing angle of the portable device. For example, a first viewing angle 112 is provided with a first portable device, and a second viewing angle 113 is provided with a second portable device.[0077] Figs. 18A-18D show illustrated views of a stand 1800 to balance a portable device 1805, where the stand 1800 is attachable to at least a portion of a housing 1810 of a connection cable 1815, according to one illustrated embodiment.[0078] The stand 1800 includes a body 1802 and an extension unit 1804 coupled to the body 1802. The body 1802 may comprise a first side 1820 opposite a second side 1825. The first side 1820 includes a receptacle formed to engage with the housing 1810 of the connection cable 1815. The receptacle may be any shape or size to accommodate the housing 1810 of any type of connection cable 1815 or storage device. For example, the receptacle may be sized to engage with the housing 1810 of a lightning connector or the housing 1810 of a flash drive. As illustrated in Figs. 18B-18C, the first side of the stand 1800 engages with the housing of the connection cable 1815 in response to inserting the connection cable 1815 into the receptacle and then laterally sliding the connection cable 1815 until the housing 1810 is at least partially encapsulated by the receptacle.[0079] The extension unit 1804 comprises a plurality of segments (illustrated as 1830a, 1830b, 1830c and collectively referenced herein as 1830) that are foldable about a plurality of hinged joints (illustrated as 1835a, 1835b, 1835c and collectively referenced herein as 1835), respectively. The plurality of segments 1830 may be coupled to the second side 1825 of the body 1802. As illustrated in Figs. 
18A-C, a first hinged joint 1835a of the plurality of hinged joints 1835 may be positioned along an end of the second side 1825 of the body 1802. The first segment 1830a of the plurality of segments 1830 may have a first end rotatively coupled about the first hinged joint 1835a.[0080] Furthermore, the second and third segments 1830b, 1830c may be rotatively coupled about the second and third hinged joints 1835b, 1835c, respectively. The second hinged joint 1835b may be positioned along an end of the first segment 1830a, while the third hinged joint 1835c may be positioned along an end of the second segment 1830b. It will be understood by those of ordinary skill in the art that any number of segments and hinges may be employed to implement the extension unit 1804. For example, a single segment and corresponding hinged joint may be employed, as well as multiple segments interconnected by respective hinged joints.[0081] The plurality of segments 1830 may be in a folded position (as illustrated in Figs. 19A-19C) or may be in an extended position (as illustrated in Figs. 18A and 18D). The plurality of segments 1830 may be folded in response to being rotated about the plurality of hinged joints 1835, respectively, in a first direction 1860. In one example, the third segment 1830c, located in the outermost position away from the second side 1825 of the body 1802, may be folded within the second segment 1830b before the second segment 1830b is folded within the first segment 1830a. It will be understood by those of ordinary skill in the art that, regardless of the number of segments, the outermost segment may typically be folded within a preceding segment prior to the preceding segment being itself folded within its preceding segment.[0082] As illustrated in Figs. 1-2, 12-14, and 19A-19C, in response to being in the folded position, the plurality of segments 1830 may be foldably stored within the second side 1825 of the body 1802 of the stand 1800. Foldably storing the plurality of segments 1830 within the body 1802 of the stand 1800 may be advantageous in assuring a minimal form factor of the stand 1800.[0083] As illustrated in Figs. 6-8, the plurality of segments 1830 may be extended from the folded and stored position in response to being rotated about the plurality of hinged joints 1835, respectively, in a second direction 1850. The second direction 1850 is opposite the first direction 1860. In one example, the first segment 1830a, having the second and third segments 1830b, 1830c folded therein, may be unfolded from the second side of the body 1802 before the second segment 1830b is unfolded. Additionally, the second segment 1830b may be unfolded from within the first segment 1830a before the third segment 1830c is unfolded.
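The fold and unfold ordering just described is last-in, first-out: the outermost segment is the first to be folded away and the last to be deployed. The minimal sketch below encodes that ordering using the reference numerals from the text; the function names are invented for illustration.

```python
# Minimal sketch of the segment ordering described above: the outermost
# segment (1830c) folds first and unfolds last, like a stack.

SEGMENTS = ["1830a", "1830b", "1830c"]  # body-side (innermost) to outermost

def fold_order(segments: list[str]) -> list[str]:
    """Each outer segment folds into its predecessor before that predecessor
    folds, so folding proceeds from the outermost segment inward."""
    return list(reversed(segments))

def unfold_order(segments: list[str]) -> list[str]:
    """Unfolding reverses the fold order: the body-side segment swings out
    first, then each nested segment in turn."""
    return list(segments)

print("fold:  ", " -> ".join(fold_order(SEGMENTS)))
print("unfold:", " -> ".join(unfold_order(SEGMENTS)))
```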
[0084] The plurality of segments 1830 are extended to provide balance to the portable device 1805. Furthermore, the connection cable 1815 may be coupled to an edge of the portable device 1805. Because the stand 1800, having the extended plurality of segments 1830, engages with the housing 1810 of the connection cable 1815, the plurality of segments 1830 provide balance from the edge of the portable device.[0085] The unfolded or extended plurality of segments 1830 may be advantageous in adjusting an angle at which the portable device is positioned relative to a horizontal surface, such as, for example, a desk. A desired angle of the portable device may be achieved by, for example, unfolding or extending respective ones of the plurality of segments 1830 to a desired position. It will be appreciated by those of ordinary skill in the art that the greater the number of the plurality of segments 1830 to be adjusted, the greater the granularity in achieving the desired angle. The adjustability of the angle may be further advantageous to allow for operation with various sizes of portable devices. For example, a large-sized portable device may employ a longer extension unit 1804 to achieve a desired balance angle, while a small-sized portable device may employ a short extension unit 1804 to achieve a desired balance angle.[0086] Figs. 19A-19C show three dimensional graphical representations of the stand 1800, including a locking mechanism embedded within the stand 1800, according to one illustrated embodiment.[0087] The locking mechanism comprises a protrusion 1905 configured to engage with a recess 1910. The protrusion 1905 may extend from the end of the first segment 1830a, which includes the second hinged joint 1835b. The recess 1910 may extend through a further end of the second side 1825 of the body 1802 of the stand 1800. The further end is positioned opposite the end of the second side 1825 of the body 1802 having the first hinged joint 1835a.[0088] The protrusion 1905 may be configured to engage with the recess 1910 in response to the plurality of segments 1830 being in the folded position. In particular, when the second segment 1830b and the third segment 1830c are folded proximate the first segment 1830a, the first segment 1830a may rotate in the first rotational direction 1860 until the protrusion 1905 engages with the recess 1910. The engagement of the protrusion 1905 with the recess 1910 forms a lock to substantially prevent the plurality of segments 1830 from freely rotating about the respective plurality of hinged joints 1835 and becoming extended.[0089] The protrusion 1905 may be disengaged from the recess 1910 in response to a mechanical pull force F being applied to the extension unit 1804. In particular, the pull force F may be applied proximate the second hinged joint 1835b to cause the further end of the second side 1825 of the body 1802 of the stand 1800 to flex enough to release the protrusion 1905 from the recess 1910.[0090] It will be understood by those of ordinary skill in the art that the locking mechanism may comprise any other fastening technique known in the art. For example, the locking mechanism may include more than one protrusion or even more than one recess that engages with the protrusions.
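The protrusion-and-recess lock can be summarized as a small state model. The sketch below is illustrative only: the class and attribute names are invented, and it simply encodes the behavior described above, in which folding seats protrusion 1905 in recess 1910 and a pull force near the second hinged joint flexes the body to release it.

```python
# Hedged state sketch of the locking mechanism described above; names are
# invented for illustration.

class StandLock:
    def __init__(self) -> None:
        self.folded = True
        self.locked = True  # protrusion 1905 seated in recess 1910

    def apply_pull_force(self) -> None:
        """Pull force F near hinged joint 1835b flexes the body end,
        disengaging the protrusion from the recess."""
        self.locked = False

    def extend(self) -> None:
        """Segments may only rotate out once the lock is released."""
        if self.locked:
            raise RuntimeError("segments locked: protrusion still engaged")
        self.folded = False

    def fold(self) -> None:
        """Folding the segment stack re-engages the protrusion and recess."""
        self.folded = True
        self.locked = True

stand = StandLock()
stand.apply_pull_force()
stand.extend()
print("extended and unlocked:", not stand.folded and not stand.locked)
```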
The storage device 2005 may be coupled to the portable device 1805 via the at least one connector 2020, 2025. The at least one connector may, for example, be at least one of a lightning cable connector 2020 or a USB cable connector 2025 embedded within the storage device 2005.

[0094] The storage device 2005 comprises a front end 2030 opposite a back end 2035. The front end 2030 and back end 2035 of the storage device 2005 may be configured to have the at least one connector 2020, 2025 extruded therethrough. The extruded connector 2020, 2025 may be coupled to the portable device 1805, as illustrated in Fig. 23. In particular, the at least one connector 2020, 2025 may engage with a connector port positioned at the edge of the portable device 1805. Engaging at the edge of the portable device 1805 allows the extended plurality of segments 1830 to provide balance at the edge of the portable device 1805.

[0095] The extension unit 1804 may be rotatively coupled about the first hinged joint 1835a positioned proximate the bottom side of the storage device 2005. The plurality of segments 1830 may be in a folded position in response to rotation about the respective hinged joints 1835 in the first direction 1860. Additionally, the plurality of segments 1830 may be in the extended position in response to rotation about the respective hinged joints 1835 in the second direction 1850. In one embodiment, the extension unit 1804 rotatively coupled proximate the storage device 2005 may employ the locking mechanism described above.

[0096] Figs. 21A-21B show illustrations of the actuator 2015 causing protrusion of the at least one connector 2020, 2025, while Figs. 22-23 show illustrations of the integrated stand 2000 with the extruded connector 2020, 2025 and unfolded extension unit 1804, according to one illustrated embodiment.

[0097] The actuator 2015 may be a mechanical slider coupled to the at least one connector 2020, 2025. A user may cause the actuator 2015 to slide and extrude the at least one connector 2020, 2025 from the front end 2030 or the back end 2035. For example, in response to the actuator 2015 being shifted toward the back end 2035 of the storage device 2005, the connector 2025 (e.g., USB connector) is extruded through the back end 2035. On the other hand, in response to the actuator 2015 sliding toward the front end 2030 of the storage device 2005, the connector 2020 (e.g., lightning connector) is extruded through the front end 2030. As illustrated in Fig. 23, the extruded connector 2020 may engage with the connection port at the edge of the portable device 1805. Engaging at the edge of the portable device 1805 allows for the extended plurality of segments 1830 to balance the portable device 1805 from the edge. As discussed above, the plurality of segments 1830 rotatively coupled about the respective plurality of hinged joints 1835 may be at least partially unfolded to balance various sizes of portable devices 1805.

[0098] Having described some embodiments of the invention, additional embodiments will become apparent to those skilled in the art to which it pertains.
Specifically, although reference was made throughout the specification and drawings to the stand 1800 being coupled to a connection cable including a connector housing, such as a lightning connector or USB connector, it will be appreciated that the stand 1800 may also be coupleable to other connector types, such as storage devices including a connector housing such as micro-USB, USB type B, USB type C, or serial port, to name a few. The embodiment describing and illustrating the stand 1800 coupled to the lightning connector housing was merely to convey the functionality and various aspects of the stand 1800 when being leveraged to balance an iPhone or iPad product. Of course, embodiments described above are not limited to APPLE products and are applicable to any portable device on the market.

[0099] The dimensions, configurations, and angles shown are meant as examples only, and are not meant to be limiting in any way.

[0100] "Connection cable" refers to any cable or cord, such as a charging cable or a connector to a storage device (e.g., flash drive), where there is a plug housing on at least one end of the connection cable.

[0101] "Storage device" refers to any device that is configured to store data or a charge therein. The storage device may be disposed within the plug housing of the "connection cable." Additionally, the storage device may be an external charger configured to charge a portable device.

[0102] "Integrated stand" refers to the combination of the extension unit 1804 with the storage device. As discussed above, the extension unit 1804 is effectively integrated with the storage device by being rotatively coupleable about the first hinged joint 1835a.

[0103] "Extension unit" refers to the plurality of segments 1830 that are foldable and extendable about the respective hinged joints 1835. Although multiple segments are disclosed and illustrated herein, it will be appreciated that an extension unit having a single segment, for example having only the first segment 1830a, is well within the scope of embodiments described herein. It will be noted that having multiple segments is advantageous in providing balancing to portable devices of variable sizes.

[0104] Various different "locking mechanism" schemes are within the scope of the embodiments described above and are not limited to any specific locking mechanism scheme found in the above description and drawings. For example, the locking mechanism may comprise multiple protrusions and/or multiple recesses. Additionally, a loop and fastener system (e.g., VELCRO-type loop and fastener), as well as a locking system employing magnets or even a latch system, are well within the scope of embodiments described above. Furthermore, the locking mechanism may be applicable to all embodiments of the invention described herein and is not limited to a particular embodiment.

[0105] Additionally, various different schemes of the actuator 2015 are within the scope of the embodiments described above and are not limited to the mechanical slider scheme described above.
For example, the actuator 2015 may be a push-button rather than a slider, or even an electronic configuration such as a touch sensor, electric switch, or any other electrical configuration to cause the at least one connector to protrude from the storage device.

[0106] While the particular methods, devices and systems described herein and described in detail are fully capable of attaining the above-described objects and advantages of embodiments of the invention, it is to be understood that these are example embodiments of the invention and are thus representative of the subject matter which is broadly contemplated by the present disclosure, that the scope of the present disclosure fully encompasses other embodiments which may become obvious to those skilled in the art, and that the scope of the present disclosure is accordingly to be limited by nothing other than the appended claims, in which reference to an element in the singular means "one or more" and not "one and only one", unless otherwise so recited in the claim.

[0107] It will be appreciated that modifications and variations of embodiments of the invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of embodiments of the invention.
A device to validate security credentials is disclosed. The device comprises a credential validation module to recalculate security credentials received in a datagram and to determine if the security credentials are valid. The device may also include a parser to extract the security credentials from the payload data of the received datagram, and a memory to store validated credentials for further use.
WHAT IS CLAIMED IS:

1. A device, comprising: a credential validation module to recalculate security credentials received in a datagram and to determine if those security credentials are valid.

2. The device of claim 1, wherein the device further comprises a parser to extract the security credentials from the payload data of the received datagram.

3. The device of claim 2, wherein the parser and the credential validation module reside together.

4. The device of claim 1, wherein the device further comprises a memory to store the security credentials.

5. The device of claim 1, wherein the credential validation module further comprises an arithmetic logic unit and a comparator.

6. The device of claim 1, wherein the security credentials are one of the group comprised of: a token, a digital signature, a cryptographic key, and a digital certificate.

7. The device of claim 1, wherein the security credentials further comprise a digital signature in compliance with the simple object access protocol.

8. A method of validating security credentials, the method comprising: receiving a datagram including security credentials; recalculating a representation of the security credentials; and comparing the representation of the security credentials to a provided representation to determine if the security credentials are valid.

9. The method of claim 8, wherein the method further comprises parsing payload data in the datagram to obtain the security credentials.

10. The method of claim 8, wherein the method further comprises storing the security credentials.

11. The method of claim 8, wherein the security credentials further comprise a digesting method, a credential to be verified and a credential value.

12. The method of claim 11, wherein the credential to be verified further comprises a digital signature in compliance with simple object access protocol.

13. The method of claim 11, wherein the credential value further comprises a digital signature value in compliance with simple object access protocol.

14. A system, comprising: a credential generator to provide outgoing security credentials; a port to allow transmission of datagrams to other devices, wherein the transmission may include the outgoing security credentials; and a credential validation agent to validate incoming credentials received through the port.

15. The system of claim 14, wherein the outgoing security credentials further comprise one of the group comprised of public and private key pairs, tokens, digital certificates, and digital signatures.

16. The system of claim 14, wherein the system further comprises a memory operable to store the incoming credentials.

17. The system of claim 14, wherein the system further includes a parser to extract credentials received in datagrams through the port.
HARDWARE-ASSISTED CREDENTIAL VALIDATION

BACKGROUND

The explosion in web-based services has led to an increased need for security, especially in financial transactions. An interaction between a vendor and a financial institution across a network offers opportunities for malicious interference from hackers, such as 'spoofing' or outright identity theft, as examples. When a user purchases a product from a vendor, the user sends sensitive financial information to the vendor. The vendor then validates the financial information with the financial institution and accepts the user's order, as an example. During this transaction, the user's financial information may be transmitted through several network links. Hackers may intercept this information, or a hacker may assume an involved entity's identity and either misappropriate the information or attempt to enter some of the other involved entity's sites. These are just examples of some problems that may occur during a transaction with which most users would be familiar, but they demonstrate the problems inherent in such a transaction. Typically, however, there are many transactions or transfers of information that occur across the Internet or similar networks that do not involve consumers' information directly. Financial institutions may transfer information back and forth, producers and their suppliers may transfer order information, purchase order specifics, etc. All of these transactions need to be secure, or these entities become vulnerable to attack.

In addition to the growing number of transactions involving confidential information, there is a movement towards interoperability. Currently, there are several different kinds of devices that use the Internet to communicate. True interoperability would allow these different platforms to access services, objects and servers in a platform-independent manner. For example, the Simple Object Access Protocol (SOAP) is a protocol that acts as the glue between heterogeneous software components. It offers a mechanism for bridging competing technologies in a standard way. The main goal of SOAP is to facilitate interoperability. However, the increase in interoperability may lead to even easier spoofing and misappropriation of partners' identities in network transactions.

In response to these types of problems, many entities such as vendors and banks have instituted security procedures. For example, the HyperText Transfer Protocol (HTTP) has authentication measures such as the secure socket layer (SSL), which can be used by most web browsers to employ a key to encrypt and decrypt information transmitted over the Internet (or any network) between partners in a secure transaction. Other examples include the use of symmetric keys, asymmetric keys, session keys, tokens or other types of security credentials. An initiating partner sends its security credentials to a receiving partner. The receiving partner then checks any incoming messages with the security credentials to ensure that each message it receives from the sending partner has credentials that match. Credentials may include a certificate, a token or a signature.

Currently, these credentials are implemented and verified in software. This is not very efficient and may still be subject to manipulation. For example, keys stored in a file system are typically managed by software applications. During the processing of the software application, the keys may be exposed.
Similarly, if the keys are stored in a database, they may be exposed after they are stored.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention may be best understood by reading the disclosure with reference to the drawings, wherein:

Figure 1 shows a network over which a transaction may occur.

Figure 2 shows an embodiment of a system with credential validation.

Figure 3 shows an embodiment of a credential validation module.

Figure 4 shows a flowchart of an embodiment of a method to validate security credentials.

Figure 5 shows a flowchart of an embodiment of a method to validate digital signatures using the Simple Object Access Protocol.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Figure 1 shows an environment in which services are provided and transactions executed between two partners in an electronic commerce environment. In this example, Partner A 10 has confidential information related to transactions occurring on its web site, such as an on-line ordering system. Partner B 12 is a supplier of Partner A from which Partner A wishes to purchase parts in order to manufacture goods to fill the orders received from its site. Partner A will transmit a purchase order to Partner B. The purchase order may have sensitive information in it such as the financial institutions involved, legal documents, credit information, competitive pricing, account numbers and routing information that allows Partner B to confirm the purchase order. Competitive information, such as the number of units and pricing of a specific part, could also be transmitted that would allow competitors to gain unfair advantage of either Partner A or Partner B.

Currently, the data transmitted from Partner A to Partner B would more than likely be transmitted across the Internet. However, while the Internet will be used here as an example, it is not intended that the application or scope of the invention be limited in any way. The network could be any distributed network in which data transmitted from one endpoint to another may make intervening hops. Similarly, the transmitted data could take many forms other than packets in an Internet Protocol (IP) network. For that reason, the discrete pieces of data transmitted will be referred to as datagrams.

As shown in Figure 1, Partner A's transmission makes 5 intervening hops between endpoint 10 and endpoint 12. For example, a hop could include a hop to an intermediate server at a financial organization, a desktop in a credit services bureau, or some third-party supplier to Partner A. Any one of these hops could be a point of attack for a hacker to assume the other partner's identity. For example, an attacker could assume Partner B's identity and garner sensitive financial data on Partner A that could be manipulated. Alternatively, an attacker could assume Partner A's identity and garner information about Partner B, or even steal the parts being ordered by causing Partner B, which assumes that it is dealing with the actual Partner A, to ship parts to the attacker instead.

Current implementations of security protocols institute software processes at either end to confirm that the other partner is really the other partner. Software is inherently vulnerable to being 'fooled' or spoofed, as well as requiring often unacceptable system overhead to process the security credentials of the other party.
If an attacker knows how a particular software package used for security validates credentials, that hacker could figure out ways to steal or recreate security credentials.

Figure 2 shows an embodiment of a system in which security credentials are validated by the hardware, rather than by a software process. The system 20 of Figure 2 includes a security credential validation module 30. In this particular system embodiment, the system includes a credential generator 22, a memory 24, a parser 26, and a port 28. This is just one example of a system configuration and the additional components are optional. Indeed, in many systems the credential generator 22 would reside separate from the credential validation module 30. Using this system embodiment, however, it is possible to see how a networked device can employ security measures to mitigate the likelihood of attacks.

For outgoing data transmissions, the credential generator 22 generates security credentials. As used here, security credentials include public-private encryption key pairs, tokens, digital signatures or any other type of credential that can be used to verify the identity of the transmitting entity. The memory 24 may store credentials generated to allow the system 20 to include the credentials in outgoing data transmissions. These data transmissions would be sent out through port 28.

Port 28 also allows the system 20 to receive datagrams. The security credentials in these datagrams would then be verified and validated by the credential validation module 30. For example, a transmission may include a public key from a partner. The credential validation module would then operate on the public key to ensure that the public key transmitted with the data matches the public key previously received from that partner. This allows the receiving party to determine that it is dealing with the right partner, not an impostor.

As part of receiving the datagram through port 28, a parser may extract the security credentials from the datagram payload data. As used here, payload data refers to the data contained inside the datagram that does not include information in the datagram necessary for transmission and management of the datagram, such as the header. The parser may not be required, however, as the credentials may be received in such a format that they do not require extraction, or the credential validation module may have the capability of extracting the credentials without need for a parser.

This is shown in more detail in an embodiment of the invention shown in Figure 3. In this embodiment, either the parser 26 provides an arithmetic logic unit (ALU) 36 with specific information about the security credentials, or the ALU receives it directly, as mentioned above. In this embodiment, the security credentials have at least three parts. The first part is the actual credential. The second is the method that was used to generate the credential. This may include an arithmetic algorithm executed to obtain the credential. The third part is the value of the credential. In this particular embodiment, the ALU 36 uses the digest method provided to recalculate the credential value by operating on the credential. The comparator 38 then compares the recalculated result with the original value and determines if the credential is valid.
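As an illustration only, the three-part credential just described can be modeled as a simple record. The following Python sketch and its field names are assumptions chosen for exposition, not structures recited in the disclosure; a validation sketch consuming these three fields follows the description of Figure 4 below.

from dataclasses import dataclass

@dataclass(frozen=True)
class SecurityCredential:
    credential: bytes     # the actual credential, operated on by the ALU 36
    digest_method: str    # identifier of the method used to generate it
    value: bytes          # the provided credential value to compare against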
The use of an ALU and a comparator is merely one example of hardware components that could perform this process and is not intended to limit the scope of the possible embodiments of the invention in any way. In this manner, the security credentials are validated in the hardware of the system, leaving them a little less vulnerable than software validation, and speeding the process of validation by moving it into hardware.

An embodiment of a process of performing such a validation is shown in Figure 4. An incoming datagram with associated security credentials is optionally parsed at 40. At 42, the actual security credentials are received. As mentioned previously, the validation module may not require the credentials to be parsed. At 44, the digest of the security credentials is recalculated. A digest, or a 'hash', is a representation of the security credentials resulting from a series of operations performed on them, where the series of operations is the digesting method or algorithm mentioned above. At 46, the recalculated representation or digest is compared to the provided representation or digest. If the two values match, the security credentials are valid and the data can be trusted as being from where it appears to originate. Also, if the security credentials are valid, they may be stored at 48. Having a stored credential to be checked against incoming credentials allows the process to shorten to just a comparison of the previously received credential and a recently received credential to determine if the data is still trustworthy. However, as noted above, storing the credential is optional. Storing the credentials in hardware, rather than in a file system or database, may increase security.

A specific application of this type of credential validation may be discussed in terms of the Simple Object Access Protocol (SOAP). The payload of a SOAP message includes several elements that represent security credentials. An embodiment of a process to validate security credentials in hardware for a SOAP payload is shown in Figure 5. At 50, the SOAP payload is parsed, producing three elements of the SOAP signature: SignatureMethod 52, SignedInfo 54, and SignatureValue 56. SignedInfo 54 is the information that is actually being signed by the digital signature. This is typically canonicalized, meaning transformed into a well-defined, standard format, with a method that is shown in the element CanonicalizationMethod, shown below in the example of a SOAP payload. The SignatureMethod element 52 is the method that is used to convert the canonicalized SignedInfo into the SignatureValue 56. The process then uses the SignatureMethod and the SignedInfo to recalculate the digest at 58 and compares the resulting signature value from that recalculation to the provided SignatureValue 56, at 60. If this passes, the process moves forward. If the calculated SignatureValue does not correctly compare to the SignatureValue that was provided, the process fails and the digital signature is presumed to be invalid.

If the SignatureValue is correct, the process recalculates the digest of the references contained in a Reference element at 62. Each Reference element includes the digest method and the resulting digest value calculated over the identified data object. A data object is signed by computing its digest value and a signature over that value. The signature is later checked via reference and signature validation.
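The recalculate-and-compare flow of Figure 4 can be sketched in software for illustration, even though the disclosure performs these steps in hardware. The following Python fragment is a minimal, hypothetical sketch: the digest-method registry, the function name, and the use of SHA-1 (borrowed from the SOAP example below) are assumptions for illustration, not part of the disclosed implementation. Its three parameters correspond to the three credential parts modeled above.

import hashlib
import hmac

# Hypothetical registry mapping a credential's declared digesting method
# to an implementation; SHA-1 is shown because the SOAP example below
# declares the xmldsig SHA-1 digest method.
DIGEST_METHODS = {
    "http://www.w3.org/2000/09/xmldsig#sha1": hashlib.sha1,
}

validated_store = {}  # optional storage of valid credentials (step 48)

def validate_credential(credential: bytes, method_uri: str,
                        provided_digest: bytes) -> bool:
    # Step 44: recalculate the digest with the declared digesting method.
    recalculated = DIGEST_METHODS[method_uri](credential).digest()
    # Step 46: compare; this stands in for the hardware comparator 38.
    if hmac.compare_digest(recalculated, provided_digest):
        validated_store[(method_uri, provided_digest)] = credential
        return True
    return False

A stored credential can then be compared directly against later arrivals, shortening validation to the single comparison described above.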
At 64, the recalculated digest is compared to a provided digest value 66 as a second check on the signature validation. If that match is correct, the signature may optionally be stored at 68 for future comparison in transactions with that partner. If the match is not correct, the signature is assumed to be invalid and the process either returns to validating another signature, as shown, or progresses to handle the invalid signature. Handling of invalid signatures is outside the scope of this disclosure.

An example of a SOAP payload with these elements is shown below.

<Signature Id="MyFirstSignature" xmlns="http://www.w3.org/2000/09/xmldsig#">
  <SignedInfo>
    <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
    <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#dsa-sha1"/>
    <Reference URI="http://www.w3.org/TR/2000/REC-xhtml1-20000126/">
      <Transforms>
        <Transform Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
      </Transforms>
      <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
      <DigestValue>j6lwx3rvEPO0vKtMup4NbeVu8nk=</DigestValue>
    </Reference>
  </SignedInfo>
  <SignatureValue>MC0CFFrVLtRlk=...</SignatureValue>
  <KeyInfo>
    <KeyValue>
      <DSAKeyValue>
        <P>...</P><Q>...</Q><G>...</G><Y>...</Y>
      </DSAKeyValue>
    </KeyValue>
  </KeyInfo>
</Signature>

As mentioned above, the required SignedInfo element is the information that is actually signed. Note that the algorithms used in calculating the SignatureValue are also included in the signed information, while the SignatureValue element is outside SignedInfo. The CanonicalizationMethod is the algorithm that is used to canonicalize the SignedInfo element before it is digested as part of the signature operation. Note that this example is not in canonical form. Canonicalization is an optional process and not required for implementation of embodiments of the invention.

The SignatureMethod is the algorithm that is used to convert the canonicalized SignedInfo into the SignatureValue. It is a combination of a digest algorithm and a key-dependent algorithm, and possibly other algorithms. The algorithm names are signed to protect against attacks based on substituting a weaker algorithm. To promote application interoperability, one may specify a set of signature algorithms that must be implemented, though their use is at the discretion of the signature creator. One may specify additional algorithms as 'recommended' or 'optional' for implementation; the design also permits arbitrary user-specified algorithms.

Each Reference element includes the digest method and the resulting digest value calculated over the identified data object. It also may include transformations that produced the input to the digest operation. A data object is signed by computing its digest value and a signature over that value. The signature is later checked via reference and signature validation.

KeyInfo indicates the credential to be used to validate the signature. Possible forms for credentials include digital certificates, tokens, key names, and key agreement algorithms and information, as examples. KeyInfo is optional for two reasons. First, the signer may not wish to reveal key information to all document processing parties. Second, the information may be known within the application's context and need not be represented explicitly.
Since KeyInfo is outside of SignedInfo, if the signer wishes to bind the keying information to the signature, a Reference can easily identify and include the KeyInfo as part of the signature.

It must be noted that the specifics of the above message are only intended as an example, and that the use of a SOAP payload is also intended as an example to promote better understanding of embodiments of the invention. No limitation on the scope of the claims is intended, nor should any be implied. Thus, although there has been described to this point a particular embodiment for a method and apparatus for hardware-assisted credential validation, it is not intended that such specific references be considered as limitations upon the scope of this invention except insofar as set forth in the following claims.
A method for forming a high-k gate dielectric film (106) by CVD of an M-SiN or M-SiON, such as HfSiON. Post deposition anneals are used to adjust the nitrogen concentration.
1. A method for fabricating an integrated circuit, comprising the steps of: providing a partially fabricated semiconductor body; and forming a gate dielectric by depositing a high-k film comprising metal, silicon, and nitrogen by chemical vapor deposition on a surface of a semiconductor body.

2. The method of claim 1, wherein said high-k film comprises a metal-silicon-oxynitride.

3. The method of claim 1, wherein said high-k film comprises a material selected from the group consisting of HfSiN, HfSiON, ZrSiN, ZrSiON, LaSiN, LaSiON, YSiN, YSiON, GdSiN, GdSiON, EuSiN, EuSiON, PrSiN, and PrSiON.

4. The method of any preceding claim, wherein said chemical vapor deposition step occurs at a temperature in the range of 200°C to 900°C and a pressure in the range of 0.1 Torr to 760 Torr.

5. The method of any preceding claim, further comprising the step of annealing the high-k film to control the nitrogen concentration and vacancies within the high-k film.

6. The method of claim 5, wherein said annealing step comprises: a first higher temperature anneal in a non-oxidizing ambient; and a second lower temperature anneal in an oxidizing ambient, wherein said lower temperature is lower than said higher temperature.

7. A method for fabricating an integrated circuit, comprising the steps of: providing a partially fabricated semiconductor body; and forming a gate dielectric by: chemical vapor deposition of a high-k film comprising metal, silicon, and nitrogen on a surface of a semiconductor body using a silicon precursor selected from the group consisting of tetrakis(dimethylamido)silicon and tetrakis(diethylamido)silicon, a metal precursor selected from the group consisting of tetrakis(dimethylamido)metal and tetrakis(diethylamido)metal, where metal is Hf, Zr, La, Y, Gd, Eu, or Pr, and a nitrogen-containing precursor.

8. The method of claim 7, wherein said high-k film comprises a metal-silicon-oxynitride and the chemical vapor deposition step further comprises using an oxygen precursor.

9. The method of claim 7 or claim 8, further comprising the step of annealing the high-k film to control the nitrogen concentration.

10. The method of claim 9, wherein said annealing step comprises: a first higher temperature anneal in a non-oxidizing ambient; and a second lower temperature anneal in an oxidizing ambient, wherein said lower temperature is lower than said higher temperature.
FIELD OF THE INVENTION

The invention is generally related to the field of forming high dielectric constant (high-k) films in semiconductor devices, and more specifically to forming metal-silicon-oxynitride gate dielectrics by chemical vapor deposition or atomic layer deposition.

BACKGROUND OF THE INVENTION

As semiconductor devices have scaled to smaller and smaller dimensions, the gate dielectric thickness has continued to shrink. Although further scaling of devices is still possible, scaling of the gate dielectric thickness has almost reached its practical limit with the conventional gate dielectric materials, silicon dioxide and silicon oxynitride. Further scaling of silicon dioxide gate dielectric thickness will involve a host of problems: extremely thin layers allow for large leakage currents due to direct tunneling through the oxide. Because such layers are formed literally from a few layers of atoms, exacting process control is required to repeatably produce such layers. Uniformity of coverage is also critical because device parameters may change dramatically based on the presence or absence of even a single monolayer of dielectric material. Finally, such thin layers form poor diffusion barriers to dopants from polycrystalline silicon electrodes.

Realizing the limitations of silicon dioxide, researchers have searched for alternative dielectric materials which can be formed in a thicker layer than silicon dioxide and yet still produce the same field effect performance. This performance is often expressed as "equivalent oxide thickness": although the alternative material layer may be thicker, it has the equivalent effect of a much thinner layer of silicon dioxide (commonly called simply "oxide"). In some instances, silicon dioxide has been replaced with a SiON. However, even higher-k dielectrics will soon be needed. Some films currently being investigated include deposited oxides or nitrides such as ZrO2, ZrSiO, ZrSiON, HfO2, HfON, HfSiO, HfSiON, AlON, AlZrO, HfAlO, YSiO, LaSiO, LaAlO, YAlO, etc. Manufacturable processes for incorporating these materials into the CMOS flow are needed.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:

FIG. 1 is a cross-sectional diagram of a HfSiO2 gate dielectric with an interfacial oxide formed according to the prior art; and

FIGs. 2-6 are cross-sectional diagrams of a high-k gate dielectric formed according to an embodiment of the invention at various stages of fabrication.

DETAILED DESCRIPTION OF THE EMBODIMENTS

One particularly desirable class of high-k films is the metal-silicon-oxides (MSiO2), where the metal is Hf, Zr, La, Y, etc. Unfortunately, when an MSiO2 such as HfSiO2 14 is deposited by CVD, an interfacial oxide (silicon dioxide) 12 forms at the interface between the substrate 10 and the HfSiO2, as shown in FIG. 1. The Si/O rich interface prevents scaling below ∼1.5 nm.

One possible solution is nitridation of the Si substrate surface. Nitridation of the surface is very effective in minimizing the oxidation of the Si substrate during the initial stages of deposition. However, nitridation of the Si substrate surface gives rise to a high interfacial trap density and low minority carrier mobility.

The current invention provides a method for forming a high-k dielectric without a SiO2 interfacial layer. Embodiments of the invention deposit MSiON or MSiN by CVD directly on the Si substrate surface.
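As a point of reference (this relation is standard textbook background rather than part of the disclosure), the "equivalent oxide thickness" mentioned above is conventionally obtained by scaling the physical thickness of the high-k layer by the ratio of dielectric constants:

t_{\mathrm{eq}} \;=\; t_{\text{high-}k}\,\frac{\kappa_{\mathrm{SiO_2}}}{\kappa_{\text{high-}k}}, \qquad \kappa_{\mathrm{SiO_2}} \approx 3.9

For instance, taking an assumed high-k permittivity of about 12 (an illustrative value only; the disclosure does not state one), a 36 Å film would behave like roughly 36 × 3.9/12 ≈ 12 Å of SiO2.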
Post deposition anneals are then used to adjust the nitrogen concentration and to anneal out defects.

A first embodiment of the invention will now be described in conjunction with a method for forming a MOSFET transistor. Referring to FIG. 2, a semiconductor body 100 is processed through the formation of isolation structures 102 and any desired channel or threshold adjust implants. Semiconductor body 100 typically comprises a silicon substrate with or without additional epitaxial layers formed thereon, as is known in the art.

The surface 104 of semiconductor body 100 is preferably a clean, oxide-free surface. In addition, the surface 104 may be hydrogen terminated. Methods for providing such a surface are known in the art. U.S. Patent No. 6,291,867, issued 9/18/01, assigned to Texas Instruments Incorporated and incorporated herein by reference, describes several methods for providing such a surface.

A MSiON gate dielectric 106 is deposited by CVD on the surface of semiconductor body 100, as shown in FIG. 3. MSiON gate dielectric 106 may, for example, comprise HfSiON, ZrSiON, LaSiON, YSiON, GdSiON, EuSiON, or PrSiON. Including nitrogen in the CVD deposition prevents or at least minimizes the formation of an interfacial oxide. The deposition process may be a thermal CVD process at a temperature in the range of 200-900°C and a pressure in the range of 0.1 Torr to 760 Torr with any of the following precursor gases:

M(N(CH3)2)4 + Si(N(CH3)2)4 + RG = M-SiON
M(N(C2H5)2)4 + Si(N(CH3)2)4 + RG = M-SiON
M(N(C2H5)2)4 + Si(N(C2H5)2)4 + RG = M-SiON
M(N(CH3)2)4 + Si(N(C2H5)2)4 + RG = M-SiON
M(i-O-Pr)2(thd)2 + DBDAS + RG = M-SiON

where M = Hf, Zr, La, Y, etc., M(i-O-Pr)2(thd)2 is bis(isopropoxy)bis(tetramethylheptanedionato) "metal", DBDAS is [(CH3)3CO]2-Si-[O2C(CH3)]2, and RG is a reactant gas or combination of reactant gases comprising NH3, N2O, NO or other nitriding gases in any relative ratio (e.g., 50% NH3, 50% N2O, and 0% NO).

Alternatively, the MSiON can be formed by using plasma enhanced CVD techniques to break down the metalorganic species and decrease the carbon content. There are many embodiments that one can generate using the plasma enhanced techniques.

Referring to FIG. 3, M-SiON gate dielectric 106 may be subjected to an oxidizing anneal. The purpose of the anneal is to adjust the nitrogen concentration and to anneal out defects. An oxidizing anneal increases the oxygen content and decreases the nitrogen content. In the preferred embodiment, a two-step anneal is used, such as that described in copending U.S. Patent Application Serial No.            (TI-33776), filed            , assigned to Texas Instruments Incorporated and incorporated herein by reference. The two-step anneal comprises a first high temperature anneal (e.g., 700-1100°C) in a non-oxidizing ambient (e.g., N2) followed by a lower temperature anneal (e.g., at a maximum of 1100°C) in an oxidizing ambient (e.g., O2, N2O, NO, ozone, UV O2, H2O2).

A MSiON formed by the above CVD process has several advantages. First, the interfacial oxide thickness is reduced versus a MSiO2 deposition. In the example of FIG. 1, 9 Å of interfacial oxide formed at the interface when a 36 Å HfSiO2 was formed. Incorporating nitrogen in the CVD process according to the invention decreases this interfacial oxide. Second, the addition of nitrogen further increases the dielectric constant.
Finally, dopant penetration is decreased because of the presence of nitrogen, and thermal stability is increased.

After the anneal, a gate electrode material 110 is deposited over the high-k gate dielectric 106, as shown in FIG. 4. Processing then continues by patterning and etching to form the gate electrode, forming the source/drain junction regions, forming interconnects, and packaging the device.

A second embodiment of the invention will now be described in conjunction with a method for forming a MOSFET transistor. As in the first embodiment, a semiconductor body 100 is processed through the formation of isolation structures 102 and any desired channel or threshold adjust implants. Semiconductor body 100 typically comprises a silicon substrate with or without additional epitaxial layers formed thereon, as is known in the art.

The surface 104 of semiconductor body 100 is preferably a clean, oxide-free surface. In addition, the surface 104 may be hydrogen terminated. Methods for providing such a surface are known in the art. U.S. Patent No. 6,291,867, issued 9/18/01, assigned to Texas Instruments Incorporated and incorporated herein by reference, describes several methods for providing such a surface.

A MSiN gate dielectric 108 is deposited by CVD on the surface of semiconductor body 100, as shown in FIG. 5. MSiN gate dielectric 108 may, for example, comprise HfSiN, ZrSiN, LaSiN, YSiN, GdSiN, EuSiN, or PrSiN. Including nitrogen in the CVD deposition prevents or at least minimizes the formation of an interfacial oxide. The MSiN film 108 can be deposited using a number of precursors, such as amido precursors [tetrakis(dimethylamido)silicon, tetrakis(diethylamido)silicon, tetrakis(dimethylamido)hafnium (or other metal), and tetrakis(diethylamido)hafnium (or other metal)], beta diketonates, tertiary butoxide metal precursors, etc.

Alternatively, the MSiN can be formed by using plasma enhanced CVD techniques to break down the metalorganic species and decrease the carbon content. There are many embodiments that one can generate using the plasma enhanced techniques.

Referring to FIG. 6, M-SiN gate dielectric 108 is subjected to an oxidizing anneal to form M-SiON 106. The purpose of the anneal is to adjust the nitrogen concentration, to anneal out defects, and to incorporate oxygen. As described above, a two-step anneal sequence may be used.

After the anneal, a gate electrode material 110 is deposited over the high-k gate dielectric 106, as shown in FIG. 4. Processing then continues by patterning and etching to form the gate electrode, forming the source/drain junction regions, forming interconnects, and packaging the device.

While this invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, will be apparent to persons skilled in the art upon reference to the description. It is therefore intended that the appended claims encompass any such modifications or embodiments.
Systems and methods for mapping a time-based data acquisition (DAQ) to an isochronous data transfer channel of a network. A buffer associated with the isochronous data transfer channel of the network may be configured. A clock and a local buffer may be configured. A functional unit may be configured to initiate continuous performance of the time-based DAQ, transfer data to the local buffer, initiate transfer of the data between the local buffer and the buffer at a configured start time, and repeat the transferring and initiating transfer in an iterative manner, thereby transferring data between the local buffer and the buffer. The buffer may be configured to communicate data over the isochronous data transfer channel of the network, thereby mapping the time-based DAQ to the isochronous data transfer channel of the network.
Claims

1. A method for configuring the mapping of a continuous time-based data acquisition to an isochronous data transfer channel of a network, the method comprising: configuring a local buffer for receipt of data from the continuous time-based data acquisition; configuring a first buffer for receipt of data from the local buffer, wherein the first buffer is associated with the isochronous data transfer channel, wherein the isochronous data transfer channel has an associated bandwidth, and wherein said configuring the first buffer reserves the associated bandwidth; configuring a functional unit to: initiate performance of the continuous time-based data acquisition, wherein the continuous time-based data acquisition is performed according to a first clock, wherein the data from the continuous time-based data acquisition is stored in the local buffer; and initiate continuous transfer of the data from the local buffer to the first buffer, wherein the transfer from the local buffer to the first buffer is performed according to the first clock.

2. The method of claim 1, wherein the first clock is configured to synchronize to a global clock of the network.

3. The method of claim 1, wherein said configuring the first buffer comprises: configuring a buffer size of the first buffer; and configuring a transfer frequency of the first buffer.

4. The method of claim 1, wherein said configuring the local buffer comprises: configuring a size of the local buffer based on a transfer frequency of the first buffer and a data rate of the continuous time-based data acquisition.

5. The method of claim 1, further comprising: configuring a start time of the continuous transfer of the data from the local buffer to the first buffer, wherein the start time is based on a buffer size of the first buffer, a start time of the continuous time-based data acquisition, and a data rate of the continuous time-based data acquisition; wherein the functional unit is further configured to: initiate performance of the continuous time-based data acquisition at the start time of the continuous time-based data acquisition; and initiate continuous transfer of the data from the local buffer to the first buffer at the start time of the continuous transfer.

6. The method of claim 5, wherein the start time of the continuous time-based data acquisition is in phase with a global clock of the network.

7. The method of claim 1, wherein said configuring the functional unit to initiate continuous transfer of the data from the local buffer to the first buffer further comprises configuring the functional unit to perform a data integrity process during the continuous transfer, thereby preventing data loss.

8. The method of claim 7, wherein the data integrity process comprises: embedding a forward error correction code in the data, wherein said performing the data integrity process uses the embedded forward error correction code.
9. The method of claim 1, further comprising: configuring one or more additional local buffers for receipt of data from a corresponding one or more additional continuous time-based data acquisitions; and configuring the functional unit to: for each of the one or more additional continuous time-based data acquisitions: initiate performance of the continuous time-based data acquisition, wherein the continuous time-based data acquisition is performed according to a respective first clock, wherein the data from the continuous time-based data acquisition is stored in a respective local buffer of the one or more additional local buffers; and initiate continuous transfer of the data from the respective local buffer to the first buffer according to the respective first clock.

10. The method of claim 9, further comprising: configuring one or more further additional local buffers for receipt of data from a corresponding one or more further additional continuous time-based data acquisitions; configuring at least one additional first buffer for receipt of data from the one or more further additional local buffers, wherein each additional first buffer is associated with a corresponding additional isochronous data transfer channel and a corresponding additional functional unit; and configuring each of the additional functional units to: for a respective at least one of the one or more further additional continuous time-based data acquisitions: initiate performance of the continuous time-based data acquisition, wherein the continuous time-based data acquisition is performed according to a respective first clock, wherein the data from the continuous time-based data acquisition is stored in a respective local buffer of the one or more further additional local buffers; and initiate continuous transfer of the data from the respective local buffer to a first buffer corresponding to the functional unit, wherein the transfer from the respective local buffer to the first buffer is performed according to the respective first clock.

11. The method of claim 1, further comprising: configuring one or more additional local buffers for receipt of data from a corresponding one or more additional continuous time-based data acquisitions; configuring at least one additional first buffer for receipt of data from the one or more additional local buffers, wherein each additional first buffer is associated with a corresponding additional isochronous data transfer channel and a corresponding additional functional unit; and configuring each of the additional functional units to: for a respective at least one of the one or more additional continuous time-based data acquisitions: initiate performance of the continuous time-based data acquisition, wherein the continuous time-based data acquisition is performed according to a respective first clock, wherein the data from the continuous time-based data acquisition is stored in a respective local buffer of the one or more additional local buffers; and initiate continuous transfer of the data from the respective local buffer to a first buffer corresponding to the functional unit, wherein the transfer from the respective local buffer to the first buffer is performed according to the respective first clock.
12. The method of claim 1, further comprising: configuring one or more additional local buffers for receipt of data from a corresponding one or more additional continuous time-based data acquisitions; configuring one or more additional first buffers for receipt of data from the one or more additional local buffers, respectively, wherein each of the additional one or more first buffers is associated with a corresponding additional isochronous data transfer channel and a corresponding additional functional unit; and configuring each of the corresponding additional functional units to: initiate performance of a respective continuous time-based data acquisition of the one or more additional continuous time-based data acquisitions, wherein the respective continuous time-based data acquisition is performed according to a respective first clock, wherein the data from the respective continuous time-based data acquisition is stored in a respective local buffer of the one or more additional local buffers; and initiate continuous transfer of the data from the respective local buffer to a corresponding additional first buffer associated with the corresponding additional functional unit, wherein the transfer from the respective local buffer to the corresponding additional first buffer is performed according to the respective first clock.

13. The method of claim 1, wherein the local buffer is a first local buffer, wherein the isochronous data transfer channel comprises a first isochronous data transfer channel, wherein the functional unit comprises a first functional unit, and wherein the method further comprises: configuring a second local buffer for receipt of data from a continuous time-based control operation; configuring a second buffer for receipt of data from the second local buffer, wherein the second buffer is coupled to a second isochronous data transfer channel; and configuring a second functional unit to: initiate performance of the continuous time-based control operation, wherein the continuous time-based control operation is performed according to a second clock, wherein the data from the continuous time-based control operation is stored in the second local buffer; and initiate continuous transfer of the data from the second local buffer to the second buffer, wherein the transfer from the second local buffer to the second buffer is performed according to the second clock.

14. The method of claim 13, wherein the first buffer comprises the second buffer, wherein the isochronous data transfer channel further comprises the second isochronous data transfer channel, and wherein the functional unit further comprises the second functional unit.

15. The method of claim 1, wherein the network comprises: a real time network; an Ethernet network; or a memory mapped bus.
16. A system for mapping a continuous time-based data acquisition to an isochronous data transfer channel of a network, the system comprising: a functional unit; a local buffer, coupled to the functional unit, and configured to receive data from the continuous time-based data acquisition; and a first buffer, coupled to the functional unit and the local buffer, and configured to receive data from the local buffer, wherein the first buffer is associated with the isochronous data transfer channel; and wherein the functional unit is configured to: initiate performance of the continuous time-based data acquisition, wherein the continuous time-based data acquisition is performed according to a first clock, wherein the data from the continuous time-based data acquisition is stored in the local buffer; and initiate continuous transfer of the data from the local buffer to the first buffer, wherein the transfer from the local buffer to the first buffer is performed according to the first clock.

17. The system of claim 16, wherein the first clock is configured to synchronize to a global clock of the network.

18. The system of claim 16, wherein the first buffer has a configurable buffer size and a configurable transfer frequency.

19. The system of claim 16, wherein the local buffer has a size, wherein the size is configured based on a transfer frequency of the first buffer and a data rate of the continuous time-based data acquisition.

20. The system of claim 16, wherein the functional unit is further configured to: initiate performance of the continuous time-based data acquisition at a start time of the continuous time-based data acquisition; and initiate continuous transfer of the data from the local buffer to the first buffer at a start time of the continuous transfer, wherein the start time of the continuous transfer is based on a buffer size of the first buffer, the start time of the continuous time-based data acquisition, and a data rate of the continuous time-based data acquisition.

21. The system of claim 20, wherein the start time of the continuous time-based data acquisition is in phase with a global clock of the network.

22. The system of claim 16, wherein to initiate continuous transfer of the data from the local buffer to the first buffer, the functional unit is further configured to perform a data integrity process during the continuous transfer, thereby preventing data loss.

23. The system of claim 22, wherein the data integrity process comprises: embedding a forward error correction code in the data, wherein said performing the data integrity process uses the embedded forward error correction code.

24. The system of claim 16, further comprising: one or more additional local buffers, each configured to receive data from a respective additional continuous time-based data acquisition, wherein the functional unit is further configured to: for each respective additional continuous time-based data acquisition: initiate performance of the respective additional continuous time-based data acquisition, wherein the respective additional continuous time-based data acquisition is performed according to a respective first clock, wherein the data from the respective additional continuous time-based data acquisition is stored in a respective local buffer of the one or more additional local buffers; and initiate continuous transfer of the data from the respective local buffer to the first buffer according to the respective first clock.
25. The system of claim 16, wherein the network comprises: a real time network; an Ethernet network; or a memory mapped bus.

26. A method for mapping a continuous time-based data acquisition to an isochronous data transfer channel of a network, the method comprising: configuring a local buffer for receipt of the data from the continuous time-based data acquisition; configuring a first buffer for receipt of data from the local buffer, wherein the first buffer is coupled to the isochronous data transfer channel; performing the continuous time-based data acquisition according to a first clock, wherein the data from the continuous time-based data acquisition is stored in the local buffer; performing continuous transfer of the data from the local buffer to the first buffer according to the first clock; and providing the data from the first buffer to the isochronous data channel isochronously.
Title: Lossless Time Based Data Acquisition and Control in a Distributed System

Field of the Invention

[0001] The present invention relates to the field of isochronous data transfer, and more particularly to a system and method for a time-based waveform acquisition engine.

Description of the Related Art

[0002] Time based, or isochronous (i.e., regular, periodic), data transfer is used by control applications in which timely transfer of a data stream or buffer is of utmost importance. If for any reason data arrive late, they cannot be used and are discarded. Accordingly, control based applications are typically designed to tolerate some loss or late arrival of data. For example, in some control based applications, if data for one control period are lost, the applications can detect this and defer the control loop calculation until the next period. Additionally, if data continued to arrive late or did not arrive at all for multiple control loops, the control based application could flag an error and take more severe actions.

[0003] Furthermore, networks and processor interconnects have implemented features specific to the support of isochronous data transfer for control based applications. For example, features in standards such as PCI Express and time-sensitive (TS) networking (i.e., IEEE 802.1) incorporate support for isochronous data transfer. These features incorporate the two fundamental requirements necessary to support isochronous data transfer: first, the requirement that there is synchronization between endpoints participating in isochronous data transfer, thus guaranteeing the coordinated transmission and reception of data; and second, the requirement that there is reserved bandwidth all the way from the producer of the data to the consumer of the data, thus guaranteeing the delivery and the synchronization of endpoints.

[0004] There are new advances in applications that may benefit from these fundamental requirements of isochronous data transfer. For example, the so-called "Internet of Things" is expanding Internet connectivity to machines in a broad range of fields, from power systems to medical devices, among others. In such applications, the most common usage of data is aggregation for analysis or logging. Furthermore, since measurement nodes that acquire data in these applications are typically distributed over wide geographical areas, conventional signal based synchronization techniques cannot be applied. Instead, time is used to synchronize measurements, and data are acquired either via finite or continuous acquisition.

[0005] In a finite acquisition, a set of data points is acquired during each of a series of periodic intervals. Each interval is synchronized in phase and frequency amongst all nodes in a system. Examples of such systems include power phasor measurement units as well as structural and machine monitoring systems.

[0006] In a continuous acquisition, data are continuously acquired once the acquisition has been started. The start time, t0, and the time between acquisitions, Δt, are synchronized amongst all nodes in a system. Additionally, the acquisition generally terminates only when a command explicitly terminating the acquisition is received. An example of such a system is an in-vehicle data logger.

[0007] In such time-based data acquisition systems, since data are aggregated, applications may tolerate late arrival of data but not loss.
Hence, data transferred in these systems are currently treated either independently of isochronous data, e.g., transferred as best effort or asynchronous data, or aggregated at the endpoints by transmitting one data point at a time using an isochronous channel. Treatment of the data as independent of isochronous data addresses the lossless requirement by acknowledging the data transfer and re-transmitting it in case of loss. Use of an isochronous channel uses the reserved bandwidth on the isochronous channel to eliminate loss due to congestion, but does not handle the case of loss due to electromagnetic interference (EMI) or data corruption due to bit errors on the network or bus. Additionally, in prior approaches, aggregation at the endpoints and use of the isochronous channel can only be mapped to finite acquisition and cannot address the continuous acquisition model. Thus, improvements in data transfer in such systems are sought.

[0008] For example, improvements in the timely delivery of time based measurements would provide multiple benefits. First, timely delivery may reduce aggregation latency, thereby improving processing efficiency. Additionally, timely delivery may improve monitoring cycle time, allowing real-time analysis and data set reduction, from a storage perspective, and allow for faster response times. Timely delivery may also increase network bandwidth utilization by reducing delays due to retransmission and congestion, and improve coexistence with control systems without introducing jitter. Further, timely delivery may allow for the introduction of new control models where algorithms may use coherent sets of aggregated waveforms as inputs to compute control outputs. Finally, timely delivery may reduce and simplify memory management, allowing for precise pre-runtime allocation that matches the acquisition rate to the data transfer rate. Since there are a multitude of advantages to improving the timely delivery of time based measurements, the current application describes various embodiments of a novel way of mapping time-based data acquisitions into an isochronous data transfer channel.

Summary of the Invention

[0009] Various embodiments of a system and method for configuring and performing a mapping of a continuous time-based data acquisition (DAQ) to an isochronous data transfer channel of a network are presented below.

[0010] In one embodiment, a system for mapping a continuous time-based data acquisition to an isochronous data transfer channel of a network may include a functional unit, a local buffer, and a buffer. The network may be a real time network, an Ethernet network, or a memory mapped bus. The local buffer may be coupled to the functional unit and may be configured to receive data from the continuous time-based data acquisition. The buffer may be associated with the isochronous data transfer channel. The buffer may be coupled to the functional unit and the local buffer, and may be configured to receive data from the local buffer. The system (e.g., the functional unit) may be configured via the methods presented below.

[0011] The functional unit may be configured to initiate performance of the continuous time-based DAQ. The continuous time-based DAQ may be performed according to a clock. In one embodiment, the clock may be configured to synchronize to a global clock of the network. The data from the continuous time-based DAQ may be stored in the local buffer.
Further, the functional unit may be configured to initiate continuous transfer of the data from the local buffer to the buffer. The transfer from the local buffer to the buffer may be performed according to the first clock.

[0012] In one embodiment, the buffer may have a configurable buffer size and a configurable transfer frequency. Further, the local buffer may have a size, and the size may be configured based on a transfer frequency of the buffer and a data rate of the continuous time-based DAQ.

[0013] In some embodiments, the functional unit may be configured to initiate performance of the continuous time-based DAQ at a start time of the continuous time-based DAQ, and to initiate continuous transfer of the data from the local buffer to the buffer at a start time of the continuous transfer. The start time of the continuous transfer may be based on a buffer size of the buffer, the start time of the continuous time-based DAQ, and a data rate of the continuous time-based DAQ. In one embodiment, the start time of the continuous time-based data acquisition may be in phase with a global clock of the network.

[0014] In certain embodiments, the functional unit may be configured to perform a data integrity process during the continuous transfer, thereby preventing data loss. The data integrity process may include embedding a forward error correction code in the data.

[0015] In one embodiment, the system may include additional local buffers, and each additional local buffer may be configured to receive data from a respective additional continuous time-based DAQ. In such embodiments, the functional unit may be configured to, for each respective additional continuous time-based DAQ, initiate performance of the respective additional continuous time-based DAQ and initiate continuous transfer of the data from the respective local buffer to the buffer according to the respective clock. Note that the respective additional continuous time-based DAQ may be performed according to a respective clock, and the data from the respective additional continuous time-based DAQ may be stored in a respective local buffer of the additional local buffers.

[0016] An exemplary method for configuring the mapping of a continuous time-based DAQ to an isochronous data transfer channel of a network may include configuring a local buffer for receipt of data from the continuous time-based DAQ, configuring a buffer for receipt of data from the local buffer, and configuring a functional unit as described above. Note that the buffer may be associated with the isochronous data transfer channel, the isochronous data transfer channel may have an associated bandwidth, and configuring the buffer may reserve the associated bandwidth.

[0017] An exemplary method for mapping a continuous time-based DAQ to an isochronous data transfer channel of a network may include configuring a local buffer for receipt of the data from the continuous time-based DAQ, and configuring a buffer, which may be coupled to the isochronous data transfer channel, for receipt of data from the local buffer. Further, the continuous time-based DAQ may be performed according to a clock, and the data from the continuous time-based DAQ may be stored in the local buffer. The continuous transfer of the data from the local buffer to the buffer may be performed according to the clock, and the data may be isochronously provided from the buffer to the isochronous data channel.
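As a non-limiting illustration of the buffering scheme just summarized, the following Python sketch models a local buffer feeding a fixed-size buffer that is drained once per isochronous cycle. The class names (LocalBuffer, IsochronousBuffer), the tick-based timing, and all sizes are assumptions chosen for illustration, not elements of the claims or figures.

```python
from collections import deque

class LocalBuffer:
    """FIFO that accumulates data produced by the continuous acquisition."""
    def __init__(self, size):
        self.size = size
        self.fifo = deque()

    def write(self, sample):
        # A correctly sized local buffer (see equation (2) later in the
        # text) should never overflow during steady-state operation.
        if len(self.fifo) >= self.size:
            raise OverflowError("local buffer overrun")
        self.fifo.append(sample)

    def read_block(self, block_size):
        # Copy up to one block of accumulated data toward the buffer.
        n = min(block_size, len(self.fifo))
        return [self.fifo.popleft() for _ in range(n)]

class IsochronousBuffer:
    """Fixed-size buffer drained once per cycle of the isochronous channel."""
    def __init__(self, size):
        self.size = size
        self.data = []

    def fill(self, block):
        self.data.extend(block)

    def emit(self):
        # One packet per isochronous cycle; when the buffer is not full,
        # filler is transmitted and the payload size is recorded so a
        # consumer can ignore the filler (as described later in the text).
        payload = self.data[:self.size]
        del self.data[:len(payload)]
        filler = [None] * (self.size - len(payload))
        return {"payload_size": len(payload), "payload": payload + filler}

if __name__ == "__main__":
    local = LocalBuffer(size=8)
    iso = IsochronousBuffer(size=4)
    for t in range(12):               # continuous acquisition, one point per tick
        local.write(("sample", t))
        if t % 4 == 3:                # every fourth tick: copy a block, then emit
            iso.fill(local.read_block(block_size=4))
            print(iso.emit())
```

In this sketch the acquisition rate and the transfer period are integer multiples of one another; the payload_size field shows how the scheme degrades gracefully when they are not.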
Brief Description of the Drawings

[0018] A better understanding of the present invention can be obtained when the following detailed description of the preferred embodiment is considered in conjunction with the following drawings, in which:

[0019] Figure 1 illustrates a system configured to map a continuous time-based DAQ operation to an isochronous data transfer channel of a network, according to one embodiment;

[0020] Figure 2 illustrates a mapping of a continuous time-based DAQ to a buffer, according to one embodiment;

[0021] Figure 3 illustrates an exemplary timeline of a buffer, according to one embodiment;

[0022] Figure 4 is a flowchart diagram illustrating one embodiment of a method for configuring the mapping of a continuous time-based DAQ to an isochronous data transfer channel of a network;

[0023] Figure 5 is a flowchart diagram illustrating an embodiment of a method for mapping of a continuous time-based DAQ to an isochronous data transfer channel of a network;

[0024] Figure 6 illustrates a distributed measurement and control system, according to one embodiment; and

[0025] Figure 7 illustrates another distributed measurement and control system, according to one embodiment.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the appended claims.

Detailed Description of the Invention

Incorporation by Reference:

[0026] The following references are hereby incorporated by reference in their entirety as though fully and completely set forth herein:

[0027] U.S. Patent Application Serial No. 14/072,297, titled "Lossless Time Based Data Acquisition and Control in a Distributed System", filed November 11, 2013.

[0028] U.S. Patent Application No. 13/244,572, titled "Configuring Buffers with Timing Information," filed on September 25, 2011.

Terms

[0029] The following is a glossary of terms used in the present application:

[0030] Memory Medium - Any of various types of memory devices or storage devices. The term "memory medium" is intended to include an installation medium, e.g., a CD-ROM, floppy disks, or tape device; a computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; a non-volatile memory such as a Flash, magnetic media, e.g., a hard drive, or optical storage; registers, or other similar types of memory elements, etc. The memory medium may comprise other types of memory as well or combinations thereof. In addition, the memory medium may be located in a first computer in which the programs are executed, or may be located in a second different computer which connects to the first computer over a network, such as the Internet. In the latter instance, the second computer may provide program instructions to the first computer for execution.
The term "memory medium" may include two or more memory mediums which may reside in different locations, e.g., in different computers that are connected over a network.[0031] Carrier Medium - a memory medium as described above, as well as a physical transmission medium, such as a bus, network, and/or other physical transmission medium that conveys signals such as electrical, electromagnetic, or digital signals.[0032] Programmable Hardware Element - includes various hardware devices comprising multiple programmable function blocks connected via a programmable interconnect. Examples include FPGAs (Field Programmable Gate Arrays), PLDs (Programmable Logic Devices), FPOAs (Field Programmable Object Arrays), and CPLDs (Complex PLDs). The programmable function blocks may range from fine grained (combinatorial logic or look up tables) to coarse grained (arithmetic logic units or processor cores). A programmable hardware element may also be referred to as "reconfigurable logic".[0033] Software Program - the term "software program" is intended to have the full breadth of its ordinary meaning, and includes any type of program instructions, code, script and/or data, or combinations thereof, that may be stored in a memory medium and executed by a processor. Exemplary software programs include programs written in text-based programming languages, such as C, C++, PASCAL, FORTRAN, COBOL, JAVA, assembly language, etc.; graphical programs (programs written in graphical programming languages); assembly language programs; programs that have been compiled to machine language; scripts; and other types of executable software. A software program may comprise two or more software programs that interoperate in some manner. Note that various embodiments described herein may be implemented by a computer or software program. A software program may be stored as program instructions on a memory medium.[0034] Hardware Configuration Program - a program, e.g., a netlist or bit file, that can be used to program or configure a programmable hardware element. [0035] Program - the term "program" is intended to have the full breadth of its ordinary meaning. The term "program" includes 1) a software program which may be stored in a memory and is executable by a processor or 2) a hardware configuration program useable for configuring a programmable hardware element.[0036] Graphical Program - A program comprising a plurality of interconnected nodes or icons, wherein the plurality of interconnected nodes or icons visually indicate functionality of the program. The interconnected nodes or icons are graphical source code for the program. Graphical function nodes may also be referred to as blocks.[0037] The following provides examples of various aspects of graphical programs. The following examples and discussion are not intended to limit the above definition of graphical program, but rather provide examples of what the term "graphical program" encompasses:[0038] The nodes in a graphical program may be connected in one or more of a data flow, control flow, and/or execution flow format. 
The nodes may also be connected in a "signal flow" format, which is a subset of data flow.

[0039] Exemplary graphical program development environments which may be used to create graphical programs include LabVIEW®, DasyLab™, DIAdem™, and MATRIXx/SystemBuild™ from National Instruments, Simulink® from The MathWorks, VEE™ from Agilent, WiT™ from Coreco, Vision Program Manager™ from PPT Vision, SoftWIRE™ from Measurement Computing, Sanscript™ from Northwoods Software, Khoros™ from Khoral Research, SnapMaster™ from HEM Data, VisSim™ from Visual Solutions, ObjectBench™ by SES (Scientific and Engineering Software), and VisiDAQ™ from Advantech, among others.

[0040] The term "graphical program" includes models or block diagrams created in graphical modeling environments, wherein the model or block diagram comprises interconnected blocks (i.e., nodes) or icons that visually indicate operation of the model or block diagram; exemplary graphical modeling environments include Simulink®, SystemBuild™, VisSim™, Hypersignal Block Diagram™, etc.

[0041] A graphical program may be represented in the memory of the computer system as data structures and/or program instructions. The graphical program, e.g., these data structures and/or program instructions, may be compiled or interpreted to produce machine language that accomplishes the desired method or process as shown in the graphical program.

[0042] Input data to a graphical program may be received from any of various sources, such as from a device, unit under test, a process being measured or controlled, another computer program, a database, or from a file. Also, a user may input data to a graphical program or virtual instrument using a graphical user interface, e.g., a front panel.

[0043] A graphical program may optionally have a GUI associated with the graphical program. In this case, the plurality of interconnected blocks or nodes are often referred to as the block diagram portion of the graphical program.

[0044] Computer System - any of various types of computing or processing systems, including a personal computer system (PC), mainframe computer system, workstation, network appliance, Internet appliance, personal digital assistant (PDA), television system, grid computing system, or other device or combinations of devices. In general, the term "computer system" can be broadly defined to encompass any device (or combination of devices) having at least one processor that executes instructions from a memory medium.

[0045] Measurement Device - includes instruments, data acquisition devices, smart sensors, and any of various types of devices that are configured to acquire and/or store data. A measurement device may also optionally be further configured to analyze or process the acquired or stored data. Examples of a measurement device include an instrument, such as a traditional stand-alone "box" instrument, a computer-based instrument (instrument on a card) or external instrument, a data acquisition card, a device external to a computer that operates similarly to a data acquisition card, a smart sensor, one or more DAQ or measurement cards or modules in a chassis, an image acquisition device, such as an image acquisition (or machine vision) card (also called a video capture board) or smart camera, a motion control device, a robot having machine vision, and other similar types of devices.
Exemplary "stand-alone" instruments include oscilloscopes, multimeters, signal analyzers, arbitrary waveform generators, spectroscopes, and similar measurement, test, or automation instruments.[0046] A measurement device may be further configured to perform control functions, e.g., in response to analysis of the acquired or stored data. For example, the measurement device may send a control signal to an external system, such as a motion control system or to a sensor, in response to particular data. A measurement device may also be configured to perform automation functions, i.e., may receive and analyze data, and issue automation control signals in response.[0047] Functional Unit (or Processing Element) - refers to various elements or combinations of elements. Processing elements include, for example, circuits such as an ASIC (Application Specific Integrated Circuit), portions or circuits of individual processor cores, entire processor cores, individual processors, programmable hardware devices such as a field programmable gate array (FPGA), and/or larger portions of systems that include multiple processors, as well as any combinations thereof.[0048] Automatically - refers to an action or operation performed by a computer system (e.g., software executed by the computer system) or device (e.g., circuitry, programmable hardware elements, ASICs, etc.), without user input directly specifying or performing the action or operation. Thus the term "automatically" is in contrast to an operation being manually performed or specified by the user, where the user provides input to directly perform the operation. An automatic procedure may be initiated by input provided by the user, but the subsequent actions that are performed "automatically" are not specified by the user, i.e., are not performed "manually", where the user specifies each action to perform. For example, a user filling out an electronic form by selecting each field and providing input specifying information (e.g., by typing information, selecting check boxes, radio selections, etc.) is filling out the form manually, even though the computer system must update the form in response to the user actions. The form may be automatically filled out by the computer system where the computer system (e.g., software executing on the computer system) analyzes the fields of the form and fills in the form without any user input specifying the answers to the fields. As indicated above, the user may invoke the automatic filling of the form, but is not involved in the actual filling of the form (e.g., the user is not manually specifying answers to fields but rather they are being automatically completed). The present specification provides various examples of operations being automatically performed in response to actions the user has taken.[0049] Concurrent - refers to parallel execution or performance, where tasks, processes, or programs are performed in an at least partially overlapping manner. 
For example, concurrency may be implemented using "strong" or strict parallelism, where tasks are performed (at least partially) in parallel on respective computational elements, or using "weak" parallelism, where the tasks are performed in an interleaved manner, e.g., by time multiplexing of execution threads.

[0050] Lossless - refers to a class of data compression algorithms allowing reconstruction of the exact original data from compressed data.

[0051] Forward Error Correction (FEC) - refers to a technique for controlling errors in data transmission in which redundancy in the sender message prevents data loss due to bit errors and network reconfiguration.

[0052] Finite Acquisition - refers to an acquisition in which a set of data points is acquired at periodic intervals. Each interval is synchronized in phase and frequency amongst all nodes in a system.

[0053] Continuous Acquisition - refers to an acquisition in which data are continuously acquired once the acquisition has been started. The start time, t0, and the time between acquisitions, Δt, are synchronized amongst all nodes in a system. Additionally, the acquisition may terminate only when a command explicitly terminating the acquisition is received.

[0054] Internet Protocol (IP) - refers to the networking model and a set of protocols for communication used for networks such as the Internet.

[0055] Transmission Control Protocol (TCP) - refers to a core protocol of the internet protocol suite that provides delivery of a stream of octets between programs running on computers connected to a local area network, intranet, or the public Internet.

[0056] Ethernet - refers to a family of computer networking technologies for local area networks (LANs), as standardized in IEEE 802.3.

[0057] Local Area Network (LAN) - refers to a computer network that interconnects computers in a limited geographical area, such as an office building or office complex.

[0058] Virtual Local Area Network (VLAN) - refers to a computer network that is logically segmented on an organizational basis; in other words, segmentation is based on functions or applications rather than on a physical or geographic basis, as is the case with LANs.

[0059] Media Access Control (MAC) Layer - refers to the sub-layer of a multi-layer computer network model which provides addressing and channel access control mechanisms that enable communication between multiple network nodes that share a common medium, such as Ethernet. The MAC layer acts as an interface between the logical link control sub-layer and the network's physical (PHY) layer.

[0060] Time-Sensitive (TS) Network - refers to networks adhering to the IEEE 802.1 standard for real-time data transfer.

[0061] Time-Sensitive (TS) Packet - refers to specific packets of data routed through a TS network that contain time-sensitive data. May include packets from a non-IEEE 802.1 compliant real-time network with a VLAN tag inserted using embodiments of the present invention.

[0062] Isochronous - refers generally to events that occur regularly, or in other words, at equal time intervals.

[0063] Asynchronous - refers generally to events that occur irregularly, or in other words, at unscheduled and intermittent time intervals.

Figures 1 - 5: Systems and Methods for Mapping Time-based Data to an Isochronous Data Transfer Channel

[0064] Figure 1 illustrates a system configured to map a continuous time-based DAQ to an isochronous data transfer channel of a network, according to an embodiment of the present invention.
Figures 2 and 3 illustrate mapping the data associated with the time-based DAQ to a buffer and the transfer of the data from the buffer over time, respectively. Additionally, embodiments of methods for configuring the mapping and mapping of a time-based DAQ to an isochronous data transfer channel of a network are described below in reference to Figures 4 and 5.

[0065] As shown in Figure 1, the system 100 may include a functional unit 110, a local buffer 120, a buffer 130, and a data rate clock 150. In certain embodiments, as further described below, the system 100 may be included in, or coupled to, a network interface controller (NIC). Additionally, in some embodiments, the system 100 may be coupled to, or included in, a measurement device, such as a traditional stand-alone "box" instrument, a computer-based instrument (e.g., instrument on a card) or external instrument, a data acquisition card, a device external to a computer that operates similarly to a data acquisition card, a smart sensor, one or more DAQ or measurement cards or modules in a chassis, an image acquisition device, such as an image acquisition (or machine vision) card (also called a video capture board) or smart camera, a motion control device, a robot having machine vision, or other similar types of devices.

[0066] Accordingly, in one embodiment, the system may be included in a NIC coupled to, or included in, a measurement device. Further, the measurement device may be a distributed measurement device, e.g., part of a network, such as a time-sensitive (TS) network adhering to the IEEE 802.1 standard for real-time data transfer, or another real time network not adhering to the IEEE 802.1 standard, e.g., a real-time Ethernet network implementation such as PROFINET, which uses standards such as TCP/IP and Ethernet along with a mechanism for real time and isochronous real time communication; EtherCAT, which is an open high performance Ethernet-based fieldbus system; Ethernet/IP, which is designed for use in process control and other industrial automation applications; or Ethernet Powerlink, which is a deterministic real-time protocol for standard Ethernet, among others. Alternatively, the system may be included in a memory-mapped distributed system, such as peripheral component interconnect (PCI), PCI Express (PCIe), or CompactPCI, among others. Examples of such systems include, among others, systems based on National Instruments Corporation's CompactRIO platform and systems based on National Instruments Corporation's PXI (PCI eXtensions for Instrumentation) platform.

[0067] In one embodiment, the functional unit 110 may be coupled to the local buffer 120, the buffer 130, and the data rate clock 150. Note that, in certain embodiments, the system 100 may include one or more functional units; however, for simplicity, the functionality of the system 100 is described in terms of a single functional unit. Also note that the term functional unit may be used interchangeably with the term processing element and is meant in its broadest sense. In other words, the term functional unit, or processing element, refers to any of a variety of elements or combinations of such elements, as noted in the Terms section above.
Processing elements include, for example, circuits such as an ASIC (Application Specific Integrated Circuit), portions or circuits of individual processor cores, entire processor cores, individual processors, programmable hardware devices such as a field programmable gate array (FPGA), and/or larger portions of systems that include multiple processors, as well as any combinations thereof.

[0068] In an exemplary embodiment, the data rate clock 150 may be configured to synchronize to a global clock of a network, such as global clock 250 of network 200. In various embodiments, the system 100 may be included in or coupled to the network 200 via an Ethernet type connection, or alternatively, via a memory mapped bus, such as PCI, PCIe, or CompactPCI. Further, note that data rate clock 150 may be one of multiple data rate clocks included in system 100, e.g., data rate clock 150 may be one of a plurality of data rate clocks, each of which may be configured to synchronize to a global clock of a network, either directly or indirectly, e.g., via another clock synchronized to the global clock.

[0069] Buffer 130 may include a configurable buffer size and transfer frequency. A start time for transferring data to buffer 130 may be configured accordingly. In one embodiment, the start time for transferring the data may be configured based on the start time and data rate of the time-based DAQ. The buffer may be associated with the isochronous data transfer channel of the network 200.

[0070] Additionally, the size of the local buffer 120 may be configured. In one embodiment, the size of the local buffer 120 may be configured based on the transfer frequency of the buffer and the data rate of the time-based DAQ. Thus, the size of the local buffer 120 may be dependent on both the configuration of the buffer and the configuration of the time-based DAQ, as described in more detail in reference to Figures 2 and 3.

[0071] In one embodiment, the functional unit 110 may be configured to initiate performance of the time-based DAQ at the start time of the time-based DAQ. In some embodiments, the functional unit 110 may receive a trigger indicating the start time of the time-based DAQ. In other embodiments, the functional unit 110 may be configured to schedule the start time in accordance with a schedule provided to the functional unit 110.

[0072] The performance of the time-based DAQ may produce, or generate, data, and thus, the functional unit may be configured to transfer the data to the local buffer in response to the production or generation of data. Further, the functional unit 110 may be configured to, at the start time for transferring the data, initiate transfer of the data between the local buffer 120 and the buffer 130.

[0073] Note that in some embodiments, the data may not be transferred linearly from the time-based DAQ to the buffer. For example, in one embodiment, data may accumulate in the local buffer prior to transfer to the buffer. Further, the buffer may not immediately communicate the data transferred from the local buffer. Rather, the buffer may accumulate additional data (e.g., more than one transfer of data from the local buffer) prior to communicating the data over the isochronous data transfer channel. In other embodiments, the local buffer may accumulate data, and a subset of the data may be transferred to the buffer and immediately communicated over the isochronous data transfer channel.
In such embodiments, either the size of the local buffer or the transfer frequency of the buffer may be configured to prevent loss of data.

[0074] In an exemplary embodiment, the functional unit 110 may be further configured to perform a data integrity process which may prevent data loss during the transfer of the data between the local buffer 120 and the buffer 130. Accordingly, in one embodiment, the data integrity process may include or utilize a lossless algorithm, e.g., a data compression algorithm allowing reconstruction of the exact original data from compressed data. For example, the data integrity process may include embedding a forward error correction (FEC) code, a technique for controlling errors in data transmission in which redundancy in the sender message prevents data loss due to bit errors and network reconfiguration. Additionally, or alternatively, the data integrity process may include any other types of error correction algorithms, such as Reed-Solomon, Hamming codes, Viterbi, erasure coding, and application-level forward erasure correction, among others.

[0075] In certain embodiments, the buffer 130 may be configured to communicate the data over the isochronous data transfer channel of the network 200 over a cycle of the buffer 130 at the transfer frequency of the buffer 130. Thus, the time-based DAQ may be mapped to the isochronous data transfer channel of the network 200.

[0076] Figure 2 is an illustration of an exemplary mapping of a continuous time-based DAQ to a buffer, such as buffer 130, according to one embodiment. As illustrated in Figure 2, an iteration of the time-based DAQ may include a data transfer of data 200, which may include an associated data transfer size, Dsz. Thus, for each iteration, a transfer of data 200 to local buffer 220 may occur. Note that local buffer 220 may have similar or the same functionality as local buffer 120.

[0077] Local buffer 220 may include an associated block size, Bsz, where the block size may be a multiple of the data transfer size, Dsz. Accordingly, transferring a block of data may include one or more data transfers. In other words, a block of data may include data from multiple (one or more) time-based DAQ iterations. Additionally, local buffer 220 may have an associated size, Lsz, which may be greater than or equal to the block size, Bsz. Note, in some embodiments, the size of the local buffer may be equivalent to the data transfer size, Dsz. Further, in some embodiments, the size of the local buffer, Lsz, may be based on the size of the block, e.g., block size, Bsz, as well as the output frequency of buffer 230 and the data rate of the time-based DAQ.

[0078] Once data 200 have been transferred to local buffer 220, they may then be transferred to buffer 230. In some embodiments, FEC may be embedded in the data to increase resilience of the data such that re-transmission is not required. FEC is known to prevent data loss due to bit errors, EMI, and network reconfiguration.
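As an illustrative sketch of the simplest possible FEC embedding, a single XOR parity word per group of equal-length data words allows a receiver to reconstruct any one lost data word without re-transmission. The toy parity code and function names below are assumptions for illustration only; a practical implementation would use a stronger code such as the Reed-Solomon or Hamming codes named above.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def fec_encode(words):
    """Append one parity word computed across all equal-length data words."""
    parity = words[0]
    for w in words[1:]:
        parity = xor_bytes(parity, w)
    return words + [parity]

def fec_decode(words, lost_index):
    """Reconstruct the single data word at lost_index (received as None)."""
    length = len(next(w for w in words if w is not None))
    recovered = bytes(length)                  # all-zero accumulator
    for i, w in enumerate(words):
        if i != lost_index:
            recovered = xor_bytes(recovered, w)
    data = list(words[:-1])                    # drop the parity word
    data[lost_index] = recovered
    return data

if __name__ == "__main__":
    sent = fec_encode([b"\x01\x02", b"\x03\x04", b"\x05\x06"])
    sent[1] = None                             # simulate loss of one word in transit
    print(fec_decode(sent, lost_index=1))      # recovers b"\x03\x04"
```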
Note that data 200 may be transferred from local buffer 220 to buffer 230 as one or more blocks of data. The size, Sz, of buffer 230 may be configured, along with a transfer frequency, by a user, a functional unit coupled to the local buffer, a functional unit coupled to buffer 230, either locally or otherwise, or a computer system coupled to buffer 230, either locally or otherwise. Once data 200 have been transferred to buffer 230, buffer 230 may provide data 200, which may be included in one of the one or more blocks of data, over an isochronous data transfer channel of a network. The term network is meant to broadly include any of an Ethernet network, a real time network (e.g., an IEEE 802.1 compliant real time network or a non-IEEE 802.1 compliant real time network), or a memory-mapped bus, among others. Thus, since the isochronous channel may provide guaranteed delivery and FEC may remove the possibility of data loss due to bit errors, EMI, and network reconfiguration, data 200 may be provided over the network in a lossless manner.

[0079] Figure 3 illustrates an exemplary timeline of a buffer, such as buffer 130, according to one embodiment of the invention. In such an embodiment, at the start time of a time-based DAQ, t0, data may be produced or generated at a data rate, r, as indicated by each upward arrow, and written, or copied, into a local buffer, such as local buffer 120 described above in reference to Figure 1. The start time, s, of the buffer may be calculated using t0, the start time of the time-based DAQ, the size of the buffer, Sz, the size of the data, Dsz, acquired each iteration of the time-based DAQ, and r, the data rate. Note that "copy time" indicates the time necessary to transfer data from the local buffer to the buffer every period, p, of the buffer. Note further that the buffer may be configured to transfer data, e.g., one or more blocks of data, periodically, e.g., at a transfer frequency. Thus, in certain embodiments, the start time, s, of the buffer may be represented mathematically by equation (1):

s = t0 + (Sz * r) / Dsz + Copy Time    (1)

[0080] Further, the buffer may be configured with a combination of size and frequency to ensure that the local buffer may always have data to output to the buffer at the configured transfer frequency. Also note that by increasing either the local buffer size or the size of the buffer, embodiments where the data rate and transfer frequency of the buffer are not integer multiples of one another may be accommodated or implemented. Additionally, in certain embodiments, the size of the data transferred to the buffer, or the data payload size, may be provided along with the data. Said another way, the buffer may transmit a packet of data equivalent to the size of the buffer when the buffer is not full. Thus the buffer may transmit a packet of data that includes empty, or filler, data. In such instances, the data payload size may be included in the packet of data transmitted so that the empty, or filler, data may be ignored by a consumer of the data transmitted by the buffer. In other words, where the data rate is not an integer multiple of the transfer frequency of the buffer, the size of the local buffer and buffer may be configured to allow for lossless data transfer and, in some embodiments, the data payload size may be included in the data transferred by the buffer.
For example, in one embodiment, where the start time, s, of the buffer may be represented by equation (1), the size of the local buffer, Lsz, may be represented mathematically by equation (2):

Lsz = (Sz * (p + Copy Time)) / r    (2)

[0081] Alternatively, in certain embodiments, the size of the buffer, Sz, may be determined based on the size of the local buffer, Lsz, as represented mathematically by equation (3):

Sz = (Lsz * r) / (p + Copy Time)    (3)

[0082] Further, in yet another embodiment, the frequency, or period, p, of the buffer may be determined based on the size of both the local buffer, Lsz, and the size, Sz, of the buffer, as represented mathematically by equation (4):

p = (Lsz * r) / Sz - Copy Time    (4)
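For a worked numeric check, the following fragment evaluates equations (1) through (4) for one set of invented values, under the reading that r denotes the time between acquisitions (the Δt above), which keeps the equations dimensionally consistent. None of the values is taken from the specification.

```python
# Numeric illustration of equations (1)-(4): r is read as the time between
# acquisitions (s), Dsz as bytes per acquisition, Sz as the buffer size,
# Lsz as the local buffer size, and p as the buffer period. All values
# are invented for illustration.
t0 = 0.0           # acquisition start time (s)
r = 0.001          # time between data points (s)
Dsz = 16           # bytes acquired per data point
Sz = 1024          # buffer size (bytes)
Lsz = 8192         # local buffer size (bytes)
copy_time = 0.002  # copy time per buffer period (s)

s = t0 + (Sz * r) / Dsz + copy_time    # equation (1): buffer start time
p = (Lsz * r) / Sz - copy_time         # equation (4): buffer period

# Equations (2) and (3) are the corresponding algebraic rearrangements:
assert abs(Lsz - (Sz * (p + copy_time)) / r) < 1e-6   # equation (2)
assert abs(Sz - (Lsz * r) / (p + copy_time)) < 1e-6   # equation (3)

print(f"buffer start time s = {s:.3f} s, buffer period p = {p:.3f} s")
# -> buffer start time s = 0.066 s, buffer period p = 0.006 s
```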
[0083] Figure 4 is a flowchart diagram illustrating one embodiment of a method for configuring the mapping of a time-based DAQ to an isochronous data transfer channel of a network. The method shown in Figure 4 may be used in conjunction with any of the systems or devices described above in reference to Figures 1 - 3, among other systems and devices. In various embodiments, some of the method elements shown may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. As shown, the method may operate as follows.

[0084] First, in 402, a local buffer may be configured for receipt of data from the time-based DAQ. Note, the local buffer may be configured by a functional unit, such as functional unit 110, or by another functional unit or computer system coupled to the local buffer, e.g., either via a memory mapped bus or a network connection, among other communicative couplings. In 404, a first buffer (e.g., buffer 130 or 230) may be configured for receipt of data from the local buffer. The first buffer may be associated with the isochronous data transfer channel.

[0085] In 406, a functional unit may be configured to initiate performance of the continuous time-based data acquisition. The (continuous) time-based data acquisition may be performed according to a clock, and the data from the time-based data acquisition may be stored in the local buffer. In certain embodiments, the clock (e.g., a data rate clock) may be one of a plurality of clocks. In such embodiments, a first clock may be synchronized to a global clock and each additional clock of the plurality of clocks may synchronize to the global clock via the first clock. In other embodiments, the clock may synchronize to a local clock and the local clock may synchronize to the global clock.

[0086] In 408, the functional unit may be configured to initiate continuous transfer of the data from the local buffer to the first buffer. The transfer from the local buffer to the first buffer may be performed according to the clock, e.g., the first clock.

[0087] In further embodiments, a start time for transferring data to the buffer may be configured. In some embodiments, the start time may be based on the size of the buffer. Additionally, the start time may be based on a start time and data rate of the time-based DAQ. The data rate may be in accordance with the clock. The start time of the time-based DAQ may be in phase with the global clock of the network. Further, the start time may be based on the amount (e.g., the size of a block) of the data transferred.

[0088] In some embodiments, a size of a local buffer may be configured. The size of the local buffer may be based on the amount of data transferred and the data rate of the time-based DAQ. Additionally, in some embodiments, the size of the local buffer may be further based on the transfer frequency of the buffer. Further, in a particular embodiment, the size of the local buffer may be further based on a copy time, i.e., the amount of time required to read the data from the local buffer and write the data to the buffer. Further, in certain embodiments, the local buffer may be configured as a first-in-first-out (FIFO) buffer.

[0089] Note, in certain embodiments, the functional unit may be configured by a user of the system. In other embodiments, the functional unit may be configured by another functional unit or computer system coupled to the functional unit. Additionally, in some embodiments, the start time of the time-based DAQ may be received by the functional unit as a trigger indicating the start time of the time-based DAQ. In other embodiments, the functional unit may be configured to schedule the start time in accordance with a schedule provided to the functional unit.

[0090] In certain embodiments, transferring of the one or more blocks of data between the local buffer and the buffer may include the functional unit performing a data integrity process during the transfer. The data integrity process may prevent data loss. In one embodiment, the data integrity process may include embedding a forward error correction code.

[0091] In one embodiment of the method, the functional unit may also be configured to configure the buffer size and transfer frequency of the buffer. Additionally, in certain embodiments, the functional unit may be configured to configure the clock and the start time for transferring the data between the local buffer and the buffer. Further, the functional unit may be configured to configure the size of the local buffer and to configure the local buffer for transferring the data.

[0092] In another embodiment of the method, another, or an additional, functional unit may be configured to configure the buffer size and transfer frequency of the buffer. Additionally, in certain embodiments, the additional functional unit may be configured to configure the clock and the start time for transferring the data between the local buffer and the buffer. Further, the additional functional unit may be configured to configure the size of the local buffer and to configure the local buffer for transferring the data.

[0093] In yet another embodiment of the method, a plurality of time-based DAQs may be configured. In such embodiments, the plurality of time-based DAQs may include the time-based DAQ and one or more additional time-based DAQs. Hence, for each of the one or more additional time-based DAQs, a start time, a clock, and a size of a local buffer may be configured, respectively. Accordingly, each local buffer associated with each time-based DAQ may be configured for transfer of the data produced, or generated, by the respective time-based DAQ associated with the local buffer. Thus, the plurality of time-based DAQs may have a corresponding plurality of local buffers and clocks.

[0094] In some embodiments that include a plurality of time-based DAQs, the functional unit may be configured to, for each of the plurality of time-based DAQs, initiate performance of the time-based DAQ at a respective start time of the time-based DAQ, and transfer respective data to the respective local buffer in response to the performance of the time-based DAQ.
Additionally, the functional unit may be configured to, for each of the plurality of time-based DAQs, initiate transfer of respective data between the respective local buffer and the buffer at the start time for transferring the data, and repeat the transferring and the initiating transfer one or more times, thereby transferring the data from the respective local buffer to the buffer. Accordingly, in certain embodiments, the functional unit may, for each of the plurality of time-based DAQs, perform the actions for which it is configured.

[0095] Further, in certain embodiments, the buffer may be configured to communicate the respective data for each time-based DAQ over the isochronous data transfer channel of the network over at least one cycle at the transfer frequency of the buffer, thereby mapping the plurality of time-based DAQs to the isochronous data transfer channel of the network. In other words, the plurality of local buffers may be multiplexed to a single buffer for communication over the isochronous data transfer channel of the network. Accordingly, in certain embodiments, the buffer may communicate the respective data for each time-based DAQ over the isochronous data transfer channel of the network over at least one cycle at the transfer frequency of the buffer, thereby mapping the plurality of time-based DAQs to the isochronous data transfer channel of the network.

[0096] Additionally, in some embodiments that include the plurality of time-based DAQs, the method may further include performing: configuring buffer size and transfer frequency, and configuring a start time for transferring data, for each of one or more additional buffers. Additionally, each of the one or more additional buffers may be associated with a corresponding functional unit of one or more additional functional units. Thus, the buffer and the additional one or more buffers may compose a plurality of buffers, and the functional unit and the one or more additional functional units may compose a plurality of functional units. Accordingly, the plurality of time-based DAQs with the corresponding plurality of local buffers may be mapped to the plurality of buffers. Hence, each of the plurality of buffers may be configured to communicate respective data over a corresponding isochronous data transfer channel of the network at a respective transfer frequency of the respective buffer, thereby mapping the plurality of time-based DAQs to a plurality of isochronous data transfer channels of the network.
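To illustrate the multiplexing of a plurality of local buffers into a single buffer, consider the following Python sketch. The round-robin draining policy and all names are assumptions introduced for illustration; no particular policy is prescribed above.

```python
# Hypothetical sketch: several per-DAQ FIFOs are drained round-robin into
# one outgoing payload, each entry tagged with its channel index so a
# consumer can demultiplex the aggregated stream.
from collections import deque

def multiplex(local_buffers, buffer_size):
    payload = []
    while len(payload) < buffer_size:
        progressed = False
        for chan, fifo in enumerate(local_buffers):
            if fifo and len(payload) < buffer_size:
                payload.append((chan, fifo.popleft()))
                progressed = True
        if not progressed:   # every local buffer drained this cycle
            break
    return payload

if __name__ == "__main__":
    fast = deque(("fast", i) for i in range(6))  # higher-rate acquisition
    slow = deque(("slow", i) for i in range(2))  # lower-rate acquisition
    print(multiplex([fast, slow], buffer_size=4))
    # -> [(0, ('fast', 0)), (1, ('slow', 0)), (0, ('fast', 1)), (1, ('slow', 1))]
```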
[0097] Note that in various embodiments the mapping may be linear, e.g., one-to-one, or nonlinear. In other words, a time-based DAQ may be mapped to a corresponding buffer, e.g., a linear or one-to-one mapping. Alternatively, the mapping may be non-linear, e.g., there may not be a corresponding buffer for each time-based DAQ. For example, a first time-based DAQ may be mapped to a first buffer whereas a second and a third time-based DAQ may be mapped to a second buffer, and so forth.

[0098] Additionally, in such embodiments, for each respective buffer of the plurality of buffers, a corresponding functional unit of the plurality of functional units may be configured to, for at least one time-based DAQ of the plurality of time-based DAQs, initiate continuous performance of the at least one time-based DAQ at a respective start time of the at least one time-based DAQ, transfer the data to the respective local buffer in response to the performance of the at least one time-based DAQ, initiate transfer of the data between the respective local buffer and the buffer at the start time for transferring the data, and repeat said transferring and said initiating transfer one or more times, thereby transferring the respective data between the respective local buffer and the respective buffer. Further, each of the plurality of buffers may be configured to communicate the respective data from each of the plurality of buffers over a corresponding isochronous data transfer channel of the network at the respective transfer frequency of the respective buffer, thereby mapping the plurality of time-based DAQs to a plurality of isochronous data transfer channels of the network. Accordingly, in certain embodiments, the method may further include, for each respective buffer of the plurality of buffers, the corresponding functional unit of the plurality of functional units performing, for at least one time-based DAQ of the plurality of time-based DAQs, the above actions for which it is configured.

[0099] In certain embodiments, the method may further include configuration and performance of time-based control. In such embodiments, the functional unit may include a first functional unit, the local buffer may include a first local buffer, the buffer may be or include a first buffer, the clock may be or include a first clock, and the isochronous data transfer channel may be or include a first isochronous data transfer channel. The method may further include configuring a buffer size of a second buffer for the time-based control. The second buffer may be associated with a second isochronous data transfer channel of the network.

[00100] In addition, the method may further include configuring a transfer frequency of the second buffer and configuring a second data rate clock, associated with the time-based control operation, to synchronize to the global clock of the network. Also, the method may include configuring a second start time for transferring data from the second buffer. The second start time for transferring data may be based on the buffer size of the second buffer, a start time of the time-based control operation, a data rate of the time-based control operation in accordance with the second clock, and the size of the data transferred. In certain embodiments, the start time of the time-based control operation may be in phase with the global clock of the network. Further, a size of a second local buffer may be configured, e.g., based on the size of the data transferred, the transfer frequency of the second buffer, and the data rate of the time-based control operation.
The second local buffer may also be configured for transfer of the data from the second local buffer in response to continuous performance of the time-based control operation.

[00101] Accordingly, a second functional unit may be configured to initiate the continuous performance of the time-based control operation at the start time of the time-based control operation, transfer the data from the second local buffer in response to the continuous performance of the time-based control operation, initiate transfer of the data between the second local buffer and the second buffer at the second start time for transferring the data, and repeat the transferring and initiating transfer one or more times in an iterative manner, thereby transferring the data between the second local buffer and the second buffer. Hence, the second buffer may be configured to communicate the data over a second isochronous data transfer channel of the network over at least one cycle at the transfer frequency of the second buffer, thereby mapping the time-based control operation to the second isochronous data transfer channel of the network. Note that, in some embodiments, the second functional unit may perform the above actions for which it is configured.

[00102] In an exemplary embodiment that includes both a time-based DAQ operation and a time-based control operation, the functional unit may be configured to implement or operate as both the first and second functional units discussed above, and similarly, the buffer may be configured to implement or operate as both the first and second buffers. In other words, the method may be performed using a single functional unit and a single buffer. Thus, the functional unit may include the first functional unit and the second functional unit, and the buffer may include the first buffer and the second buffer. Accordingly, a single isochronous data transfer channel may be configured as both the first and second isochronous data transfer channels. Thus, the isochronous data transfer channel may include the first isochronous data transfer channel and the second isochronous data transfer channel of the network.

[00103] Note that, in certain embodiments, the first buffer may be a first partition of the buffer and the second buffer may be a second partition of the buffer. Accordingly, the time-based DAQ may be mapped to the first partition of the buffer and the time-based control operation may be mapped to the second partition of the buffer. In such embodiments, the buffer size and transfer frequency of the buffer may be configured to accommodate both operations.

[00104] Figure 5 is a flowchart diagram illustrating one embodiment of a method for mapping of a time-based DAQ to an isochronous data transfer channel of a network. The method shown in Figure 5 may be used in conjunction with any of the methods, systems, or devices described above in reference to Figures 1 - 4, among other systems and devices. In various embodiments, the method described in Figure 4 may be used to configure a system to perform embodiments of the method. Alternatively, other methods may be used to prepare the system to perform embodiments. In various embodiments, some of the method elements shown may be performed concurrently, in a different order than shown, or may be omitted. Additional method elements may also be performed as desired. As shown, the method may operate as follows.

[00105] In 502, a local buffer may be configured for receipt of the data from a time-based DAQ (e.g., a continuous time-based DAQ).
In some embodiments, a size of the local buffer may be specified. In an exemplary embodiment, the size of the local buffer may be determined as described above.

[00106] In 504, a buffer (e.g., a first buffer), such as buffer 130 described above, may be configured for receipt of data from the local buffer. The buffer may be coupled to an isochronous data transfer channel. In an exemplary embodiment, the buffer may be configured using embodiments of the methods described above.

[00107] In 506, the time-based DAQ may be performed according to a clock (e.g., a first clock, or a data rate clock). The data from the time-based DAQ may be stored in the local buffer.

[00108] In 508, the data may be transferred (e.g., continuously) from the local buffer to the buffer according to the clock. The local buffer may be a first-in-first-out (FIFO) buffer and may continue to transfer data to the buffer so long as the time-based DAQ is being performed (i.e., generating data).

[00109] In 510, the data may be isochronously provided from the buffer to the isochronous data channel. Note that, as described above, the rate at which data are provided to the buffer may not match, or be an integer multiple of, the rate at which the buffer provides the data to the isochronous data channel.
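The flow of steps 502 through 510 can be condensed into a short, self-contained sketch in which tick counters stand in for the data rate clock and the isochronous cycle; all constants and names below are illustrative assumptions.

```python
# Condensed sketch of the Figure 5 flow (502-510).
from collections import deque

local = deque()                      # 502: local FIFO for acquired data
BUF_SIZE, PERIOD = 4, 4              # 504: buffer size and transfer period
buffer, packets = [], []

for tick in range(16):               # 506: acquisition driven by the clock
    local.append(tick)
    while local:                     # 508: continuous local -> buffer copy
        buffer.append(local.popleft())
    if tick % PERIOD == PERIOD - 1:  # 510: one packet per isochronous cycle
        packets.append(buffer[:BUF_SIZE])
        del buffer[:BUF_SIZE]

print(packets)                       # four packets of four samples each
```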
Figures 6-7: Further Embodiments of Distributed Control and Measurement Systems

[00110] Figures 6 and 7 illustrate further embodiments of the above described systems and methods. Note that these embodiments, although described in detail, are exemplary only, and numerous variations and modifications will become apparent to those skilled in the art.

[00111] Figure 6 illustrates a distributed control and measurement system 600 according to one exemplary embodiment. As illustrated, the distributed measurement and control system 600 may include one or more computer systems, such as computer systems 610 and 620. The system 600 may also include one or more distributed measurement and control devices, such as devices 640, 650, and 660. Note that one of the advantages of the systems and methods for time-based data transfer described herein is the ability of the same device, e.g., any of devices 640 - 660, to be capable of exchanging control, e.g., via a time-based control operation, and measurement data, e.g., via a time-based DAQ operation, simultaneously or concurrently. Thus, each channel on a device, such as any of devices 640 - 660, may be configured independently based on operation (time-based control (dashed lines) or time-based DAQ (solid lines)), or function. Further, data may be mapped onto one or more buffers, with or without a data integrity process such as FEC, based on the degree of lossless behavior desired. Additionally, the one or more buffers may be configured to transfer data over isochronous transfer channels at the same or disparate frequencies or rates, such as any of rates r1 - r5. Accordingly, all channels may use a common notion of time, e.g., data rate clocks may all be synchronized to a global clock of the network and start times may all be in phase with the global clock, and may transmit data at disparate frequencies, or rates, without compromising determinism.

[00112] Further, in certain embodiments, the computer systems 610 - 620 may be coupled to the distributed measurement and control devices 640 - 660 via a network switch, such as network switch 630. Thus, in some embodiments, computer systems 610 - 620 and distributed measurement and control devices 640 - 660 may include a network interface controller (NIC). Note, in certain embodiments, the network switch 630 may include a network switch adhering to the IEEE 802.1 standard for real-time data transfer. In other embodiments, the network switch 630 may include a network switch that does not adhere to the IEEE 802.1 standard for real-time data transfer, such as a network switch for PROFINET, which uses standards such as TCP/IP and Ethernet along with a mechanism for real time and isochronous real time communication; EtherCAT, which is an open high performance Ethernet-based fieldbus system; Ethernet/IP, which is designed for use in process control and other industrial automation applications; or Ethernet Powerlink, which is a deterministic real-time protocol for standard Ethernet, among others. In yet other embodiments, it is envisioned that network switch 630 may include a memory controller and the network may be a memory mapped network such as PCI, PCIe, or CompactPCI, among others.

[00113] An exemplary implementation of the above described systems and methods is illustrated as exemplary device 660. Device 660 may include multiple time-based DAQ operations and a time-based control operation, and a local buffer may be associated with each operation. Additionally, device 660 includes multiple, e.g., one or more, clocks (e.g., data rate clocks), a respective clock associated with a respective time-based operation. Device 660 may also include a plurality of functional units, each associated with a respective isochronous channel, e.g., an isochronous data transfer channel, and thus, a respective buffer with a respective rate of transfer r1 - r4. Thus, a functional unit, for a respective buffer, may be configured to, for at least one time-based DAQ, initiate continuous performance of the at least one time-based DAQ of the plurality of time-based DAQs at the respective start time, transfer respective data to the respective local buffer in response to the continuous performance of the at least one time-based DAQ, initiate transfer of respective data from a respective local buffer to the respective buffer at a respective start time for transferring the respective data, and repeat the transferring and initiating transfer one or more times in an iterative manner, thereby transferring the respective data between the respective local buffer and the respective buffer. Further, the time-based control operation may be associated with a functional unit which may be configured with embodiments of the invention as previously described.

[00114] Figure 7 illustrates a distributed control and measurement system according to another exemplary embodiment. A system 700 may include a computer system 710 coupled to a network 720. In one embodiment, network 720 may adhere to the IEEE 802.1 standard for real-time data transfer. Additionally, the system may include nodes 730 and 740, also coupled to the network 720. Each node may include or implement an embodiment of the techniques disclosed herein. Nodes 730 and 740 may include clock generation circuits which may be synchronized using signals created by time stamp units (TSUs) in network interfaces. The nodes may also include a NIC coupled to a TSU, a physical layer (PHY) coupled to a second TSU, as well as an endpoint application, all coupled to the clock generation circuit via a local bus. The nodes may each also include a functional unit, a local buffer, and a buffer, all coupled to the local bus.
[00114] Figure 7 illustrates a distributed control and measurement system according to another exemplary embodiment. A system 700 may include a computer system 710 coupled to a network 720. In one embodiment, network 720 may adhere to the IEEE 802.1 standard for real-time data transfer. Additionally, the system may include nodes 730 and 740, also coupled to the network 720. Each node may include or implement an embodiment of the techniques disclosed herein. Nodes 730 and 740 may include clock generation circuits which may be synchronized using signals created by time stamp units (TSUs) in network interfaces. The nodes may also include a NIC coupled to a TSU, a physical layer (PHY) coupled to a second TSU, as well as an endpoint application, all coupled to the clock generation circuit via a local bus. The nodes may each also include a functional unit, a local buffer, and a buffer, all coupled to the local bus. The clock generation circuit may include one or more data rate clocks, each synchronized to a global clock of the network 720 via either the TSU of the NIC or the TSU of the PHY. Thus, a start time sent from computer system 710 to each node may be used to create coordinated sample events that drive physical acquisition/generation, e.g., to initiate time-based DAQ and control operations. For example, once the functional unit of node 730 initiates continuous performance of a time-based DAQ operation and transfers data to the local buffer, the functional unit may initiate transfer of the data to the buffer, and the buffer may communicate the data over the network 720 via an isochronous data transfer channel. In such a manner, data may be transferred between nodes 730 and 740 via buffers.

[00115] Although the embodiments above have been described in considerable detail, numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
A clock selection circuit for selecting one of a plurality of clocks as an output clock. When the selection circuit switches between two of the plurality of clocks for output, the currently output clock is removed from the output. The removal of the currently output clock is performed synchronously to the currently selected clock. The newly selected clock is then coupled to the output. Coupling of the newly selected clock is performed synchronously to the newly selected clock.
CLAIMS

1. A clock selection circuit, the circuit switching between a plurality of possible clocks, a switch made from an existing clock to a new frequency clock in synchronization with both the existing and new frequency clocks, the clock selection circuit comprising: a first clock input to receive the existing clock as input; a new frequency clock input to receive the new frequency clock; first synchronization logic associated with the first clock to enable/disable output of the existing clock; second synchronization logic associated with the new frequency clock to enable/disable output of the new frequency clock; and the first synchronization and second synchronization logic cooperating to disable output of the existing clock synchronously to the existing clock and to enable output of the new frequency clock synchronously to the new frequency clock.

2. The clock selection circuit of claim 1, wherein the first synchronization logic comprises: a first set of one or more cascaded flip-flops, each of the flip-flops clocked by the existing clock; and the second synchronization logic comprises: a second set of one or more cascaded flip-flops, each of the flip-flops clocked by the new frequency clock.

3. The clock selection circuit of claim 2 further comprising: power control logic to control clocking of the first and second sets of flip-flops such that the first and second sets of flip-flops are not clocked unless the first clock is being disabled and the new clock is being enabled.

4. The clock selection circuit, as per claim 1, wherein the clock selection circuit further comprises logic responsive to a clock valid input to prevent the second synchronization logic from enabling the new clock when the new clock is not valid.

5. The clock selection circuit, as per claim 1, wherein the clock selection circuit is responsive to a reset input to place the clock selection circuit in a known state regardless of a state of other inputs.

6. The clock selection circuit, as per claim 1, wherein the existing clock is a base frequency clock and the new frequency clock is generated by a PLL frequency multiplier operating on the base frequency clock.

7. The clock selection circuit, as per claim 1, wherein the clock selection circuit is responsive to a stop input to stop all clock output.

8. A clock selection circuit for outputting an input clock signal selected from among a plurality of input clocks comprising: enable logic responsive to a clock select input to generate, for each input clock, an associated select signal, each select signal indicative of whether or not its associated input clock is selected to be output; for each select signal, synchronization logic responsive to the select signal to generate an enable signal synchronously to the select signal's associated input clock, the enable signal indicative of whether or not the select signal's associated clock is to be output; and output logic responsive to the enable signals to output the selected input clock.

9. The clock selection circuit of claim 8, wherein the synchronization logic for each select signal comprises one or more cascaded flip-flops, each of the flip-flops clocked by the select signal's associated clock, a first one of the flip-flops receiving the select signal as an input.

10. The clock selection circuit, as per claim 9, wherein the enable signal is an output of a last one of the flip-flops.
11. The clock selection circuit of claim 9 further comprising: power control logic responsive to the enable signals to control clocking of the flip-flops.

12. The clock selection circuit of claim 8 wherein the enable logic is additionally responsive to the enable signals when generating the select signals.

13. The clock selection circuit, as per claim 8, wherein the clock selection circuit further comprises logic responsive to a clock valid input to prevent enabling of the selected input clock when the selected input clock is not valid.

14. The clock selection circuit, as per claim 8, wherein the clock selection circuit is responsive to a reset input to place the clock selection circuit in a known state regardless of a state of other inputs.

15. The clock selection circuit, as per claim 8, wherein at least one of the plurality of input clocks is generated by a PLL frequency multiplier.

16. The clock selection circuit, as per claim 8, wherein the clock selection circuit is responsive to a stop input to stop all clock output.

17. A clock selection circuit for switching from a first clock signal coupled to an output to a second clock signal coupled to the output, the circuit comprising: enable logic responsive to a clock select signal to generate a first select signal that indicates the first clock is to be decoupled from the output and a second select signal that indicates the second clock signal is to be coupled to the output; first synchronization logic responsive to the first select signal to generate a first enable signal synchronously to the first clock, the first enable signal indicating that the first clock is to be decoupled from the output; second synchronization logic responsive to the second select signal to generate a second enable signal synchronously to the second clock, the second enable signal indicating the second clock signal is to be coupled to the output; output logic responsive to the first enable signal to decouple the first clock signal from the output and responsive to the second enable signal to couple the second clock signal to the output; and wherein the first enable signal is generated before the second enable signal is generated.

18. The clock selection circuit of claim 17 wherein the enable logic is additionally responsive to the first and second enable signals when generating the first and second select signals.

19. The clock selection circuit of claim 17, wherein the first synchronization logic comprises: a first set of one or more cascaded flip-flops, each of the flip-flops clocked by the first clock signal, a first flip-flop of the first set receiving the first select signal; and the second synchronization logic comprises: a second set of one or more cascaded flip-flops, each of the flip-flops clocked by the second clock signal, a first flip-flop of the second set receiving the second select signal.

20. The clock selection circuit of claim 19 further comprising: power control logic responsive to the first and second enable signals to control clocking of the first and second sets of flip-flops such that the first and second sets of flip-flops are not clocked.

21. The clock selection circuit, as per claim 17, wherein the clock selection circuit further comprises logic responsive to a clock valid input to prevent generation of the second enable signal when the second clock is not valid.
22. The clock selection circuit, as per claim 17, wherein the clock selection circuit is responsive to a reset input to place the clock selection circuit in a known state regardless of a state of other inputs.

23. The clock selection circuit, as per claim 17, wherein the second clock is generated by a PLL frequency multiplier.

24. The clock selection circuit, as per claim 17, wherein the clock selection circuit is responsive to a stop input to stop all clock output.

25. A method of switching from a first clock signal coupled to an output of a clock selection circuit to a second clock signal coupled to the output of the clock selection circuit, the method comprising: receiving an indication to switch from outputting the first clock signal to the second clock signal; removing the first clock from output synchronously to the first clock; and coupling the second clock signal to the output synchronously to the second clock signal.
GLITCH FREE CLOCK SELECT SWITCH

BACKGROUND OF THE INVENTION

The invention relates to the field of clock selection, and in particular to glitch-free clock selection. Digital electronic systems often rely upon a clock signal to synchronize and control the operation of the various circuit elements (e.g., gates, flip-flops, latches, etc.). In many present-day digital electronic systems, such as microprocessor-based devices, there exist multiple clock sources and a concomitant need to switch between them. When switching between clocks, it is preferable to avoid glitches and intermediate clock behavior on the clock output of the selection circuitry. Figures 1a and 1b help illustrate the occurrence of a glitch when switching between clock sources. Figure 1a shows a typical circuit for switching between clock sources. As shown in figure 1a, two clock signals, CLOCK1 and CLOCK2, are provided as inputs to a switching circuit 100, such as a multiplexor. Multiplexor 100 also receives a Select signal, which switches the output signal, CLOCK OUT, between the input signals CLOCK1 and CLOCK2. For instance, when the Select signal is high, CLOCK1 is output on CLOCK OUT and when the Select signal is low, CLOCK2 is output on CLOCK OUT. Figure 1b illustrates a timing relationship between the Select signal, CLOCK1 and CLOCK2 that results in a glitch on CLOCK OUT. As shown, the Select signal is initially high, resulting in CLOCK1 being output on CLOCK OUT. The Select signal then goes low while CLOCK1 is high and CLOCK2 is low. This results in a shortened pulse 102, i.e., a glitch, output on CLOCK OUT. Generally, a glitch signal causes errors during execution of a microprocessor and other components because a glitch may erratically clock subsequent flip-flops, latches, etc. Therefore, there is a need for a switching circuit that enables switching of the clock source, dynamically and cleanly, without any perturbation on the logic driven by the clock.

SUMMARY OF THE INVENTION

In one aspect of the present invention, a clock selection circuit for switching between a plurality of possible clocks is provided. A switch made from an existing clock to a new frequency clock is made in synchronization with both the existing and the new frequency clock. The clock selection circuit comprises a first clock input to receive the existing clock as input and a new frequency clock input to receive the new frequency clock. The circuit also comprises first synchronization logic associated with the first clock to enable/disable output of the existing clock and second synchronization logic associated with the new frequency clock to enable/disable output of the new frequency clock. The first synchronization and second synchronization logic cooperate to disable output of the existing clock synchronously to the existing clock and to enable output of the new frequency clock synchronously to the new frequency clock. In another aspect of the present invention, a clock selection circuit for outputting an input clock signal selected from among a plurality of input clocks is provided. The circuit comprises enable logic responsive to a clock select input to generate, for each input clock, an associated select signal. Each select signal is indicative of whether or not its associated input clock is selected to be output.
For each select signal, there is synchronization logic responsive to the select signal to generate an enable signal synchronously to the select signal's associated input clock. The enable signal is indicative of whether or not the select signal's associated clock is to be output. Output logic is responsive to the enable signals to output the selected input clock. In another aspect of the present invention, a clock selection circuit for switching from a first clock signal coupled to an output to a second clock signal coupled to the output is provided. The circuit comprises enable logic responsive to a clock select signal to generate a first select signal that indicates the first clock is to be decoupled from the output and a second select signal that indicates the second clock signal is to be coupled to the output. First synchronization logic is responsive to the first select signal to generate a first enable signal synchronously to the first clock. The first enable signal indicates that the first clock is to be decoupled from the output. Second synchronization logic is responsive to the second select signal to generate a second enable signal synchronously to the second clock. The second enable signal indicates the second clock signal is to be coupled to the output. The first enable signal is generated before the second enable signal is generated. Output logic is responsive to the first enable signal to decouple the first clock signal from the output and responsive to the second enable signal to couple the second clock signal to the output.

In another aspect of the present invention, a method of switching from a first clock signal coupled to an output of a clock selection circuit to a second clock signal coupled to the output of the clock selection circuit is provided. An indication to switch from outputting the first clock signal to the second clock signal is received. The first clock is then removed from output synchronously to the first clock. The second clock signal is then coupled to the output synchronously to the second clock signal.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1a illustrates a typical circuit for switching between clock sources;

Figure 1b illustrates a timing relationship between the Select signal, CLOCK1 and CLOCK2 that results in a glitch on CLOCK OUT for the circuit of figure 1a;

Figure 2a illustrates a clock selection circuit according to the principles of the present invention;

Figure 2b illustrates a timing diagram for the circuit of figure 2a;

Figure 2c illustrates an embodiment of the clock selection circuit of figure 2a that allows for a synchronous reset of the synchronization logic;

Figure 2d illustrates an embodiment of the clock selection circuit of figure 2a that allows for an asynchronous reset of the synchronization logic;

Figure 2e illustrates the clock selection circuit of figure 2a extended to select between three clock sources;

Figure 3 illustrates the use of a clock selection circuit to select between a base clock and a higher frequency clock created from a phase-locked loop (PLL) based frequency multiplier;

Figure 4a illustrates a clock selection circuit 400 according to the principles of the present invention which is particularly suited to select between a base clock and a higher frequency clock created from a phase-locked loop (PLL) based frequency multiplier;
Figures 4b-4c illustrate timing diagrams for the selection circuit of figure 4a;

Figure 4d illustrates an embodiment of the clock selection circuit of figure 4a without the internal enable signals connected to reset inputs of the flip-flops;

Figure 4e illustrates another embodiment of the clock selection circuit of figure 4a without the internal enable signals connected to reset inputs of the flip-flops;

Figure 5a illustrates an arrangement in which a clock selection circuit according to the principles of the present invention is used to clock a processor that issues the clock selection circuit's control inputs;

Figure 5b illustrates a clock selection circuit according to the principles of the present invention which is particularly suited for control signals that change synchronously with the selected clock; and

Figures 5c-5f illustrate timing diagrams for the selection circuit of figure 5b.

DETAILED DESCRIPTION OF THE INVENTION

Figure 2a illustrates a clock selection circuit 200 according to the principles of the present invention. Selection circuit 200 generally comprises enable logic 203, synchronization logic 204a clocked by CLOCK1, synchronization logic 204b clocked by CLOCK2 and output logic 202. Enable logic 203 generates internal select signals SEL1 and SEL2 based upon the Select input and the current state of the clock selection, i.e., whether CLOCK1 is output or not and whether CLOCK2 is output or not. The internal selection signals indicate which clock, CLOCK1 or CLOCK2, is to be output on CLOCK OUT. Internal select signal SEL1 is input to synchronization logic 204a, while internal select signal SEL2 is input to synchronization logic 204b. Synchronization logic 204a generates an internal enable signal EN1 synchronously to CLOCK1 based upon internal select signal SEL1. Likewise, synchronization logic 204b generates an internal enable signal EN2 synchronously to CLOCK2 based upon internal select signal SEL2. Internal enable signals EN1 and EN2 are input to output logic 202 in addition to CLOCK1 and CLOCK2. The states of enable signals EN1 and EN2 determine which clock, CLOCK1 or CLOCK2, is output by output logic 202. Enable signals EN1 and EN2 are also fed back to enable logic 203 via inverters 212 and 214, respectively. As shown, enable logic 203 comprises AND gates 218 and 216, and inverter 220. The output of AND gate 218 is SEL1 and the output of AND gate 216 is SEL2. One of the inputs of AND gate 218 is connected directly to the Select input, while one of the inputs of AND gate 216 is connected to the Select input via inverter 220. The other input of AND gate 216 is connected to EN1 via inverter 212. Similarly, the other input of AND gate 218 is connected to EN2 via inverter 214. Synchronization logic 204a preferably comprises a plurality of cascaded memory elements or flip-flops, such as D flip-flops. The associated input clock, i.e., CLOCK1, clocks each of the flip-flops, for example, on the negative edge of the input clock. The first flip-flop of the cascade has its input connected to SEL1 and the output of the last flip-flop of the cascade is EN1. In a similar fashion, synchronization logic 204b preferably comprises a plurality of cascaded flip-flops, such as D flip-flops. The associated input clock, i.e., CLOCK2, clocks each of the flip-flops, for example, on the negative edge of the input clock.
The first flip-flop of the cascade has its input connected to SEL2 and the output of the last flip-flop of the cascade is EN2. While it is preferred to use a plurality of cascaded flip-flops, it is within the spirit of the present invention for synchronization logic 204a or 204b to be formed with a single flip-flop. The use of a plurality of cascaded flip-flops, however, is preferred as this decreases the possibility of a metastable condition. Output logic 202 comprises an OR gate 206 with one input connected to the output of an AND gate 208 and the other input connected to the output of a second AND gate 210. AND gate 208 has one of its inputs connected to CLOCK1 and the other input connected to EN1. Similarly, AND gate 210 has one of its inputs connected to CLOCK2 and the other input connected to EN2. The output of OR gate 206 is taken as CLOCK OUT. Discussion of the operation of selection circuit 200 for selecting between CLOCK1 and CLOCK2 will be made in conjunction with the timing diagram in figure 2b and is made starting from a state in which CLOCK1 is output on CLOCK OUT. Further, discussion of the operation of selection circuit 200 is made with respect to active high logic; it is, however, within the spirit of the present invention to use active low logic. Initially CLOCK1 is output on CLOCK OUT, EN1 is high, EN2 is low and Select is high. CLOCK2 is chosen as the output clock by switching Select from high to low. When Select is switched low, this causes SEL1 to go low. Flip-flops 204a are clocked by CLOCK1, causing the low input signal SEL1 to propagate to the output of synchronization flip-flops 204a, i.e., to EN1, synchronously with CLOCK1. The signal EN1 going low disables the output of CLOCK1 on CLOCK OUT. The output EN1 going low also causes SEL2 to go high. Flip-flops 204b are clocked by CLOCK2, causing the high input signal SEL2 to propagate to the output of synchronization flip-flops 204b, i.e., to EN2, synchronously with CLOCK2. The output EN2 going high enables the output of CLOCK2 on CLOCK OUT. Therefore, as can be seen, the disabling of CLOCK1 on CLOCK OUT is performed synchronously with CLOCK1 and the enabling of CLOCK2 on CLOCK OUT is performed synchronously with CLOCK2, thereby preventing the occurrence of glitches during the switching of the clock output. As will be illustrated below, the selection circuit according to the principles of the present invention can be expanded to include specific needs of the application, e.g., more control lines, power saving features, reduction of synchronization latency, etc., and to satisfy initialization requirements of synchronization logic 204. A behavioral sketch of the basic break-before-make switching sequence is given below.
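The following is a hypothetical time-step simulation, not the actual netlist (ideal gates, one sample per time unit, arbitrary periods and switch instant), contrasting the plain multiplexor of figure 1a with the synchronized enable scheme of circuit 200:

def clock(t, period):
    # Ideal square wave: high for the first half of each period.
    return 1 if (t % period) < period // 2 else 0

P1, P2, T_SWITCH, T_END = 8, 6, 9, 48   # CLOCK1/CLOCK2 periods; switch at t = 9

def pulse_widths(wave):
    # Widths of the high pulses in a sampled waveform.
    widths, run = [], 0
    for v in wave + [0]:
        if v:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    return widths

naive, synced = [], []
en1, en2 = [1, 1], [0, 0]        # two-flop synchronizer state; CLOCK1 enabled
prev1 = prev2 = 1
for t in range(T_END):
    c1, c2 = clock(t, P1), clock(t, P2)
    select = 1 if t < T_SWITCH else 0        # 1 selects CLOCK1, 0 selects CLOCK2
    naive.append(c1 if select else c2)       # the plain multiplexor of figure 1a

    sel1 = select and not en2[-1]            # enable logic: cross-coupled selects
    sel2 = (not select) and not en1[-1]
    if prev1 == 1 and c1 == 0:               # negative edge of CLOCK1
        en1 = [sel1, en1[0]]                 # shift SEL1 through the cascade
    if prev2 == 1 and c2 == 0:               # negative edge of CLOCK2
        en2 = [sel2, en2[0]]
    prev1, prev2 = c1, c2
    synced.append(int((c1 and en1[-1]) or (c2 and en2[-1])))  # output logic 202

print("naive shortest high pulse :", min(pulse_widths(naive)))   # 1 sample: a glitch
print("synced shortest high pulse:", min(pulse_widths(synced)))  # a full half-period

Running the sketch reports a shortest naive pulse of one sample, the shortened pulse of figure 1b, while the synchronized output first disables CLOCK1 on its own falling edge and only then enables CLOCK2 on its own falling edge, so no output pulse is ever shorter than a half-period.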
Generally, clock selection circuit 200 is used to select a clock signal that clocks various components in a system. Select is typically initialized to a default value by the system after power up to provide an initial clock output on CLOCK OUT. For example, Select is designed to default high. When Select defaults high on power up, SEL2 is initialized low. This low signal then propagates through synchronization logic 204b to EN2 to make EN2 take on a defined (low) value. After EN2 becomes low, then SEL1, which is high, can propagate through synchronization logic 204a to EN1. EN1 going high enables the output of CLOCK1 on CLOCK OUT. In some applications an output on CLOCK OUT may be needed for the system to initialize Select. Or, in some applications the time needed to first propagate SEL2 through synchronization logic 204b and then to propagate SEL1 through synchronization logic 204a may be too long. An ability to reset selection circuit 200 is desirable in these applications. In addition, a reset of selection circuit 200 may also be generally desirable so as to allow selection circuit 200 to be placed into a known state during normal operation. Figure 2c illustrates an embodiment of clock selection circuit 200 of figure 2a that allows for a synchronous reset of synchronization logic 204. By synchronous reset, it is meant that an nReset input changes state synchronously to the clock output on CLOCK OUT. Selection circuit 200 operates as described in relation to figures 2a and 2b, except that reset logic 222 causes the synchronization logic 204 to generate a particular state of the enable signals EN1 and EN2, regardless of the state of Select and the currently selected clock. This results in a particular clock being output when the nReset input is activated. As shown, the nReset signal is supplied to reset logic 222 via an inverter, making nReset active low. However, the inverter may not be needed depending on the reset logic 222. For instance, if nReset is desired to be active high, it can be provided straight to reset logic 222. In the embodiment illustrated in figure 2c, reset logic 222 comprises an OR gate 224, inverter 226 and an AND gate 228. OR gate 224 receives nReset and the output of AND gate 218 as inputs and its output is SEL1 to synchronization logic 204a. AND gate 228 receives nReset via inverter 226 and the output of AND gate 216 as inputs and its output is SEL2 to synchronization logic 204b. For the embodiment of figure 2c, nReset in a high state has no effect on which clock is enabled on CLOCK OUT. When nReset is placed in a low state, however, SEL1 is forced high by reset logic 222, while SEL2 is forced low by reset logic 222. This causes CLOCK1 to be enabled on CLOCK OUT, regardless of the state of Select. As will be apparent to one of skill in the art, reset logic can readily be designed to cause CLOCK2 to be output instead of CLOCK1. For instance, interchanging the outputs of OR gate 224 and AND gate 228 such that the output of OR gate 224 is SEL2 and the output of AND gate 228 is SEL1, while removing the inverter through which nReset is supplied, provides for reset logic which results in CLOCK2 output when nReset is in a high state. Figure 2d illustrates an embodiment of clock selection circuit 200 of figure 2a that allows for an asynchronous reset of synchronization logic 204. By asynchronous reset, it is meant that an nReset input changes state asynchronously from the clock output on CLOCK OUT. Selection circuit 200 operates as described in relation to figures 2a and 2b, except that an nReset input causes the synchronization logic 204 to generate a particular state of the enable signals EN1 and EN2, regardless of the state of Select and the currently selected clock. This results in a particular clock being output when nReset is activated. The synchronization logic corresponding to the particular clock to be output has a set input connected to nReset, while the other synchronization logic has reset inputs connected to nReset. As shown in figure 2d, CLOCK1 is the clock to be enabled on reset and, consequently, its synchronization logic 204a has a set input connected to nReset via an inverter. Synchronization logic 204b has a reset input connected to nReset via an inverter.
A set input forces the output of its corresponding synchronization logic to go high when it is high. Conversely, a reset input forces the output of its corresponding synchronization logic to go low when it is high. When the set or reset inputs are low, the synchronization logic operates as normal. Therefore, when nReset is low, CLOCK1 is enabled on CLOCK OUT, while nReset high does not affect selection circuit 200. A clock selection circuit according to the principles of the present invention can be extended to select between more than two clocks. Figure 2e illustrates clock selection circuit 200 extended to select between three clock sources, CLOCK1, CLOCK2, and CLOCK3. Selection circuit 200 is similar to the circuit of figure 2a and generally comprises enable logic 203, synchronization logic 204a clocked by CLOCK1, synchronization logic 204b clocked by CLOCK2 and output logic 202. Selection circuit 200 has been extended by the addition of synchronization logic 204c clocked by CLOCK3 and the addition of logic in enable logic 203 to generate a third internal select signal SEL3. In addition, to provide selection between three clocks, the Select input is two Select lines, Select 1 and Select 2. Thus, the extended selection circuit 200 of figure 2e operates similarly to the two-clock implementation. Enable logic 203 generates internal select signals SEL1, SEL2, and SEL3 based upon the Select input and the current state of the clock selection. Each of these signals is input to the corresponding synchronization logic 204a, 204b, and 204c, respectively. As with the two-clock implementation, synchronization logic 204a, 204b, and 204c generates enable signals EN1, EN2, and EN3, respectively. The enable signals are generated based on the internal select signals such that an enabled clock is disabled synchronously to itself and then the clock to be enabled is enabled synchronously to itself. The enable signals are provided to output logic 202 to control which clock is output and are fed back to enable logic 203 to indicate the current state of the clock selection. In the three-clock implementation shown, both of the Select inputs in a low state causes CLOCK1 to be enabled on CLOCK OUT. A low on both Select inputs results in SEL1 being high, while SEL2 and SEL3 are low. Depending upon which clock is already active, EN2 or EN3 goes low synchronously to its respective clock, disabling that clock. For instance, if CLOCK2 is being output, EN2 goes low synchronously to CLOCK2 (in this case EN3 would already be low so it does not change), causing CLOCK2 to be disabled. Likewise, if CLOCK3 is being output, EN3 goes low synchronously to CLOCK3 (in this case EN2 would already be low so it does not change), causing CLOCK3 to be disabled. After the previously enabled clock is disabled, EN1 goes high synchronously to CLOCK1 (it was low previously), causing CLOCK1 to be enabled on CLOCK OUT. A low on Select 1 and a high on Select 2 causes CLOCK2 to be enabled on CLOCK OUT. A low on Select 1 and a high on Select 2 results in SEL2 being high, while SEL1 and SEL3 are low. Depending upon which clock is already active, EN1 or EN3 goes low synchronously to its respective clock, disabling that clock. For instance, if CLOCK1 is being output, EN1 goes low synchronously to CLOCK1 (in this case EN3 would already be low so it does not change), causing CLOCK1 to be disabled.
Likewise, if CLOCK3 is being output, EN3 goes low synchronously to CLOCK3 (in this case EN1 would already be low so it does not change), causing CLOCK3 to be disabled. After the previously enabled clock is disabled, EN2 goes high synchronously to CLOCK2 (it was low previously), causing CLOCK2 to be enabled on CLOCK OUT. A high on Select 1 and a low on Select 2 causes CLOCK3 to be enabled on CLOCK OUT. A high on Select 1 and a low on Select 2 results in SEL3 being high, while SEL1 and SEL2 are low. Depending upon which clock is already active, EN1 or EN2 goes low synchronously to its respective clock, disabling that clock. For instance, if CLOCK1 is being output, EN1 goes low synchronously to CLOCK1 (in this case EN2 would already be low so it does not change), causing CLOCK1 to be disabled. Likewise, if CLOCK2 is being output, EN2 goes low synchronously to CLOCK2 (in this case EN1 would already be low so it does not change), causing CLOCK2 to be disabled. After the previously enabled clock is disabled, EN3 goes high synchronously to CLOCK3 (it was low previously), causing CLOCK3 to be enabled on CLOCK OUT. Lastly, a high on Select 1 and a high on Select 2 causes all clocks to be disabled, driving CLOCK OUT low. A high on Select 1 and a high on Select 2 results in SEL1, SEL2, and SEL3 going low. In turn, any clock that was enabled will be disabled synchronously to itself and CLOCK OUT will be driven low. One exemplary application for a clock selection circuit according to the present invention is the selection between a base clock and a higher frequency clock created from a phase-locked loop (PLL) based frequency multiplier. This is generally illustrated in figure 3. As shown, a base clock signal CLOCK IN is provided (as CLOCK1) to a clock switch and synchronization circuit 302 according to the present invention. CLOCK IN is also provided to a PLL frequency multiplier 300, which multiplies the frequency of CLOCK IN to derive a clock signal CLOCK2 having a higher frequency than CLOCK1. The second clock signal CLOCK2 is also provided to clock switch and synchronization circuit 302. A Select line is used to select between either CLOCK1 or CLOCK2 as the output on CLOCK OUT. For instance, when Select is high, CLOCK1 is output as CLOCK OUT (i.e., the PLL frequency multiplier 300 is bypassed). When Select is low, however, CLOCK2 is output as CLOCK OUT. Clock selection circuit 302 also has nReset, StopCK and CLOCK VALID as control inputs. The control input nReset is an active low input that resets clock selection circuit 302. The StopCK input is used to stop the clock on the CLOCK OUT output. When StopCK is high, CLOCK OUT is stopped. The CLOCK VALID input is used to prevent switching to the PLL clock, CLOCK2, during the time when the PLL has not achieved lock. When the PLL has achieved lock, CLOCK VALID goes high, allowing a switch to CLOCK2. The effect of these control inputs on the internal select signals is sketched below.
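As a sketch of how these control inputs interact, the internal select signals of the enable logic described next (circuit 400 of figure 4a) can be restated as plain boolean functions; this is an illustrative model assuming ideal gates, not the gate netlist itself:

def internal_selects(select, n_reset, stop_ck, clock_valid, en1, en2):
    # Boolean restatement of enable logic 404 (figure 4a), assuming ideal gates.
    not_stopped = not (n_reset and stop_ck)          # AND gate 428, inverted
    sel1 = (not n_reset) or (select and (not en2) and not_stopped)
    sel2 = (not select) and not_stopped and clock_valid and (not en1)
    return sel1, sel2

# PLL not yet locked (CLOCK VALID low): SEL2 stays low even though Select is low.
assert internal_selects(select=False, n_reset=True, stop_ck=False,
                        clock_valid=False, en1=True, en2=False) == (False, False)
# nReset low (EN1 forced high): SEL1 is forced high regardless of the other inputs.
assert internal_selects(select=False, n_reset=False, stop_ck=False,
                        clock_valid=True, en1=True, en2=False) == (True, False)
# StopCK high: both selects driven low, stopping CLOCK OUT.
assert internal_selects(select=True, n_reset=True, stop_ck=True,
                        clock_valid=True, en1=False, en2=False) == (False, False)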
Figure 4a illustrates a clock selection circuit 400 according to the principles of the present invention which is particularly suited to select between a base clock and a higher frequency clock created from a phase-locked loop (PLL) based frequency multiplier. Selection circuit 400 generally comprises enable logic 404, synchronization logic 408 clocked by CLOCK1, synchronization logic 406 clocked by CLOCK2 and output logic 402. Enable logic 404 generates internal select signals SEL1 and SEL2 based upon input signals nReset, Select, StopCK and CLOCK VALID and the current state of the clock selection, i.e., whether CLOCK1 is output or not and whether CLOCK2 is output or not. Internal select signal SEL1 is input to synchronization logic 408, while internal select signal SEL2 is input to synchronization logic 406. Synchronization logic 408 generates an internal enable signal EN1 synchronously to CLOCK1 based upon internal select signal SEL1. Likewise, synchronization logic 406 generates an internal enable signal EN2 synchronously to CLOCK2 based upon internal select signal SEL2. Internal enable signals EN1 and EN2 are input to output logic 402 in addition to CLOCK1 and CLOCK2. The states of enable signals EN1 and EN2 determine which clock, CLOCK1 or CLOCK2, is output by output logic 402. Enable signals EN1 and EN2 are also fed back to enable logic 404 via inverters 412 and 414, respectively. As shown, enable logic 404 comprises AND gates 424, 428 and 432, inverters 416, 420 and 426, NOR gate 430 and OR gate 422. The output of OR gate 422 is SEL1. One of the inputs of OR gate 422 is the output of inverter 420, which has the nReset signal as its input. The other input of OR gate 422 is the output of AND gate 424. AND gate 424 receives as inputs the Select signal, EN2 via inverter 412 and the output of AND gate 428 via inverter 426. AND gate 428 receives the nReset signal and StopCK signal as inputs. The output of AND gate 432 is SEL2. AND gate 432 has the CLOCK VALID signal, EN1 via inverter 416 and the output of NOR gate 430 as inputs. NOR gate 430 receives the Select signal and the output of AND gate 428 as inputs. Similar to the embodiment of figure 2a, synchronization logic 408 preferably comprises a plurality of cascaded flip-flops, such as D flip-flops. Each of the flip-flops is clocked by the associated input clock, i.e., CLOCK1, on the negative edge of the input clock as a result of an inverter on the clock input. In addition, each of the flip-flops has its reset input connected to EN2. The first flip-flop of the cascade receives SEL1 as its input. Synchronization logic 408 also comprises OR gate 434 to facilitate the reset function of nReset. The output of inverter 420 is one input of OR gate 434. The output of the last flip-flop of the cascade is the other input to OR gate 434. The output of OR gate 434 is EN1. Similarly, synchronization logic 406 preferably comprises a plurality of cascaded flip-flops, such as D flip-flops. Each of the flip-flops is clocked by the associated input clock, i.e., CLOCK2, on the negative edge of the input clock as a result of inverter 438. In addition, each of the flip-flops has its reset input connected to EN1. The first flip-flop of the cascade receives SEL2 as its input and the output of the last flip-flop of the cascade is EN2. As previously described above in conjunction with the embodiment of figure 2a, the use of a plurality of flip-flops rather than a single flip-flop reduces the possibility of a metastable condition. It is preferable to apply EN1 and EN2 to the reset inputs of the opposite set of flip-flops as shown to cause the opposite set of flip-flops to be in a reset state as described below. This ensures that the opposite internal enable signal is low when one of the internal enable signals is high. Output logic 402 comprises an OR gate 440 with one input connected to the output of an AND gate 442 and the other input connected to the output of a second AND gate 446. AND gate 442 has one of its inputs connected to CLOCK1 and the other input connected to EN1.
Similarly, AND gate 446 has one of its inputs connected to CLOCK2 and the other input connected to EN2. The output of OR gate 440 is taken as CLOCK OUT. Discussion of the operation of selection circuit 400 for selecting between CLOCK1 and CLOCK2 will be made in conjunction with the timing diagram in figure 4b and is made starting from a state in which CLOCK1 is output on CLOCK OUT. Further, discussion of the operation of selection circuit 400 is made with respect to active high logic; it is, however, within the spirit of the present invention to use active low logic. It should be noted that selection circuit 400 provides for the ability of the inputs to change asynchronously with respect to the selected clock. This is because, before becoming fully operative on the output, a change to any of the inputs passes through synchronization flip-flops 406 and 408. In the case of CLOCK1 being output on CLOCK OUT, Select and nReset are high, while StopCK is low. This results in internal selection signal SEL1 being high, while SEL2 is low. Internal enable signal EN1 is consequently high, which holds synchronization flip-flops 406 in a reset state, ensuring internal enable signal EN2 is maintained in a low state. Because EN1 is high and EN2 is low, CLOCK1 is output from output logic 402. When CLOCK2 is to be selected for output on CLOCK OUT, Select is switched low, which causes SEL1 to switch low. As long as CLOCK VALID is high, indicating PLL lock, switching Select low results in SEL2 going high. At this point, EN1 still holds flip-flops 406 in a reset state, preventing SEL2 from propagating to EN2. Flip-flops 408, however, are not held in a reset state because EN2 is low. Therefore, SEL1 is propagated through flip-flops 408. Flip-flops 408 are clocked by the negative edge of CLOCK1. This results in EN1 synchronously disabling the CLOCK1 output on CLOCK OUT by going low after a falling edge, but prior to a rising edge, of CLOCK1. This synchronous disabling of CLOCK1 on CLOCK OUT prevents a glitch output. Internal enable signal EN1 going low removes flip-flops 406 from a reset state. Therefore, SEL2 is propagated through flip-flops 406. Flip-flops 406 are clocked by the negative edge of CLOCK2. This results in EN2 synchronously enabling the CLOCK2 output on CLOCK OUT by going high after a falling edge, but prior to a rising edge, of CLOCK2. This synchronous enabling of CLOCK2 on CLOCK OUT prevents a glitch output. Further, internal enable signal EN2 going high causes flip-flops 408 to enter a reset state, which maintains EN1 low. As previously described, operation of the selection circuit also depends upon the inputs CLOCK VALID, nReset and StopCK. CLOCK VALID is a signal indicating the clock input CLOCK2 is good or valid and that switching can proceed. In the present embodiment, when the PLL has not achieved lock and, therefore, CLOCK2 is not valid, CLOCK VALID is low, causing SEL2 to be low. This prevents CLOCK2 from being output even if Select is low. Thus, CLOCK VALID prevents the switching to CLOCK2 while the PLL is not in lock (i.e., while CLOCK2 is not valid). A similar signal could exist for CLOCK1, or any other clock to be switched. StopCK stops the output on CLOCK OUT and nReset places selection circuit 400 in a reset state. When StopCK goes high, both SEL1 and SEL2 go low, thereby causing both EN1 and EN2 to be low, which stops the output on CLOCK OUT, as shown in figure 4c.
When nReset goes low, EN1 and SEL1 are forced high, thereby forcing EN2 and SEL2 low. This results in CLOCK1 being output on CLOCK OUT. As would be apparent to one of skill in the art, arrangements in which the internal enable signals, EN1 and EN2, are not applied to the reset inputs of the flip-flops are possible. This is illustrated in figures 4d and 4e. As can be seen, the embodiment of figure 4d is the same as the embodiment of figure 4a, except the internal enable signals, EN1 and EN2, are not connected to the reset inputs of the opposite flip-flops. In the embodiment of figure 4e, EN1 and EN2 are not connected to the resets of the opposite flip-flops. In this embodiment, however, nReset is connected to the set inputs of flip-flops 408 via an inverter. Similarly, nReset is connected to the reset inputs of flip-flops 406 via an inverter 450. OR gates 434 and 422 are eliminated, with the output of AND gate 424 as SEL1 going directly to the first flip-flop of flip-flops 408. In this embodiment, nReset going low causes EN1 to go high because of the set inputs, while EN2 goes low because of the reset inputs. Another embodiment of the present invention is designed for control signals (i.e., Select, StopCK, and StopClockout) that change synchronously to the selected clock. This occurs, for example, when a clock selection circuit is used to clock a processor that issues the control signals, as shown in figure 5a. A processor 501 is clocked by the CLOCK OUT signal of a clock selection circuit 500 designed according to the principles of the present invention. Some of the control inputs of clock selection circuit 500, i.e., Select and nReset, are provided to selection circuit 500 by processor 501. StopCK is generated as a combination of outputs from processor 501, peripheral logic 503 and system logic 505. Because CLOCK OUT clocks processor 501, Select, nReset and StopCK change synchronously to whichever clock, CLOCK1 or CLOCK2, is selected for output on CLOCK OUT. As illustrated in figure 5b, selection circuit 500 generally comprises enable logic 502, synchronization logic 504 clocked by CLOCK1, synchronization logic 506 clocked by CLOCK2, output logic 508 and power control logic 510. Enable logic 502 generates internal select signals SEL1 and SEL2 based upon input signals nReset, Select, and StopCK. Internal select signal SEL1 is input to synchronization logic 504, while internal select signal SEL2 is input to synchronization logic 506. Synchronization logic 504 generates an internal enable signal EN1 synchronously to CLOCK1 based upon internal select signal SEL1. Likewise, synchronization logic 506 generates an internal enable signal EN2 synchronously to CLOCK2 based upon internal select signal SEL2. Internal enable signals EN1 and EN2 are input to output logic 508 in addition to CLOCK1, CLOCK2 and StopClockout. The states of enable signals EN1 and EN2 determine which clock, CLOCK1 or CLOCK2, is output by output logic 508. Enable signals EN1 and EN2 are also input to power control logic 510, in addition to CLOCK1 and CLOCK2. Power control logic 510, as described more fully below, controls the clocking of synchronization logic 504 and 506 based on the states of EN1 and EN2. As shown, enable logic 502 comprises OR gates 512, 522 and 520, inverter 516, NAND gate 514, and AND gate 518. The output of NAND gate 514 is SEL1. One of the inputs of NAND gate 514 is the Select signal.
The other input of NAND gate 514 is the output of inverter 516, which has the output of AND gate 518 as its input. AND gate 518 receives the nReset signal and StopCK signal as inputs. The output of OR gate 520 is SEL2. OR gate 520 receives the Select signal and the output of AND gate 518 as inputs. The signal nReset is also provided to one of the inputs of OR gate 512. The other input of OR gate 512 is the output of NAND gate 514, i.e., SEL1. The output of OR gate 512 is provided to power control logic 510 and synchronization logic 504 in order to enable the function of nReset. Likewise, the signal nReset is provided to one of the inputs of OR gate 522. The other input of OR gate 522 is the output of OR gate 520, i.e., SEL2. The output of OR gate 522 is also provided to power control logic 510 in order to enable the function of nReset. Synchronization logic 504 preferably comprises a plurality of cascaded flip-flops, such as D flip-flops. Each of the flip-flops is clocked by the associated input clock, i.e., CLOCK1, on the positive edge of the input clock. The first flip-flop of the cascade receives SEL1 as its input. In addition, SEL1 is applied to the set input of each of the flip-flops. Synchronization logic 504 also comprises AND gate 526 and OR gate 524. The last flip-flop of the cascade has its output connected to AND gate 526, whose other input is the output of OR gate 512. The output of AND gate 526 is input to OR gate 524. The other input of OR gate 524 is SEL1. The output of OR gate 524 is EN1. Likewise, synchronization logic 506 preferably comprises a plurality of cascaded flip-flops, such as D flip-flops. Each of the flip-flops is clocked by the associated input clock, i.e., CLOCK2, on the positive edge of the input clock. The first flip-flop of the cascade receives SEL2 as its input. In addition, SEL2 is applied to the set input of each of the flip-flops. Synchronization logic 506 also comprises AND gate 530 and OR gate 528. The last flip-flop of the cascade has its output connected to AND gate 530, whose other input is the output of OR gate 522. The output of AND gate 530 is input to OR gate 528. The other input of OR gate 528 is SEL2. The output of OR gate 528 is EN2. It should be noted that, similar to the other embodiments, the use of a plurality of flip-flops rather than a single flip-flop reduces the possibility of a metastable condition; however, use of a single flip-flop is possible. Output logic 508 comprises an AND gate 548 with one input connected to the output of an OR gate 544 and the other input connected to the output of a second OR gate 546. OR gate 544 has one of its inputs connected to CLOCK1 and the other input connected to EN1. Similarly, OR gate 546 has one of its inputs connected to CLOCK2 and the other input connected to EN2. The output of AND gate 548 is input to OR gate 550. The other input of OR gate 550 is the signal StopClockout. The output of OR gate 550 is taken as CLOCK OUT. Power control circuit 510 comprises NAND gate 532, AND gates 536 and 538, and OR gates 540 and 542. NAND gate 532 receives EN1 and EN2 as inputs. The output of NAND gate 532 is input to AND gate 536. AND gate 536 also receives the output of OR gate 512 as an input. The output of AND gate 536 is one of the inputs to OR gate 540. The other input of OR gate 540 is CLOCK1. Each clock input of the flip-flops of synchronization logic 504 receives the output of OR gate 540. The output of NAND gate 532 is also input to AND gate 538.
AND gate 538 also receives the output of OR gate 522 as an input. The output of AND gate 538 is one of the inputs to OR gate 542. The other input of OR gate 542 is CLOCK2. Each clock input of the flip-flops of synchronization logic 506 receives the output of OR gate 542. Discussion of the operation of selection circuit 500 for selecting between CLOCK1 and CLOCK2 will be made in conjunction with the timing diagram in figure 5c and is made starting from a state in which CLOCK1 is output on CLOCK OUT. In the case of CLOCK1 being output on CLOCK OUT, Select and nReset are high, while StopCK and StopClockout are low. This results in internal selection signal SEL1 being low, while SEL2 is high. Internal enable signal EN1 is consequently low, while enable signal EN2 is consequently high. Because EN1 is low and EN2 is high, CLOCK1 is output on CLOCK OUT from output logic 508. As previously described, the enable signals EN1 and EN2 are also input to power control logic 510, in addition to CLOCK1 and CLOCK2. Power control logic 510 controls the clocking of synchronization logic 504 and 506 based upon the states of EN1 and EN2 in order to reduce the power usage of selection circuit 500. Therefore, when a clock is enabled for output, power control logic 510 prevents the clocking of synchronization logic 504 and 506, while, during switching between clocks or upon reset, power control logic 510 allows clocking of synchronization logic 504 and 506. As such, power control logic 510 prevents the clocking of synchronization logic 504 and 506 when CLOCK1 is output. When CLOCK2 is to be selected for output on CLOCK OUT, Select is switched low. Switching Select low results in SEL2 going low, while SEL1 and EN1 go high. Because the Select signal is synchronized to the clock currently selected, the output of CLOCK1 on CLOCK OUT can be disabled when Select is changed. That is, because Select is synchronized to the currently selected clock, disabling CLOCK1 when Select changes disables CLOCK1 synchronously to itself. The enabling of CLOCK2 on CLOCK OUT, however, must still be synchronized to CLOCK2 in order to prevent a glitch output. Therefore, synchronization logic 506 maintains EN2 high. As both EN1 and EN2 are high (indicating switching of the clocks), power control circuit 510 allows the clocking of synchronization logic 504 and 506. Therefore, SEL2 is propagated through synchronization logic 506. Synchronization logic 506 is clocked by the positive edge of CLOCK2. This results in EN2 synchronously enabling the CLOCK2 output on CLOCK OUT by going low after a rising edge, but prior to a falling edge, of CLOCK2. This synchronous enabling of CLOCK2 on CLOCK OUT prevents a glitch output. In addition, EN2 going low causes power control logic 510 to prevent the clocking of synchronization logic 504 and 506, as sketched below. As previously described, operation of selection circuit 500 also depends upon the inputs nReset, StopCK and StopClockout. The signal nReset places selection circuit 500 in a reset state. During initialization of the logic, nReset is low and Select is high. This forces the output of NAND gate 514 to be low and the output of OR gate 520 to be high. Also, the output of OR gate 512 is low. The output of OR gate 520 sets the flip-flops of synchronization logic 506, while the output of OR gate 512 forces CLOCK1 to clock synchronization logic 504, which will be initialized after a few clock edges (i.e., SEL1 will propagate through the flip-flops).
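The power-saving behavior of power control logic 510 can be summarized in a small sketch (an illustrative boolean model, not the gate netlist; in this embodiment the enables are active low on the output path, so both enables being high means a switch is under way):

def sync_flops_clocked(en1, en2, reset_active=False):
    # Hypothetical restatement of power control logic 510: the synchronizer
    # flip-flops see clock edges only while a switch is in progress (both
    # enables high, per NAND gate 532) or while reset forces re-initialization.
    switching = en1 and en2
    return switching or reset_active

assert sync_flops_clocked(en1=False, en2=True) is False   # steady state: CLOCK1
                                                          # enabled, no clocking
assert sync_flops_clocked(en1=True, en2=True) is True     # switch in progress
assert sync_flops_clocked(en1=False, en2=False, reset_active=True) is True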
When StopClockout goes high, CLOCK OUT is masked high, effectively preventing the output of either CLOCK1 or CLOCK2 on CLOCK OUT, as shown in figure 5d. StopClockout is typically used by the processor clocked by selection circuit 500 to enter a power down mode in which it is not clocked. When a processor enters a power down mode, however, there must be a manner to wake up the processor. Therefore, secondary circuitry which still receives a clock signal and which can wake up the processor is used. So that the secondary circuitry can still receive a clock signal, preferably, a clock signal, IOCK, is still available from selection circuit 500 while StopClockout is high. As such, when StopClockout is high, CLOCK OUT remains high, but IOCK continues to function as a clock signal. StopCK completely stops the output of selection circuit 500, including IOCK. As can be seen with reference to figure 5e, when CLOCK2 is output on CLOCK OUT, EN1 is high and EN2 is low. When StopCK goes high, EN2 goes high. This results in CLOCK OUT and IOCK remaining high. Likewise, as can be seen with reference to figure 5f, when CLOCK1 is output on CLOCK OUT, EN2 is high and EN1 is low. When StopCK goes high, EN1 goes high. This also results in CLOCK OUT and IOCK remaining high. Although the present invention has been shown and described with respect to several preferred embodiments thereof, various changes, omissions and additions to the form and detail thereof may be made therein without departing from the spirit and scope of the invention. What is claimed is:
A computing component is provided with physical layer logic to receive data on a physical link including a plurality of lanes, where the data is received from a particular component on one or more data lanes of the physical link. The physical layer is further to receive a stream signal on a particular one of the plurality of lanes of the physical link, where the stream signal is to identify a type of the data on the one or more data lanes, the type is one of a plurality of different types supported by the particular component, and the stream signal is encoded through voltage amplitude modulation on the particular lane.
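As a purely illustrative toy (the voltage thresholds, protocol names, and three-bin mapping below are invented for exposition and are not part of the specification), a receiver-side decode of an amplitude-modulated control lane might bin a sampled voltage as follows:

THRESHOLDS = [(0.9, "PROTOCOL_A"), (0.6, "PROTOCOL_B"), (0.3, "IDLE")]

def decode_stream(sample_volts):
    # Map a sampled lane voltage to the highest amplitude bin it reaches.
    for level, meaning in THRESHOLDS:
        if sample_volts >= level:
            return meaning
    return "NO_SIGNAL"

assert decode_stream(1.0) == "PROTOCOL_A"   # full-swing pulse: first type
assert decode_stream(0.7) == "PROTOCOL_B"   # reduced-swing pulse: second type
assert decode_stream(0.1) == "NO_SIGNAL"

The point of the sketch is only that distinct voltage amplitudes on a single shared lane can convey the stream information alongside whatever base signal (e.g., a clock) the lane already carries.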
CLAIMS:

1. An apparatus comprising: physical layer logic to: receive data on a physical link comprising a plurality of lanes, wherein the data is received from a particular component on one or more data lanes of the physical link; and receive a stream signal on a particular one of the plurality of lanes of the physical link, wherein the stream signal is to identify a type of the data on the one or more data lanes, the type is one of a plurality of different types supported by the particular component, and the stream signal is encoded through voltage amplitude modulation on the particular lane.

2. The apparatus of Claim 1, wherein the physical layer logic is further to: send a link state machine management signal over a sideband link in association with a link state transition; and encode a sideband notification signal on the particular lane through voltage amplitude modulation, wherein the sideband notification signal indicates the sending of the link state machine management signal over the sideband link.

3. The apparatus of Claim 1, wherein the type comprises a protocol associated with the data, and the protocol is a particular one of a plurality of protocols supported by the particular component.

4. The apparatus of Claim 3, wherein the physical layer logic is further to decode the stream signal to identify which of the plurality of different protocols applies to the data.

5. The apparatus of Claim 4, wherein the physical layer logic is further to pass the data to upper layer protocol logic corresponding to the particular one of the plurality of protocols identified in the stream signal, and the apparatus further comprises upper layer logic of each of the plurality of protocols.

6. The apparatus of Claim 1, wherein the physical layer logic is further to receive a valid signal on the particular lane of the physical link, wherein the valid signal is to identify that valid data is to follow assertion of the valid signal on the one or more data lanes.

7. The apparatus of Claim 6, wherein the physical layer logic is further to define a series of data windows in which data is to be sent on the data lanes, and the valid signal is sent in a particular one of the series of data windows.

8. The apparatus of Claim 7, wherein the valid signal is to be asserted in a window immediately preceding the window in which the data is to be sent.

9. The apparatus of Claim 8, wherein data is to be ignored on data lanes in a window immediately following a preceding window in which the valid signal is not asserted.

10. The apparatus of Claim 7, wherein the valid signal is to be encoded on the particular lane through voltage amplitude modulation.

11. The apparatus of Claim 6, wherein the stream signal is sent in a same one of the series of data windows as the data.

12. The apparatus of Claim 1, wherein the particular lane comprises a clock lane, and the stream signal is encoded on top of a clock signal sent on the clock lane.

13. The apparatus of Claim 12, wherein the clock comprises a single-ended clock.

14. The apparatus of Claim 12, wherein the clock comprises a differential clock.

15. The apparatus of Claim 14, wherein the voltage amplitude modulation comprises modulation of a common mode voltage of the differential clock.

16. The apparatus of Claim 1, wherein the particular lane comprises a control lane and the stream signal and at least one other control signal are to be sent on the control lane.

17. The apparatus of Claim 16, wherein the control lane comprises a lane for data bus inversion (DBI).
18. The apparatus of Claim 1, wherein the voltage amplitude modulation comprises pulse amplitude modulation.

19. An apparatus comprising: first transaction layer logic to generate first data according to a first one of a plurality of communication protocols; second transaction layer logic to generate second data according to a different, second one of the plurality of communication protocols; and physical layer logic to: send the first data on one or more data lanes of a physical link comprising a plurality of lanes; encode a first stream signal on a particular one of the plurality of lanes to indicate that the first data is of the first communication protocol, wherein the first stream signal is encoded through voltage amplitude modulation on the particular lane; send the second data on one or more data lanes of the physical link; and encode a second stream signal on the particular lane to indicate that the second data is of the second communication protocol, wherein the second stream signal is encoded through voltage amplitude modulation on the particular lane.

20. The apparatus of Claim 19, wherein the first data and the first stream signal are sent during a first signaling window and the second data and the second stream signal are sent during a second signaling window.

21. The apparatus of Claim 20, wherein other information is also sent on the particular lane with the first stream signal during the first signaling window.

22. The apparatus of Claim 21, wherein the other information comprises a clock signal and the particular lane comprises a clock lane.

23. The apparatus of Claim 21, wherein the other information comprises a valid signal to indicate that valid data is to be sent in the second signaling window, and the valid signal corresponds to the second data.

24. A system comprising: a first component; and a second component, connected to the first component by a multi-protocol link, wherein each of the first and second components supports data communications in any one of a plurality of communication protocols, and the second component comprises physical layer logic to: send data on the multi-protocol link to the first component, wherein the multi-protocol link comprises a plurality of lanes and the data is sent on one or more data lanes of the multi-protocol link; and send a stream signal on a particular one of the plurality of lanes of the physical link, wherein the stream signal is to identify that a particular one of the plurality of communication protocols applies to the data and the stream signal is encoded through voltage amplitude modulation on the particular lane.

25. The system of Claim 24, wherein the particular lane comprises a clock lane and the stream signal is encoded on top of a clock signal sent on the clock lane.

26. The system of Claim 24, wherein the stream signal is sent concurrent with the data.

27. The system of Claim 24, wherein the data comprises first data, the stream signal comprises a first stream signal, and the physical layer logic is further to: send second data on the multi-protocol link to the first component on one or more of the data lanes of the multi-protocol link; and send a second stream signal on a particular one of the plurality of lanes of the physical link, wherein the second stream signal is to identify that a second one of the plurality of communication protocols applies to the second data and the second stream signal is encoded through voltage amplitude modulation on the particular lane.
VOLTAGE MODULATED CONTROL LANE

CROSS-REFERENCE TO RELATED APPLICATION(S)

[0001] This application claims the benefit of priority to U.S. Nonprovisional Patent Application No. 15/283,028 filed 30 September 2016 entitled "VOLTAGE MODULATED CONTROL LANE", which is incorporated herein by reference in its entirety.

FIELD

[0002] This disclosure pertains to computing systems, and in particular (but not exclusively) to point-to-point interconnects.

BACKGROUND

[0003] Advances in semiconductor processing and logic design have permitted an increase in the amount of logic that may be present on integrated circuit devices. As a corollary, computer system configurations have evolved from a single or multiple integrated circuits in a system to multiple cores, multiple hardware threads, and multiple logical processors present on individual integrated circuits, as well as other interfaces integrated within such processors. A processor or integrated circuit typically comprises a single physical processor die, where the processor die may include any number of cores, hardware threads, logical processors, interfaces, memory, controller hubs, etc.

[0004] As a result of the greater ability to fit more processing power in smaller packages, smaller computing devices have increased in popularity. Smartphones, tablets, ultrathin notebooks, and other user equipment have grown exponentially. However, these smaller devices are reliant on servers both for data storage and for complex processing that exceeds the form factor. Consequently, the demand in the high-performance computing market (i.e. server space) has also increased. For instance, in modern servers, there is typically not only a single processor with multiple cores, but also multiple physical processors (also referred to as multiple sockets) to increase the computing power. But as the processing power grows along with the number of devices in a computing system, the communication between sockets and other devices becomes more critical.

[0005] In fact, interconnects have grown from more traditional multi-drop buses that primarily handled electrical communications to full blown interconnect architectures that facilitate fast communication. Unfortunately, as the demand for future processors to consume data at even higher rates grows, corresponding demand is placed on the capabilities of existing interconnect architectures.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 illustrates an embodiment of a computing system including an interconnect architecture.

[0007] FIG. 2 illustrates an embodiment of an interconnect architecture including a layered stack.

[0008] FIG. 3 illustrates an embodiment of a request or packet to be generated or received within an interconnect architecture.

[0009] FIG. 4 illustrates an embodiment of a transmitter and receiver pair for an interconnect architecture.

[0010] FIG. 5A illustrates an embodiment of a multichip package.

[0011] FIG. 5B illustrates use of a multiprotocol link (MPL).

[0012] FIG. 6 is a simplified block diagram of an example MPL.

[0013] FIG. 7 is a representation of example signaling on an example MPL.

[0014] FIG. 8 is a simplified block diagram illustrating voltage amplitude modulation.

[0015] FIG. 9 is a representation of example signaling on an example MPL utilizing voltage amplitude modulation.

[0016] FIGS. 10A-10B are simplified block diagrams of systems utilizing an example MPL supporting voltage amplitude modulation.

[0017] FIGS. 11A-11C are simplified circuit diagrams for processing example clock signals encoded with control signals.
[0018] FIG. 12 is a representation of example signaling on an example MPL utilizing voltage amplitude modulation.

[0019] FIG. 13 is a simplified block diagram of an MPL.

[0020] FIG. 14 illustrates an embodiment of a block diagram for a computing system including a multicore processor.

[0021] FIG. 15 illustrates another embodiment of a block diagram for a computing system including a processor.

[0022] FIG. 16 illustrates an embodiment of a block diagram for a computing system including multiple processors.

[0023] Like reference numbers and designations in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0024] In the following description, numerous specific details are set forth, such as examples of specific types of processors and system configurations, specific hardware structures, specific architectural and microarchitectural details, specific register configurations, specific instruction types, specific system components, specific measurements/heights, specific processor pipeline stages and operation, etc., in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that these specific details need not be employed to practice the present invention. In other instances, well known components or methods, such as specific and alternative processor architectures, specific logic circuits/code for described algorithms, specific firmware code, specific interconnect operation, specific logic configurations, specific manufacturing techniques and materials, specific compiler implementations, specific expression of algorithms in code, specific power down and gating techniques/logic, and other specific operational details of computer systems have not been described in detail in order to avoid unnecessarily obscuring the present invention.

[0025] Although the following embodiments may be described with reference to energy conservation and energy efficiency in specific integrated circuits, such as in computing platforms or microprocessors, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments described herein may be applied to other types of circuits or semiconductor devices that may also benefit from better energy efficiency and energy conservation. For example, the disclosed embodiments are not limited to desktop computer systems or Ultrabooks™, and may also be used in other devices, such as handheld devices, tablets, other thin notebooks, systems on a chip (SOC) devices, and embedded applications. Some examples of handheld devices include cellular phones, Internet protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications typically include a microcontroller, a digital signal processor (DSP), a system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform the functions and operations taught below. Moreover, the apparatuses, methods, and systems described herein are not limited to physical computing devices, but may also relate to software optimizations for energy conservation and efficiency.
As will become readily apparent in the description below, the embodiments of methods, apparatuses, and systems described herein (whether in reference to hardware, firmware, software, or a combination thereof) are vital to a 'green technology' future balanced with performance considerations.

[0026] As computing systems are advancing, the components therein are becoming more complex. As a result, the interconnect architecture to couple and communicate between the components is also increasing in complexity to ensure bandwidth requirements are met for optimal component operation. Furthermore, different market segments demand different aspects of interconnect architectures to suit the market's needs. For example, servers require higher performance, while the mobile ecosystem is sometimes able to sacrifice overall performance for power savings. Yet it is a singular purpose of most fabrics to provide the highest possible performance with maximum power saving. Below, a number of interconnects are discussed, which would potentially benefit from aspects of the invention described herein.

[0027] One interconnect fabric architecture includes the Peripheral Component Interconnect (PCI) Express (PCIe) architecture. A primary goal of PCIe is to enable components and devices from different vendors to inter-operate in an open architecture, spanning multiple market segments: Clients (Desktops and Mobile), Servers (Standard and Enterprise), and Embedded and Communication devices. PCI Express is a high performance, general purpose I/O interconnect defined for a wide variety of future computing and communication platforms. Some PCI attributes, such as its usage model, load-store architecture, and software interfaces, have been maintained through its revisions, whereas previous parallel bus implementations have been replaced by a highly scalable, fully serial interface. The more recent versions of PCI Express take advantage of advances in point-to-point interconnects, switch-based technology, and packetized protocol to deliver new levels of performance and features. Power Management, Quality of Service (QoS), Hot-Plug/Hot-Swap support, Data Integrity, and Error Handling are among some of the advanced features supported by PCI Express.

[0028] Referring to FIG. 1, an embodiment of a fabric composed of point-to-point links that interconnect a set of components is illustrated. System 100 includes processor 105 and system memory 110 coupled to controller hub 115. Processor 105 includes any processing element, such as a microprocessor, a host processor, an embedded processor, a co-processor, or other processor. Processor 105 is coupled to controller hub 115 through front-side bus (FSB) 106. In one embodiment, FSB 106 is a serial point-to-point interconnect as described below. In another embodiment, link 106 includes a serial, differential interconnect architecture that is compliant with a different interconnect standard.

[0029] System memory 110 includes any memory device, such as random access memory (RAM), non-volatile (NV) memory, or other memory accessible by devices in system 100. System memory 110 is coupled to controller hub 115 through memory interface 116. Examples of a memory interface include a double-data rate (DDR) memory interface, a dual-channel DDR memory interface, and a dynamic RAM (DRAM) memory interface.

[0030] In one embodiment, controller hub 115 is a root hub, root complex, or root controller in a Peripheral Component Interconnect Express (PCIe or PCIE) interconnection hierarchy.
Examples of controller hub 115 include a chipset, a memory controller hub (MCH), a northbridge, an interconnect controller hub (ICH), a southbridge, and a root controller/hub. Often the term chipset refers to two physically separate controller hubs, i.e. a memory controller hub (MCH) coupled to an interconnect controller hub (ICH). Note that current systems often include the MCH integrated with processor 105, while controller 115 is to communicate with I/O devices in a similar manner as described below. In some embodiments, peer-to-peer routing is optionally supported through root complex 115.

[0031] Here, controller hub 115 is coupled to switch/bridge 120 through serial link 119. Input/output modules 117 and 121, which may also be referred to as interfaces/ports 117 and 121, include/implement a layered protocol stack to provide communication between controller hub 115 and switch 120. In one embodiment, multiple devices are capable of being coupled to switch 120.

[0032] Switch/bridge 120 routes packets/messages from device 125 upstream, i.e. up a hierarchy towards a root complex, to controller hub 115, and downstream, i.e. down a hierarchy away from a root controller, from processor 105 or system memory 110 to device 125. Switch 120, in one embodiment, is referred to as a logical assembly of multiple virtual PCI-to-PCI bridge devices. Device 125 includes any internal or external device or component to be coupled to an electronic system, such as an I/O device, a Network Interface Controller (NIC), an add-in card, an audio processor, a network processor, a hard-drive, a storage device, a CD/DVD ROM, a monitor, a printer, a mouse, a keyboard, a router, a portable storage device, a Firewire device, a Universal Serial Bus (USB) device, a scanner, and other input/output devices. Often in the PCIe vernacular, such a device is referred to as an endpoint. Although not specifically shown, device 125 may include a PCIe to PCI/PCI-X bridge to support legacy or other version PCI devices. Endpoint devices in PCIe are often classified as legacy, PCIe, or root complex integrated endpoints.

[0033] Graphics accelerator 130 is also coupled to controller hub 115 through serial link 132. In one embodiment, graphics accelerator 130 is coupled to an MCH, which is coupled to an ICH. Switch 120, and accordingly I/O device 125, is then coupled to the ICH. I/O modules 131 and 118 are also to implement a layered protocol stack to communicate between graphics accelerator 130 and controller hub 115. Similar to the MCH discussion above, a graphics controller or the graphics accelerator 130 itself may be integrated in processor 105.

[0034] Turning to FIG. 2, an embodiment of a layered protocol stack is illustrated. Layered protocol stack 200 includes any form of a layered communication stack, such as a Quick Path Interconnect (QPI) stack, a PCIe stack, a next generation high performance computing interconnect stack, or other layered stack. Although the discussion immediately below in reference to FIGS. 1-4 is in relation to a PCIe stack, the same concepts may be applied to other interconnect stacks. In one embodiment, protocol stack 200 is a PCIe protocol stack including transaction layer 205, link layer 210, and physical layer 220. An interface, such as interfaces 117, 118, 121, 122, 126, and 131 in FIG. 1, may be represented as communication protocol stack 200.
Representation as a communication protocol stack may also be referred to as a module or interface implementing/including a protocol stack.

[0035] PCI Express uses packets to communicate information between components. Packets are formed in the Transaction Layer 205 and Data Link Layer 210 to carry the information from the transmitting component to the receiving component. As the transmitted packets flow through the other layers, they are extended with additional information necessary to handle packets at those layers. At the receiving side the reverse process occurs and packets get transformed from their Physical Layer 220 representation to the Data Link Layer 210 representation and finally (for Transaction Layer Packets) to the form that can be processed by the Transaction Layer 205 of the receiving device.

[0036] Transaction Layer

[0037] In one embodiment, transaction layer 205 is to provide an interface between a device's processing core and the interconnect architecture, such as data link layer 210 and physical layer 220. In this regard, a primary responsibility of the transaction layer 205 is the assembly and disassembly of packets (i.e., transaction layer packets, or TLPs). The transaction layer 205 typically manages credit-based flow control for TLPs. PCIe implements split transactions, i.e. transactions with request and response separated by time, allowing a link to carry other traffic while the target device gathers data for the response.

[0038] In addition, PCIe utilizes credit-based flow control. In this scheme, a device advertises an initial amount of credit for each of the receive buffers in Transaction Layer 205. An external device at the opposite end of the link, such as controller hub 115 in FIG. 1, counts the number of credits consumed by each TLP. A transaction may be transmitted if the transaction does not exceed a credit limit. Upon receiving a response, an amount of credit is restored. An advantage of a credit scheme is that the latency of credit return does not affect performance, provided that the credit limit is not encountered.

[0039] In one embodiment, four transaction address spaces include a configuration address space, a memory address space, an input/output address space, and a message address space. Memory space transactions include one or more of read requests and write requests to transfer data to/from a memory-mapped location. In one embodiment, memory space transactions are capable of using two different address formats, e.g., a short address format, such as a 32-bit address, or a long address format, such as a 64-bit address. Configuration space transactions are used to access configuration space of the PCIe devices. Transactions to the configuration space include read requests and write requests. Message space transactions (or, simply messages) are defined to support in-band communication between PCIe agents.

[0040] Therefore, in one embodiment, transaction layer 205 assembles packet header/payload 206. The format for current packet headers/payloads may be found in the PCIe specification at the PCIe specification website.
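To make the credit accounting of paragraph [0038] concrete, the following sketch models a transmitter gating TLPs on advertised credits and restoring credit as completions return. It is a minimal, illustrative Python model only; the class, method names, and credit units are hypothetical and not drawn from the PCIe specification:

    class CreditTracker:
        """Minimal model of credit-based flow control for one receive buffer."""

        def __init__(self, advertised_credits):
            self.limit = advertised_credits   # credits advertised by the receiver
            self.consumed = 0                 # credits consumed by transmitted TLPs

        def can_send(self, tlp_credits):
            # A transaction may be transmitted only if it does not exceed the credit limit.
            return self.consumed + tlp_credits <= self.limit

        def send(self, tlp_credits):
            if not self.can_send(tlp_credits):
                return False                  # hold the TLP until credit is restored
            self.consumed += tlp_credits
            return True

        def restore(self, tlp_credits):
            # Upon receiving a response/credit return, an amount of credit is restored.
            self.consumed -= tlp_credits

    tracker = CreditTracker(advertised_credits=8)
    assert tracker.send(3) and tracker.send(5)
    assert not tracker.send(1)   # credit limit reached; transmission deferred
    tracker.restore(3)
    assert tracker.send(1)       # credit returned, transmission proceeds

As the sketch suggests, the latency of the credit return does not stall the transmitter so long as the credit limit is never encountered.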
[0041] Referring briefly to FIG. 3, an embodiment of a PCIe transaction descriptor is illustrated. In one embodiment, transaction descriptor 300 is a mechanism for carrying transaction information. In this regard, transaction descriptor 300 supports identification of transactions in a system. Other potential uses include tracking modifications of default transaction ordering and association of transactions with channels.

[0042] Transaction descriptor 300 includes global identifier field 302, attributes field 304, and channel identifier field 306. In the illustrated example, global identifier field 302 is depicted comprising local transaction identifier field 308 and source identifier field 310. In one embodiment, global transaction identifier 302 is unique for all outstanding requests.

[0043] According to one implementation, local transaction identifier field 308 is a field generated by a requesting agent, and it is unique for all outstanding requests that require a completion for that requesting agent. Furthermore, in this example, source identifier 310 uniquely identifies the requestor agent within a PCIe hierarchy. Accordingly, together with source ID 310, local transaction identifier field 308 provides global identification of a transaction within a hierarchy domain.

[0044] Attributes field 304 specifies characteristics and relationships of the transaction. In this regard, attributes field 304 is potentially used to provide additional information that allows modification of the default handling of transactions. In one embodiment, attributes field 304 includes priority field 312, reserved field 314, ordering field 316, and no-snoop field 318. Here, priority sub-field 312 may be modified by an initiator to assign a priority to the transaction. Reserved attribute field 314 is left reserved for future, or vendor-defined, usage. Possible usage models using priority or security attributes may be implemented using the reserved attribute field.

[0045] In this example, ordering attribute field 316 is used to supply optional information conveying the type of ordering that may modify default ordering rules. According to one example implementation, an ordering attribute of "0" denotes default ordering rules are to apply, whereas an ordering attribute of "1" denotes relaxed ordering, wherein writes can pass writes in the same direction, and read completions can pass writes in the same direction. Snoop attribute field 318 is utilized to determine if transactions are snooped. As shown, channel ID field 306 identifies a channel that a transaction is associated with.

[0046] Link Layer

[0047] Link layer 210, also referred to as data link layer 210, acts as an intermediate stage between transaction layer 205 and the physical layer 220. In one embodiment, a responsibility of the data link layer 210 is providing a reliable mechanism for exchanging Transaction Layer Packets (TLPs) between two components on a link. One side of the Data Link Layer 210 accepts TLPs assembled by the Transaction Layer 205, applies packet sequence identifier 211, i.e. an identification number or packet number, calculates and applies an error detection code, i.e. CRC 212, and submits the modified TLPs to the Physical Layer 220 for transmission across a physical link to an external device.
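As a rough illustration of the link layer behavior just described, the sketch below prepends a packet sequence identifier and appends an error detection code to a TLP before handing it to the physical layer. It is a schematic model only: zlib.crc32 stands in for the actual PCIe LCRC, and the 12-bit sequence field packed into two bytes is an assumed layout for illustration, not the specification's framing:

    import struct
    import zlib

    def link_layer_frame(tlp_payload: bytes, sequence_number: int) -> bytes:
        """Apply a packet sequence identifier and an error detection code to a TLP."""
        header = struct.pack(">H", sequence_number & 0xFFF)  # 12-bit sequence number
        body = header + tlp_payload
        crc = struct.pack(">I", zlib.crc32(body))            # stand-in for the LCRC
        return body + crc

    frame = link_layer_frame(b"\x01\x02\x03\x04", sequence_number=7)
    # The receiving side recomputes the CRC over the sequence number and payload
    # and compares it against the trailing four bytes to detect corruption.
    assert struct.unpack(">I", frame[-4:])[0] == zlib.crc32(frame[:-4])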
[0048] Physical Layer

[0049] In one embodiment, physical layer 220 includes logical sub-block 221 and electrical sub-block 222 to physically transmit a packet to an external device. Here, logical sub-block 221 is responsible for the "digital" functions of Physical Layer 220. In this regard, the logical sub-block includes a transmit section to prepare outgoing information for transmission by physical sub-block 222, and a receiver section to identify and prepare received information before passing it to the Link Layer 210.

[0050] Physical block 222 includes a transmitter and a receiver. The transmitter is supplied by logical sub-block 221 with symbols, which the transmitter serializes and transmits to an external device. The receiver is supplied with serialized symbols from an external device and transforms the received signals into a bit-stream. The bit-stream is de-serialized and supplied to logical sub-block 221. In one embodiment, an 8b/10b transmission code is employed, where ten-bit symbols are transmitted/received. Here, special symbols are used to frame a packet with frames 223. In addition, in one example, the receiver also provides a symbol clock recovered from the incoming serial stream.

[0051] As stated above, although transaction layer 205, link layer 210, and physical layer 220 are discussed in reference to a specific embodiment of a PCIe protocol stack, a layered protocol stack is not so limited. In fact, any layered protocol may be included/implemented. As an example, a port/interface that is represented as a layered protocol includes: a first layer to assemble packets, i.e. a transaction layer; a second layer to sequence packets, i.e. a link layer; and a third layer to transmit the packets, i.e. a physical layer. As a specific example, a common standard interface (CSI) layered protocol is utilized.

[0052] Referring next to FIG. 4, an embodiment of a PCIe serial point-to-point fabric is illustrated. Although an embodiment of a PCIe serial point-to-point link is illustrated, a serial point-to-point link is not so limited, as it includes any transmission path for transmitting serial data. In the embodiment shown, a basic PCIe link includes two, low-voltage, differentially driven signal pairs: a transmit pair 406/411 and a receive pair 412/407. Accordingly, device 405 includes transmission logic 406 to transmit data to device 410 and receiving logic 407 to receive data from device 410. In other words, two transmitting paths, i.e. paths 416 and 417, and two receiving paths, i.e. paths 418 and 419, are included in a PCIe link.

[0053] A transmission path refers to any path for transmitting data, such as a transmission line, a copper line, an optical line, a wireless communication channel, an infrared communication link, or other communication path. A connection between two devices, such as device 405 and device 410, is referred to as a link, such as link 415. A link may support one lane - each lane representing a set of differential signal pairs (one pair for transmission, one pair for reception). To scale bandwidth, a link may aggregate multiple lanes denoted by xN, where N is any supported link width, such as 1, 2, 4, 8, 12, 16, 32, 64, or wider.

[0054] A differential pair refers to two transmission paths, such as lines 416 and 417, to transmit differential signals. As an example, when line 416 toggles from a low voltage level to a high voltage level, i.e. a rising edge, line 417 drives from a high logic level to a low logic level, i.e. a falling edge. Differential signals potentially demonstrate better electrical characteristics, such as better signal integrity, i.e. cross-coupling, voltage overshoot/undershoot, ringing, etc. This allows for a better timing window, which enables faster transmission frequencies.
[0055] FIGS. 5A-5B are simplified block diagrams 500a-b illustrating example systems implementing a link to enable signaling windows in which data of different protocols may be sent. In the example of FIG. 5A, an example multi-chip package 505 is represented that includes two or more chips, or dies, (e.g., 510, 515) communicatively connected using an example multi-protocol link (MPL) 520a. While FIG. 5A illustrates an example of two (or more) dies that are interconnected using an example MPL 520a, it should be appreciated that the principles and features described herein regarding implementations of an MPL can be applied to any interconnect or link connecting a die (e.g., 510) and other components, including connecting two or more dies (e.g., 510, 515), connecting a die (or chip) to another component off-die, connecting a die to another device or die off-package (e.g., 505), connecting a die to a BGA package, and implementation of a Patch on Interposer (POINT), among potentially other examples.

[0056] Generally, a multichip package (e.g., 505) can be an electronic package where multiple integrated circuits (ICs), semiconductor dies, or other discrete components (e.g., 510, 515) are packaged onto a unifying substrate (e.g., silicon or other semiconductor substrate), facilitating the combined components' use as a single component (e.g., as though a larger IC). In some instances, the larger components (e.g., dies 510, 515) can themselves be IC systems, such as systems on chip (SoC), multiprocessor chips, or other components that include multiple components (e.g., 525-530 and 540-545) on the device, for instance, on a single die (e.g., 510, 515). Multichip packages 505 can provide flexibility for building complex and varied systems from potentially multiple discrete components and systems. For instance, each of dies 510, 515 may be manufactured or otherwise provided by two different entities, with the silicon substrate of the package 505 provided by yet a third entity, among many other examples. Further, dies and other components within a multichip package 505 can themselves include interconnect or other communication fabrics (e.g., 535, 550) providing the infrastructure for communication between components (e.g., 525-530 and 540-545) within the device (e.g., 510, 515 respectively). The various components and interconnects (e.g., 535, 550) may potentially support or use multiple different protocols. Further, communication between dies (e.g., 510, 515) can potentially include transactions between the various components on the dies over multiple different protocols. Designing mechanisms to provide communication between chips (or dies) on a multichip package can be challenging, with traditional solutions employing highly specialized, expensive, and package-specific solutions based on the specific combinations of components (and desired transactions) sought to be interconnected.

[0057] The examples, systems, algorithms, apparatus, logic, and features described within this Specification can address at least some of the issues identified above, including potentially many others not explicitly mentioned herein. For instance, in some implementations, a high bandwidth, low power, low latency interface can be provided to connect a host device (e.g., a CPU) or other device to a companion chip that sits in the same package as the host. Such a multi-protocol link (MPL) can support multiple package options, multiple I/O protocols, as well as Reliability, Availability, and Serviceability (RAS) features.
Further, the physical layer (PHY) can include an electrical layer and a logic layer and can support longer channel lengths, including channel lengths up to, and in some cases exceeding, approximately 45mm. In some implementations, an example MPL can operate at high data rates, including data rates exceeding 8-10Gb/s.

[0058] While the example of FIG. 5A shows the employment of an MPL within a multi-chip package, which may include multiple feature-rich components (with large shorelines to support multiple pins, which may be used to support multi-lane MPL links with potentially multiple data lanes and multiple control lanes), other implementations of an MPL may be employed in simpler devices with smaller or user-centered form factors, such as smartphones, internet of things (IoT) devices, etc. For instance, FIG. 5B illustrates an example implementation of a simplified system 560, which may include one or more processors (e.g., 565) and other components (e.g., a sensor component 570), which may be interconnected using a simplified version of an MPL link (e.g., 520b). Components (e.g., 565, 570) of the device 560 may have less functionality and smaller footprints than the on-die systems 510, 515 of the multi-chip package 505 example of FIG. 5A. Accordingly, an example device 560 may support fewer pins, which may at the same time be suitable to simpler components (e.g., 565, 570), which may operate at relatively lower speeds, support fewer protocols, send smaller datagrams (at potentially lower speeds), and support fewer features than more complex, high-speed components (e.g., 525-530, 540-545) of an example multi-chip package 505. A simplified MPL link 520b may be provided, in some implementations, to offer the same multi-protocol functionality, but with lower pin count, such as discussed in the examples below.

[0059] In one example implementation of an MPL, a PHY electrical layer can improve upon traditional multi-channel interconnect solutions (e.g., multi-channel DRAM I/O), extending the data rate and channel configuration, for instance, by a number of features including, as examples, regulated mid-rail termination, low power active crosstalk cancellation, circuit redundancy, per-bit duty cycle correction and deskew, line coding, and transmitter equalization, among potentially other examples.

[0060] In one example implementation of an MPL, a PHY logical layer can be implemented that can further assist (e.g., the electrical layer features) in extending the data rate and channel configuration while also enabling the interconnect to route multiple protocols across the electrical layer. Such implementations can provide and define a modular common physical layer that is protocol agnostic and architected to work with potentially any existing or future interconnect protocol.

[0061] Turning to FIG. 6, a simplified block diagram 600 is shown representing at least a portion of a system including an example implementation of a multi-protocol link (MPL). In one example, an MPL configured to support a more complex connection (e.g., between multi-chip packages) may be implemented using physical electrical connections (e.g., wires implemented as lanes) connecting a first device 605 (e.g., a first die including one or more subcomponents) with a second device 610 (e.g., a second die including one or more other subcomponents).
In the particular example shown in the high-level representation of diagram 600, all signals (in channels 615, 620) can be unidirectional and lanes can be provided for the data signals to have both an upstream and downstream data transfer. While the block diagram 600 of FIG. 6 refers to the first component 605 as the upstream component and the second component 610 as the downstream component, and physical lanes of the MPL used in sending data as a downstream channel 615 and lanes used for receiving data (from component 610) as an upstream channel 620, it should be appreciated that the MPL between devices 605, 610 can be used by each device to both send and receive data between the devices.

[0062] In one example implementation, an MPL can provide a physical layer (PHY) including the electrical MPL PHY 625a,b (or, collectively, 625) and executable logic implementing MPL logical PHY 630a,b (or, collectively, 630). Electrical, or physical, PHY 625 can provide the physical connection over which data is communicated between devices 605, 610. Signal conditioning components and logic can be implemented in connection with the physical PHY 625 in order to establish high data rate and channel configuration capabilities of the link, which in some applications can involve tightly clustered physical connections at lengths of approximately 45mm or more. The logical PHY 630 can include logic for facilitating clocking, link state management (e.g., for link layers 635a, 635b), and protocol multiplexing between potentially multiple, different protocols used for communications over the MPL.

[0063] In one example implementation, physical PHY 625 can include, for each channel (e.g., 615, 620), a set of data lanes, over which in-band data can be sent. In this particular example, 50 data lanes are provided in each of the upstream and downstream channels 615, 620, although any other number of lanes can be used as permitted by the layout and power constraints, desired applications, device constraints, etc. Each channel can further include one or more dedicated lanes for a strobe, or clock, signal for the channel, one or more dedicated lanes for a valid signal for the channel, one or more dedicated lanes for a stream signal, and one or more dedicated lanes for a link state machine management or sideband signal. The physical PHY can further include a sideband link 640, which, in some examples, can be a bi-directional lower frequency control signal link used to coordinate state transitions and other attributes of the MPL connecting devices 605, 610, among other examples.

[0064] As noted above, multiple protocols can be supported using MPL. Indeed, in some implementations, multiple, independent transaction layers 650a, 650b can be provided at each device 605, 610. For instance, each device 605, 610 may support and utilize two or more protocols, such as PCI, PCIe, QPI, Intel In-Die Interconnect (IDI), and Ultra Path Interconnect (UPI), among others. Other protocols can also be supported, including Ethernet protocol, Infiniband protocols, and PCIe fabric-based protocols. For IoT and other interconnects involving special purpose, low frequency, or other more simplified systems, the multiple protocols may include one or more protocols such as Universal Asynchronous Receiver/Transmitter (UART), Serial Digital Interface (SDI), and I2C, among other potential examples.
The combination of the logical PHY and physical PHY can also be used as a die-to-die interconnect to connect a SerDes PHY (PCIe, Ethernet, Infiniband, or other high speed SerDes) on one die to its upper layers that are implemented on the other die, among other examples.

[0065] Logical PHY 630 can support multiplexing between these multiple protocols on an MPL. For instance, the dedicated stream lane can be used to assert an encoded stream signal that identifies which protocol is to apply to data sent substantially concurrently on the data lanes of the channel. Further, logical PHY 630 can be used to negotiate the various types of link state transitions that the various protocols may support or request. In some instances, LSM_SB signals sent over the channel's dedicated LSM_SB lane can be used, together with sideband link 640, to communicate and negotiate link state transitions between the devices 605, 610. Further, link training, error detection, skew detection, de-skewing, and other functionality of traditional interconnects can be replaced or governed, in part, using logical PHY 630. For instance, valid signals sent over one or more dedicated valid signal lanes in each channel can be used to signal link activity, detect skew and link errors, and realize other features, among other examples. In the particular example of FIG. 6, in implementations of an MPL providing larger numbers of data lanes (e.g., 50 lanes in each direction), multiple valid lanes may be provided per channel. For instance, data lanes within a channel can be bundled or clustered (physically and/or logically) and a valid lane can be provided for each cluster. Further, multiple strobe lanes can be provided, in some cases, also to provide a dedicated strobe signal for each cluster in a plurality of data lane clusters in a channel, among other examples.

[0066] As noted above, logical PHY 630 can be used to negotiate and manage link control signals sent between devices connected by the MPL. In some implementations, logical PHY 630 can include link layer packet (LLP) generation logic 660 that can be used to send link layer control messages over the MPL (i.e., in band). Such messages can be sent over data lanes of the channel, with the stream lane identifying that the data is link layer-to-link layer messaging, such as link layer control data, among other examples. Link layer messages enabled using LLP module 660 can assist in the negotiation and performance of link layer state transitioning, power management, loopback, disable, re-centering, and scrambling, among other link layer features between the link layers 635a, 635b of devices 605, 610, respectively.
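The protocol multiplexing role of logical PHY 630 can be pictured as a simple dispatch on the decoded stream signal. The following sketch is purely illustrative: apart from the "FF" code borrowed from the FIG. 7 example discussed below, the stream-ID values and handler behavior are hypothetical and not part of any specified encoding:

    # Hypothetical stream-ID encodings; only 0xFF is drawn from the FIG. 7 example.
    def pcie_handler(data: bytes) -> str:
        return f"PCIe transaction layer consumed {len(data)} bytes"

    def idi_handler(data: bytes) -> str:
        return f"IDI logic consumed {len(data)} bytes"

    def llp_handler(data: bytes) -> str:
        return f"link layer packet (LLP) logic consumed {len(data)} bytes"

    STREAM_HANDLERS = {0xFF: pcie_handler, 0x0F: idi_handler, 0xF0: llp_handler}

    def dispatch_window(stream_id: int, data: bytes) -> str:
        """Route one data window to the upper layer logic named by the stream signal."""
        try:
            return STREAM_HANDLERS[stream_id](data)
        except KeyError:
            raise ValueError(f"unknown stream encoding: {stream_id:#04x}") from None

    assert dispatch_window(0xFF, b"example TLP").startswith("PCIe")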
[0067] Turning to FIG. 7, a diagram 700 is shown representing example signaling using a set of lanes (e.g., 615, 620) in a particular channel of a first example MPL implementation. The example of FIG. 7 may present an MPL implementation appropriate to a system interconnecting components (e.g., multi-component chips) with larger available pin counts. For instance, in the example of FIG. 7, two clusters of twenty-five (25) data lanes are provided for fifty (50) total data lanes in the channel. A portion of the lanes are shown, while others (e.g., DATA[4-46] and a second strobe signal lane (STRB)) are omitted (e.g., as redundant signals) for convenience in illustrating the particular example. When the physical layer is in an active state (e.g., not powered off or in a low power mode (e.g., an L1 state)), strobe lanes (STRB) can be provided with a synchronous clock signal. In some implementations, data can be sent on both the rising and falling edges of the strobe. Each edge (or half clock cycle) can demarcate a unit interval (UI). Accordingly, in this example, a bit (e.g., 705) can be sent on each lane, allowing for a byte to be sent every 8UI. A byte time period, or byte window, 710 can be defined as 8UI, or the time for sending a byte on a single one of the data lanes (e.g., DATA[0-49]).

[0068] In some implementations, a valid signal, sent on one or more dedicated valid signal lanes (e.g., VALID0, VALID1), can serve as a leading indicator, identifying to the receiving device, or sink, when asserted (high), that data is being sent from the sending device, or source, on data lanes (e.g., DATA[0-49]) during the following time period, such as a byte time period 710. Alternatively, when the valid signal is low, the source indicates to the sink that the source will not be sending data on the data lanes during the following time period. Accordingly, when the sink logical PHY detects that the valid signal is not asserted (e.g., on lanes VALID0 and VALID1), the sink can disregard any data that is detected on the data lanes (e.g., DATA[0-49]) during the following time period. For instance, crosstalk noise or other bits may appear on one or more of the data lanes when the source, in fact, is not sending any data. By virtue of a low, or non-asserted, valid signal during the previous time period (e.g., the previous byte time period), the sink can determine that the data lanes are to be disregarded during the following time period.

[0069] Data sent on each of the lanes of the MPL can be aligned to a strobe or other clock signal. A time period can be defined based on the strobe, such as a byte time period, and each of these periods can correspond to a defined window in which signals are to be sent on the data lanes (e.g., DATA[0-49]), the valid lanes (e.g., VALID0, VALID1), and the stream lane (e.g., STREAM). Accordingly, alignment of these signals can enable identification that a valid signal in a previous time period window applies to data in the following time period window, and that a stream signal applies to data in the same time period window. The stream signal can be an encoded signal (e.g., 1 byte of data for a byte time period window) that is encoded to identify the protocol that applies to data being sent during the same time period window.

[0070] To illustrate, in the particular example of FIG. 7, a byte time period window is defined. A valid signal is asserted at a time period window n (715), before any data is injected on data lanes DATA[0-49]. At the following time period window n+1 (720), data is sent on at least some of the data lanes. In this case, data is sent on all fifty data lanes during n+1 (720). Because a valid signal was asserted for the duration of the preceding time period window n (715), the sink device can validate the data received on data lanes DATA[0-49] during time period window n+1 (720). Additionally, the leading nature of the valid signal during time period window n (715) allows the receiving device to prepare for the incoming data. Continuing with the example of FIG. 7, the valid signal remains asserted (on VALID0 and VALID1) during the duration of time period window n+1 (720), causing the sink device to expect the data sent over data lanes DATA[0-49] during time period window n+2 (725).
If the valid signal were to remain asserted during time period window n+2 (725), the sink device could further expect to receive (and process) additional data sent during an immediately subsequent time period window n+3 (730). In the example of FIG. 7, however, the valid signal is de-asserted during the duration of time period window n+2 (725), indicating to the sink device that no data will be sent during time period window n+3 (730) and that any bits detected on data lanes DATA[0-49] should be disregarded during time period window n+3 (730).

[0071] As noted above, in some cases, an MPL link may be implemented with multiple valid lanes and strobe lanes per channel. In some systems, this can assist, among other advantages, with maintaining circuit simplicity and synchronization amid the clusters of relatively lengthy physical lanes connecting the two devices. In some implementations, a set of data lanes can be divided into clusters of data lanes. For instance, in the example of FIG. 7, data lanes DATA[0-49] can be divided into two twenty-five lane clusters and each cluster can have a dedicated valid and strobe lane. For instance, valid lane VALID0 can be associated with data lanes DATA[0-24] and valid lane VALID1 can be associated with data lanes DATA[25-49]. The signals on each "copy" of the valid and strobe lanes for each cluster can be identical.

[0072] As introduced above, in cases supporting one or more dedicated STREAM lanes, data on stream lane STREAM can be used to indicate to the receiving logical PHY what protocol is to apply to corresponding data being sent on data lanes DATA[0-49]. In the example of FIG. 7, a stream signal is sent on STREAM during the same time period window as data on data lanes DATA[0-49] to indicate the protocol of the data on the data lanes. In alternative implementations, the stream signal can be sent during a preceding time period window, such as with corresponding valid signals, among other potential modifications. However, continuing with the example of FIG. 7, a stream signal 735 is sent during time period window n+1 (720) that is encoded to indicate the protocol (e.g., PCIe, PCI, IDI, QPI, etc.) that is to apply to the bits sent over data lanes DATA[0-49] during time period window n+1 (720). Similarly, another stream signal 740 can be sent during the subsequent time period window n+2 (725) to indicate the protocol that applies to the bits sent over data lanes DATA[0-49] during time period window n+2 (725), and so on. In some cases, such as the example of FIG. 7 (where both stream signals 735, 740 have the same encoding, binary FF), data in sequential time period windows (e.g., n+1 (720) and n+2 (725)) can belong to the same protocol. However, in other cases, data in sequential time period windows (e.g., n+1 (720) and n+2 (725)) can be from different transactions to which different protocols are to apply, and stream signals (e.g., 735, 740) can be encoded accordingly to identify the different protocols applying to the sequential bytes of data on the data lanes (e.g., DATA[0-49]), among other examples.
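The window-qualification rule illustrated in FIG. 7 (a valid signal asserted in window n qualifies the data observed in window n+1) can be summarized in a short behavioral sketch. This is a model of the rule only, not an implementation of the logical PHY:

    def filter_valid_windows(windows):
        """Keep only data windows preceded by an asserted valid signal.

        Each element of `windows` is a (valid_asserted, data) pair for one
        byte time period; data in window k is accepted only if the valid
        signal was asserted during window k-1.
        """
        accepted = []
        previous_valid = False
        for valid_asserted, data in windows:
            if previous_valid:
                accepted.append(data)   # sink validates the data lanes
            # else: bits on the data lanes (e.g., crosstalk) are disregarded
            previous_valid = valid_asserted
        return accepted

    # Mirrors FIG. 7: valid asserted in windows n and n+1, de-asserted in n+2,
    # so data in n+1 and n+2 is accepted and anything in n+3 is ignored.
    windows = [(True, None), (True, b"data n+1"), (False, b"data n+2"), (False, b"noise")]
    assert filter_valid_windows(windows) == [b"data n+1", b"data n+2"]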
[0073] In some implementations, a low power or idle state can be defined for the MPL. For instance, when neither device on the MPL is sending data, the physical layer (electrical and logical) of the MPL can go to an idle or low power state. For instance, in the example of FIG. 7, at time period window n-2 (745), the MPL is in a quiet or idle state and the strobe is disabled to save power. The MPL can transition out of low-power or idle mode, awaking the strobe at time period window n-1 (e.g., 705). The strobe can complete a transmission preamble (e.g., to assist in waking and synchronizing each of the lanes of the channel, as well as the sink device), beginning the strobe signal prior to any other signaling on the other non-strobe lanes. Following this time period window n-1 (705), the valid signal can be asserted at time period window n (715) to notify the sink that data is forthcoming in the following time period window n+1 (720), as discussed above.

[0074] The MPL may re-enter a low power or idle state (e.g., an L1 state) following the detection of idle conditions on the valid lanes, data lanes, and/or other lanes of the MPL channel. For instance, no signaling may be detected beginning at time period window n+3 (730) and going forward. Logic on either the source or sink device can initiate transition back into a low power state, leading again (e.g., time period window n+5 (755)) to the strobe going idle in a power savings mode, among other examples and principles (including those discussed later herein).

[0075] Electrical characteristics of the physical PHY can include one or more of single-ended signaling, half-rate forwarded clocking, matching of interconnect channel as well as on-chip transport delay of transmitter (source) and receiver (sink), optimized electrostatic discharge (ESD) protection, and pad capacitance, among other features. Further, an MPL can be implemented to achieve higher data rate (e.g., approaching 16 Gb/s) and energy efficiency characteristics than traditional package I/O solutions.

[0076] While the examples of FIGS. 6 and 7 illustrate an example implementation of an MPL as an interconnect to facilitate a high speed, high bandwidth channel between devices supporting more complex features and having the shoreline to support a wide array of pins, including pins for dedicated clock, valid, stream, sideband notification, and data lanes, and even further control lanes such as data bus inversion (DBI) lanes, other implementations may be ill-suited or simply unable to support such MPL configurations. In some implementations, the control functionality of MPL (e.g., provided through the valid, stream, sideband notification, and other signals) may be provided in implementations of MPL which greatly reduce or even forego at least some of the dedicated control lanes provided for in the examples of FIGS. 6 and 7 above. For instance, in some implementations, pulse-amplitude modulation (PAM) may be utilized to encode additional data on one or more of the lanes of the MPL, such as one of the control lanes, the clock (strobe) lane, or even one or more of the data lanes, to implement control signals sent using dedicated lanes in the examples of FIGS. 6 and 7 above.
[0077] For instance, FIG. 8 illustrates a representation 800 showing encoding of data through pulse amplitude modulation. In the particular example of FIG. 8, a PAM-4 modulation scheme is illustrated, through which three non-zero voltage levels (e.g., V1, V2, V3) are defined, beyond the typical binary amplitude found in conventional digital signals. Each of the voltage levels may then be associated with a corresponding data value (represented in this case through one of four 2-bit digital values). For instance, when voltage is zero during a unit interval (or half clock cycle) in which a pulse may be sent, the zero pulse may be interpreted as binary "00". If, instead, a pulse with voltage V1 is received, the pulse may be interpreted as binary "01", voltage V2 may correspond to binary "10", and V3 may correspond to binary "11". Accordingly, in a PAM-4 scheme, over a byte window that includes 8 unit intervals, potentially 4^8 possible encodings may be sent through a pulse amplitude modulated signal. Other PAM schemes may be used, including PAM-3 modulation (for two possible non-zero pulse amplitudes), among other examples.

[0078] Turning to FIG. 9, an example is shown of signaling on another implementation of an MPL. In this example, the signals sent on dedicated lanes STREAM and VALID0 in the example of MPL shown in FIG. 7 are replaced by voltage modulation on the strobe/clock lane CLK. The clock signal, when not encoded with additional control data, may resemble a typical binary clock (effectively modulated in a PAM-2 pattern). In this example, PAM-4 modulation is applied to augment the standard PAM-2 clock signal, such that the higher voltage levels (i.e., on top of the standard clock signal) may be utilized to encode control messages on the clock lane. For instance, these control signals encoded on the clock may be used to support the multiprotocol communication functionality of the MPL. Indeed, the example of FIG. 9 corresponds to the example of FIG. 7, as a valid signal is encoded (at 905) on the clock signal to indicate that valid data can be expected on the data lanes (e.g., DATA[0-2]) in a next 8UI byte window. In one example, a receiving component may determine and decode the signals received in each 8UI byte window.

[0079] In one implementation, UIs encoded to indicate a valid signal may be the first UIs of a byte window. Continuing with the example of FIG. 9, following the valid signal 905, data may be sent in the next byte window on the data lanes and a stream identifier corresponding to the data may be sent in the same byte window (at 910), but encoded on the clock signal. In some instances, a valid signal (for the next byte window) may also be sent in the same byte window as the stream identifier. For instance, in the example of FIG. 9, PAM may be applied to 8UI of clock signal to encode both a valid signal and a stream signal (to indicate a protocol of the data sent in the same byte window) in the same byte window (at 910). For instance, in one example, the first four UI may be used to indicate a valid (or not) and the last four UI of the same window 910 may be used to indicate a stream ID of the data currently being sent in the same byte window 910. In the particular example of FIG. 9, a combined valid/stream ID signal may be encoded on the clock to indicate that data (e.g., 920) of a protocol corresponding to code "FF" is being sent on data lanes DATA[0-2]. In the next window 915, the clock signal may be further encoded through PAM to indicate both a no-valid indication (i.e., that no valid data will be sent in the next byte window) and a stream ID corresponding to another interconnect protocol, indicating that this byte window is being used to send data (e.g., 925) of a different protocol or stream type (e.g., physical layer packet (PLP), PCIe, IDI, UPI, etc.).
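The PAM-4 mapping of FIG. 8 can be made concrete with a small decoding sketch. The numeric voltage values below are placeholders standing in for the 0, V1, V2, and V3 levels of FIG. 8 (the actual levels are implementation-specific), and the function names are illustrative only:

    # Hypothetical voltage levels (in volts) standing in for 0, V1, V2, V3 of FIG. 8.
    PAM4_LEVELS = [0.0, 0.4, 0.8, 1.2]
    PAM4_DIBITS = {0: "00", 1: "01", 2: "10", 3: "11"}

    def decode_ui(sample_voltage: float) -> str:
        """Map a sampled pulse amplitude to its 2-bit value (nearest-level decision)."""
        level = min(range(4), key=lambda i: abs(PAM4_LEVELS[i] - sample_voltage))
        return PAM4_DIBITS[level]

    def decode_byte_window(samples) -> str:
        """Decode 8 unit intervals; note 4^8 = 65,536 possible encodings per window."""
        return "".join(decode_ui(v) for v in samples)

    # One 8UI byte window carries 16 bits under PAM-4 (versus 8 bits under PAM-2).
    assert decode_byte_window([0.0, 0.41, 0.79, 1.2, 0.0, 0.0, 1.19, 0.38]) == "0001101100001101"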
[0080] It should be appreciated that the encodings shown in the example of FIG. 9 are provided as non-limiting examples only. Indeed, a variety of encodings may be defined, allowing one or more control signals to be overlaid on a clock signal to replace one or more dedicated control lanes and support multi-protocol capabilities of the MPL. Indeed, depending on the number of pins available to facilitate an MPL between components in a device, PAM-based encodings may be defined to replace one or more dedicated control lanes by encoding control signals on a clock lane or on another control lane. Further, the available encodings may also be based on the number of protocols to be supported in communications over an MPL connecting two components. For instance, if only two different protocols are to be sent over the MPL, PAM codes may be limited to codes to identify these two protocols (in stream signals), with more codes being defined for implementations where more than two protocols are to be identified in stream signals. Based on the PAM scheme utilized, and the number of UIs in which a PAM-based code may be sent on a particular lane, an encoding space may be defined. For instance, in an implementation where 4UI of a clock lane may be used to encode control signals (such as in the example of FIG. 9), in a PAM-3 scheme one of 2^4, or 16, different codes may be selected to be sent on top of the clock signal in a single byte window. In a PAM-4 scheme, however, an encoding space of 3^4, or 81, different codes may be sent on top of the clock signal in a single byte window. In other cases, such as where each UI of a control lane may be encoded with PAM-based signals, much larger encoding spaces may be available (e.g., 3^8 (6,561) available codes in a PAM-3 scheme and 4^8 (65,536) available codes in a PAM-4 scheme), allowing for more detailed and/or varied signaling to be achieved over a single lane (e.g., for a wide variety of stream, DBI, and valid signaling, including codes communicating a combination of stream information, DBI information, valid, and even data on a single lane in the same byte window), among other examples.
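The encoding-space arithmetic above follows from raising the number of usable amplitude levels per UI to the number of encoded UIs: on a clock lane, only the levels above the clock swing are free for control data (two for PAM-3, three for PAM-4), while on a dedicated control lane every level of every UI is available. A brief check of the figures quoted in the preceding paragraph:

    def encoding_space(usable_levels: int, encoded_uis: int) -> int:
        """Codes available when each encoded UI carries one of `usable_levels` symbols."""
        return usable_levels ** encoded_uis

    # On a clock lane, only levels above the clock swing carry control data:
    assert encoding_space(2, 4) == 16       # PAM-3 over 4UI -> 2^4 codes
    assert encoding_space(3, 4) == 81       # PAM-4 over 4UI -> 3^4 codes
    # On a dedicated control lane, every level of every UI is available:
    assert encoding_space(3, 8) == 6_561    # PAM-3 over 8UI -> 3^8 codes
    assert encoding_space(4, 8) == 65_536   # PAM-4 over 8UI -> 4^8 codes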
[0081] Turning to FIGS. 10A-10B, simplified block diagrams 1000a-b are shown illustrating example systems including a simplified MPL coupling two components that utilizes voltage amplitude modulation to overlay multi-protocol control information on lanes in the MPL. For instance, in the example of FIG. 10A, a system 1000a is shown including an upstream component 1005a and a downstream component 1010a. The upstream and downstream components 1005a, 1010a may be interconnected using an MPL that includes a downstream channel 1015a and an upstream channel 1020a. In contrast to the example MPL illustrated in FIG. 6, in this example, a limited number of physical lanes are provided in each of the downstream and upstream channels. For example, rather than two valid lanes, one stream lane, one sideband control lane, and 50 data lanes being provided per channel, in the example of FIG. 10A, a clock lane and four data lanes are provided per channel 1015a, 1020a, but the multiprotocol windowing of data sent on the channel (as in the examples of FIGS. 6 and 7) may be preserved. For instance, the clock lane in each of the downstream and upstream channels 1015a, 1020a may be utilized not only to send a clock signal corresponding to the link, but also to carry PAM encodings added on top of the clock signal, so as to preserve the clock signal while also communicating additional data, including stream ID information for each window of data (e.g., each byte window) sent over the data lanes of each channel 1015a, 1020a, similar to the example illustrated in FIG. 9. Indeed, encodings may be provided to identify each of multiple different protocols which may be adopted in distinct data windows on the MPL. Each component 1005a, 1010a may be provided with hardware- and/or software-based logic to implement a communication stack to facilitate communications using potentially multiple different protocols. For instance, components 1005a, 1010a may each include MPL PHY logic (e.g., 1025a,b) and MPL logical PHY logic 1030a, 1030b to support the implementation of the MPL and enable the PAM-based encoding and decoding at each component, in addition to the windowing of data sent over the link and the signaling (e.g., valid and stream) controlling the multi-protocol support within the defined windows. MPL PHY and logical PHY logic may be based on the same PHY hardware and code used to implement more complex implementations of the MPL, such as shown in FIG. 6, where dedicated control lanes are provided. To support the multiple protocols, link layer (e.g., 1035a,b) and transaction layer logic (e.g., 1050a,b) may also be provided, as well as LLP generation logic 1060a,b, such as also discussed in connection with the example of FIG. 6.

[0082] Further, additional control information may also be encoded on the clock to identify a valid (or "pre-valid") indicator, identify a sideband notification (e.g., an LSM sideband indicator), and DBI information, among other examples. In the example of FIG. 10A, a sideband channel 1070 may be provided over which state transitions, handshakes, and out-of-band control information may be sent. As in the example of FIG. 6, to indicate that data is or will be sent over the sideband channel 1070, a sideband notification signal may be sent in-band and encoded on top of the clock signal on the clock lane of either the downstream or upstream channel (in addition to the encoding provided to enable stream identification), among other example features. In one example, a sideband notification signal may be sent as a pulse spanning an entire byte window in which sideband data is to be sent (over the sideband channel 1070).

[0083] Turning to the example of FIG. 10B, another implementation of a simplified MPL is shown. In this example, one or more control lanes may be provided in each of the downstream and upstream channels (e.g., 1015b, 1020b). For instance, a control lane may be provided that is not dedicated to providing multi-protocol support for an MPL. Instead, the control lane may be provided for another purpose, such as for DBI signaling. Accordingly, in some implementations, rather than encoding additional MPL control signaling on top of a clock signal, additional control signals may be provided, through amplitude modulation, on top of or in addition to control signals to be sent on a dedicated control lane. Further, multiple dedicated MPL control lanes (e.g., a valid lane, stream lane, and/or sideband notification lane) may be replaced through a single control lane. Additionally, because the control lane does not carry a clock signal, control data may be encoded in each UI of the signaling window (rather than on only those UIs involving a rising clock edge), among other examples. For instance, as shown in the example of FIG. 10B, some implementations may not make use of, and may eliminate, a sideband channel (with the control encodings similarly omitting encodings dedicated to identifying signaling on such sideband channels).
10A-10B illustrate dedicated data lanes, in some implementations, data lanes may also be omitted, replaced, or combined with control or clock lanes, such as in cases (e.g., IoT sensors), where low bandwidth data is sent with limited potential values or at relatively infrequent intervals. For instance, a sensor may only communicate one of a limited set of values over an MPL and this data may be likewise encoded, through amplitude modulation (e.g., PAM-4), on a physical lane that is also used to signal control codes relating to multi-protocol support (e.g., stream identification) on that lane, among other examples.[0084] Turning to the examples of FIGS. 11A-11C, as noted above, in some implementations, a control signal used to identify a stream type of data sent or to be sent over an MPL may be encoded on top of a clock signal on a dedicated clock lane of the MPL. FIGS. 11A-11C illustrate various implementations where control information may be encoded with various types of clock signals (e.g., single-ended, differential, etc.). For instance, as illustrated in FIG. 11A, a single-ended 2UI clock signal 1105 may be encoded with additional information through a PAM-based scheme (e.g., PAM-3). The signal 1105 may be provided to a circuit (e.g., 1110) that includes, as inputs, reference voltages (e.g., Vref1 and Vref2), which may correspond to the voltage levels used in the amplitude modulation encodings (as shown). Comparators may be provided within the circuit to separate the clock signal (e.g., pulses greater than Vref1) from the overlaid control data (e.g., pulses greater than Vref2) to provide a clock output 1115 and control signal output 1120 to be processed by a receiver of the combined clock (e.g., CLK in) signal 1105. [0085] Turning to the example of FIG. 11B, a clock signal may be implemented as a differential clock (e.g., 1125). As with a single-ended clock, a differential clock signal may be amplitude modulated (e.g., at 1125) to encode MPL control information on the same lane as the differential clock signal. An example decoding circuit (e.g., 1130, with a four-input differential amplifier 1135 (e.g., provided for offset correction)) may be provided to again take the clock signal as an input, and separate control data 1140 from the clock (output at 1145). In some cases, amplitude modulation of higher speed clock signals may introduce clock duty cycle jitter. Accordingly, in some implementations, jitter may be reduced for a differential clock by encoding MPL control information in the clock signal through common mode modulation, rather than clock swing, such as shown in the example common-mode-modulated differential signal 1150 in FIG. 11C. The example of FIG. 11C further illustrates circuits 1155 through which a control signal 1160 and the differential clock 1165 may be isolated from the modulated clock signal 1150.[0086] As noted above, in some implementations, MPL control lanes may be replaced by encoding MPL control signals (e.g., stream ID signals) on another control lane, rather than a clock lane. For instance, a simple forwarded clock may be used and one or more control lanes already provided for in an implementation (e.g., for data bus inversion) may be extended to carry PAM-based codes. In one example, a lane may be provided for coordinating DBI on an MPL. DBI is a technique that may be used to strategically invert the data bus during transmissions in order to reduce the number of lane transitions and thereby reduce power.
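As a purely illustrative sketch of the AC DBI decision just described (the function names and the 8-bit bus width are assumptions for illustration, and the transition-minimizing rule shown is one standard DBI policy rather than a requirement of any described embodiment), a transmitter may invert the next bus word whenever sending it unchanged would toggle more than half of the lanes:

```python
# Purely illustrative sketch of an AC DBI decision (function names and
# the 8-bit bus width are assumptions): invert the next word whenever
# sending it unchanged would toggle more than half of the data lanes.

def dbi_encode(prev_word: int, word: int, width: int = 8):
    """Return (word_to_send, dbi_bit), minimizing lane transitions
    relative to the previously transmitted word."""
    mask = (1 << width) - 1
    transitions = bin((prev_word ^ word) & mask).count("1")
    if transitions > width // 2:
        return (~word) & mask, 1   # send the inverted word, assert DBI
    return word & mask, 0          # send the word as-is, DBI deasserted

def dbi_decode(received: int, dbi_bit: int, width: int = 8) -> int:
    """Undo the inversion at the receiver using the DBI control bit."""
    mask = (1 << width) - 1
    return (~received) & mask if dbi_bit else received & mask

# 0x00 -> 0xFE would toggle 7 of 8 lanes, so the word is sent inverted.
sent, dbi = dbi_encode(0x00, 0xFE)
assert (sent, dbi) == (0x01, 1)
assert dbi_decode(sent, dbi) == 0xFE
```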
Accordingly, a DBI value may change from UI to UI and may be advantageously sent on a separate control lane. This DBI lane may be further used for control signal encoding. For instance, as shown in the example of FIG. 12, a control lane may be extended through PAM-based encoding, allowing more information to be sent in a signaling window than in another implementation that may use a clock lane to carry MPL control signals. In some cases, the control lane may be encoded according to an alignment clock, such that an alignment signal or value (e.g., a zero) is sent at a particular frequency. For instance, in the example of FIG. 12, every 4th UI may be a 00 followed by a non-00 value, which allows for easy alignment clock recovery. In such an example (using PAM-4 encoding), in each 4 UI window, the first 3 UI are non-00 values allowing for 3^3 (27) codes. In one example, 16 of the 27 possible codes can be used for DBI and the remaining for MPL control information (and perhaps also data). Through such an example, alignment events may be encoded onto the lane to allow symbol boundaries (e.g., an 8UI 'symbol' or window boundary) to be readily recovered. In some implementations, where comparatively few protocols are to be supported, a single UI (or two consecutive UIs, etc.) may be designated for the encoding of stream ID information. For instance, in one example, stream ID may be encoded at UI 3, UI 7, or some other instance. Symbol boundaries may still be encoded into, and derived from, the preceding control codes (e.g., EarlyValid, preamble, postamble), among other example features. Further, various DBI algorithms may be utilized. Indeed, the algorithm could be changed at the transmitter and communicated to the receiver by an additional control code at the start of transfer. Such implementations may thereby be used as an example mechanism to implement a forward subchannel to change physical link settings dynamically, among other examples.[0087] Additional features can also be optionally implemented in some examples of an MPL to enhance the performance characteristics of the physical link. For instance, line coding can be provided. While mid-rail terminations, such as described above, can allow for DC data bus inversion (DBI) to be omitted, AC DBI can still be used to reduce the dynamic power. More complicated coding can also be used to eliminate the worst case difference of 1's and 0's to reduce, for instance, the drive requirement of the mid-rail regulator, as well as limit I/O switching noise, among other example benefits. Further, transmitter equalization can also be optionally implemented. For instance, at very high data rates, insertion loss can be significant for an in-package channel. A two-tap weight transmitter equalization (e.g., performed during an initial power-up sequence) can, in some cases, be sufficient to mitigate some of these issues, among others.[0088] Turning to FIG. 13, a simplified block diagram 1300 is shown illustrating an example logical PHY of an example MPL. A physical PHY 1305 can connect to a component that includes logical PHY 1310 and additional logic supporting a link layer of the MPL. The die, in this example, can further include logic to support multiple different protocols on the MPL. For instance, in the example of FIG.
13, PCIe logic 1315 can be provided as well as UPI logic 1320, such that the component can communicate using either PCIe or UPI over the same MPL connecting the two components, among potentially many other examples, including examples where more than two protocols or protocols other than PCIe and UPI are supported over the MPL. Various protocols supported between the components can offer varying levels of service and features.[0089] Logical PHY 1310 can include link state machine management logic 1325 for negotiating link state transitions in connection with requests of upper layer logic of the component (e.g., received over PCIe or UPI). Logical PHY 1310 can further include link testing and debug logic (e.g., 1330) in some implementations. As noted above, an example MPL can support control signals that are sent between components over the MPL to facilitate protocol agnostic, high performance, and power efficiency features (among other example features) of the MPL. For instance, logical PHY 1310 can support the generation and sending, as well as the receiving and processing, of valid signals, stream signals, and LSM sideband signals in connection with the sending and receiving of data over data lanes, such as described in examples above.[0090] In some implementations, multiplexing (e.g., 1335) and demultiplexing (e.g., 1340) logic can be included in, or be otherwise accessible to, logical PHY 1310. For instance, multiplexing logic (e.g., 1335) can be used to identify data (e.g., embodied as packets, messages, etc.) that is to be sent out onto the MPL. The multiplexing logic 1335 can identify the protocol governing the data and generate a stream signal that is encoded to identify the protocol. For instance, in one example implementation, the stream signal can be encoded as a byte of two hexadecimal symbols (e.g., UPI: FFh; PCIe: F0h; LLP: AAh; sideband: 55h; etc.) or simply a binary value and can be sent during the same window (e.g., a byte time period window) as the data governed by the identified protocol. Similarly, demultiplexing logic 1340 can be employed to interpret incoming stream signals to decode the stream signal and identify the protocol that is to apply to data concurrently received with the stream signal on the data lanes. The demultiplexing logic 1340 can then apply (or ensure) protocol-specific link layer handling and cause the data to be handled by the corresponding protocol logic (e.g., PCIe logic 1315 or UPI logic 1320).[0091] Logical PHY 1310 can further include link layer packet logic 1350 that can be used to handle various link control functions, including power management tasks, loopback, disable, re-centering, scrambling, etc. LLP logic 1350 can facilitate link layer-to-link layer messages over the MPL, among other functions. Data corresponding to the LLP signaling can also be identified by a stream signal that is encoded to identify that the data lanes carry LLP data. Multiplexing and demultiplexing logic (e.g., 1335, 1340) can also be used to generate and interpret the stream signals corresponding to LLP traffic, as well as cause such traffic to be handled by the appropriate component logic (e.g., LLP logic 1350).
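A minimal sketch of this stream-signal multiplexing and demultiplexing, using the example hexadecimal encodings above (the handler dictionary and function names are hypothetical stand-ins for the protocol-specific logic, e.g., PCIe logic 1315, UPI logic 1320, LLP logic 1350), might look as follows:

```python
# Illustrative sketch of the stream-signal mux/demux described above,
# using the example stream-byte encodings (UPI: FFh; PCIe: F0h;
# LLP: AAh; sideband: 55h). The handler dictionary is a hypothetical
# stand-in for protocol-specific link layer logic.

STREAM_IDS = {"UPI": 0xFF, "PCIE": 0xF0, "LLP": 0xAA, "SIDEBAND": 0x55}
PROTOCOL_BY_ID = {v: k for k, v in STREAM_IDS.items()}

def mux(protocol: str, payload: bytes):
    """Transmit side: pair the data with the stream byte that encodes
    the protocol governing it (sent in the same byte window)."""
    return STREAM_IDS[protocol], payload

def demux(stream_byte: int, payload: bytes, handlers: dict):
    """Receive side: decode the stream byte and hand the payload to the
    corresponding protocol-specific link layer handler."""
    return handlers[PROTOCOL_BY_ID[stream_byte]](payload)

# Hypothetical handlers that simply tag the payload with the protocol.
handlers = {name: (lambda p, n=name: (n, p)) for name in STREAM_IDS}
stream_byte, data = mux("PCIE", b"\x01\x02")
assert demux(stream_byte, data, handlers) == ("PCIE", b"\x01\x02")
```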
Likewise, some implementations of an MPL can include a dedicated sideband (e.g., sideband 1355 and supporting logic), such as an asynchronous and/or lower frequency sideband channel, among other examples.[0092] Logical PHY logic 1310 can further include link state machine management logic that can generate and receive (and use) link state management messaging over a dedicated link state management (LSM) sideband lane. For instance, an LSM sideband lane can be used to perform handshaking to advance link training state, exit out of power management states (e.g., an L1 state), among other potential examples. The LSM sideband signal can be an asynchronous signal, in that it is not aligned with the data, valid, and stream signals of the link, but instead corresponds to signaling state transitions and aligns the link state machine between the two components or chips connected by the link, among other examples. Providing a dedicated LSM sideband lane can, in some examples, allow for traditional squelch and receiver detect circuits of an analog front end (AFE) to be eliminated, among other example benefits. Sideband handshakes can be used to facilitate link state machine transitions between components or chips in a multi-chip package. For instance, signals on the LSM sideband lanes of an MPL can be used to synchronize the state machine transitions across the components. For example, when the conditions to exit a state (e.g., Reset.Idle) are met, the side that met those conditions can assert, on its outbound LSM SB lane, an LSM sideband signal and wait for the remote component to reach the same condition and assert an LSM sideband signal on its LSM SB lane. When both LSM SB signals are asserted the link state machine of each respective component can transition to the next state (e.g., a Reset.Cal state). A minimum overlap time can be defined during which both LSM SB signals should be kept asserted prior to transitioning state. Further, a minimum quiesce time can be defined after LSM SB is de-asserted to allow for accurate turnaround detection. In some implementations, every link state machine transition can be conditioned on and facilitated by such LSM SB handshakes.[0093] Note that the apparatuses, methods, and systems described above may be implemented in any electronic device or system as aforementioned. As specific illustrations, the figures below provide exemplary systems for utilizing the invention as described herein. As the systems below are described in more detail, a number of different interconnects are disclosed, described, and revisited from the discussion above. And as is readily apparent, the advances described above may be applied to any of those interconnects, fabrics, or architectures.[0094] Referring to FIG. 14, an embodiment of a block diagram for a computing system including a multicore processor is depicted. Processor 1400 includes any processor or processing device, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, a handheld processor, an application processor, a co-processor, a system on a chip (SOC), or other device to execute code. Processor 1400, in one embodiment, includes at least two cores— core 1401 and 1402, which may include asymmetric cores or symmetric cores (the illustrated embodiment). However, processor 1400 may include any number of processing elements that may be symmetric or asymmetric.[0095] In one embodiment, a processing element refers to hardware or logic to support a software thread.
Examples of hardware processing elements include: a thread unit, a thread slot, a thread, a process unit, a context, a context unit, a logical processor, a hardware thread, a core, and/or any other element, which is capable of holding a state for a processor, such as an execution state or architectural state. In other words, a processing element, in one embodiment, refers to any hardware capable of being independently associated with code, such as a software thread, operating system, application, or other code. A physical processor (or processor socket) typically refers to an integrated circuit, which potentially includes any number of other processing elements, such as cores or hardware threads.[0096] A core often refers to logic located on an integrated circuit capable of maintaining an independent architectural state, wherein each independently maintained architectural state is associated with at least some dedicated execution resources. In contrast to cores, a hardware thread typically refers to any logic located on an integrated circuit capable of maintaining an independent architectural state, wherein the independently maintained architectural states share access to execution resources. As can be seen, when certain resources are shared and others are dedicated to an architectural state, the line between the nomenclature of a hardware thread and core overlaps. Yet often, a core and a hardware thread are viewed by an operating system as individual logical processors, where the operating system is able to individually schedule operations on each logical processor.[0097] Physical processor 1400, as illustrated in FIG. 14, includes two cores— core 1401 and 1402. Here, cores 1401 and 1402 are considered symmetric cores, i.e. cores with the same configurations, functional units, and/or logic. In another embodiment, core 1401 includes an out-of-order processor core, while core 1402 includes an in-order processor core. However, cores 1401 and 1402 may be individually selected from any type of core, such as a native core, a software managed core, a core adapted to execute a native Instruction Set Architecture (ISA), a core adapted to execute a translated Instruction Set Architecture (ISA), a co-designed core, or other known core. In a heterogeneous core environment (i.e. asymmetric cores), some form of translation, such as binary translation, may be utilized to schedule or execute code on one or both cores. Yet to further the discussion, the functional units illustrated in core 1401 are described in further detail below, as the units in core 1402 operate in a similar manner in the depicted embodiment.[0098] As depicted, core 1401 includes two hardware threads 1401a and 1401b, which may also be referred to as hardware thread slots 1401a and 1401b. Therefore, software entities, such as an operating system, in one embodiment potentially view processor 1400 as four separate processors, i.e., four logical processors or processing elements capable of executing four software threads concurrently. As alluded to above, a first thread is associated with architecture state registers 1401a, a second thread is associated with architecture state registers 1401b, a third thread may be associated with architecture state registers 1402a, and a fourth thread may be associated with architecture state registers 1402b. Here, each of the architecture state registers (1401a, 1401b, 1402a, and 1402b) may be referred to as processing elements, thread slots, or thread units, as described above.
As illustrated, architecture state registers 1401a are replicated in architecture state registers 1401b, so individual architecture states/contexts are capable of being stored for logical processor 1401a and logical processor 1401b. In core 1401, other smaller resources, such as instruction pointers and renaming logic in allocator and renamer block 1430 may also be replicated for threads 1401a and 1401b. Some resources, such as reorder buffers in reorder/retirement unit 1435, ILTB 1420, load/store buffers, and queues may be shared through partitioning. Other resources, such as general purpose internal registers, page-table base register(s), low-level data-cache and data-TLB 1415, execution unit(s) 1440, and portions of out-of-order unit 1435 are potentially fully shared.[0099] Processor 1400 often includes other resources, which may be fully shared, shared through partitioning, or dedicated by/to processing elements. In FIG. 14, an embodiment of a purely exemplary processor with illustrative logical units/resources of a processor is illustrated. Note that a processor may include, or omit, any of these functional units, as well as include any other known functional units, logic, or firmware not depicted. As illustrated, core 1401 includes a simplified, representative out-of-order (OOO) processor core. But an in-order processor may be utilized in different embodiments. The OOO core includes a branch target buffer 1420 to predict branches to be executed/taken and an instruction-translation buffer (I-TLB) 1420 to store address translation entries for instructions.[00100] Core 1401 further includes decode module 1425 coupled to fetch unit 1420 to decode fetched elements. Fetch logic, in one embodiment, includes individual sequencers associated with thread slots 1401a, 1401b, respectively. Usually core 1401 is associated with a first ISA, which defines/specifies instructions executable on processor 1400. Often machine code instructions that are part of the first ISA include a portion of the instruction (referred to as an opcode), which references/specifies an instruction or operation to be performed. Decode logic 1425 includes circuitry that recognizes these instructions from their opcodes and passes the decoded instructions on in the pipeline for processing as defined by the first ISA. For example, as discussed in more detail below, decoders 1425, in one embodiment, include logic designed or adapted to recognize specific instructions, such as transactional instructions. As a result of the recognition by decoders 1425, the architecture or core 1401 takes specific, predefined actions to perform tasks associated with the appropriate instruction. It is important to note that any of the tasks, blocks, operations, and methods described herein may be performed in response to a single or multiple instructions; some of which may be new or old instructions. Note decoders 1426, in one embodiment, recognize the same ISA (or a subset thereof). Alternatively, in a heterogeneous core environment, decoders 1426 recognize a second ISA (either a subset of the first ISA or a distinct ISA).[00101] In one example, allocator and renamer block 1430 includes an allocator to reserve resources, such as register files to store instruction processing results. However, threads 1401a and 1401b are potentially capable of out-of-order execution, where allocator and renamer block 1430 also reserves other resources, such as reorder buffers to track instruction results.
Unit 1430 may also include a register renamer to rename program/instruction reference registers to other registers internal to processor 1400. Reorder/retirement unit 1435 includes components, such as the reorder buffers mentioned above, load buffers, and store buffers, to support out-of-order execution and later in-order retirement of instructions executed out-of-order.[00102] Scheduler and execution unit(s) block 1440, in one embodiment, includes a scheduler unit to schedule instructions/operations on execution units. For example, a floating point instruction is scheduled on a port of an execution unit that has an available floating point execution unit. Register files associated with the execution units are also included to store instruction processing results. Exemplary execution units include a floating point execution unit, an integer execution unit, a jump execution unit, a load execution unit, a store execution unit, and other known execution units.[00103] Lower level data cache and data translation buffer (D-TLB) 1450 are coupled to execution unit(s) 1440. The data cache is to store recently used/operated on elements, such as data operands, which are potentially held in memory coherency states. The D-TLB is to store recent virtual/linear to physical address translations. As a specific example, a processor may include a page table structure to break physical memory into a plurality of virtual pages.[00104] Here, cores 1401 and 1402 share access to higher-level or further-out cache, such as a second level cache associated with on-chip interface 1410. Note that higher-level or further-out refers to cache levels increasing or getting further away from the execution unit(s). In one embodiment, higher-level cache is a last-level data cache— last cache in the memory hierarchy on processor 1400— such as a second or third level data cache. However, higher level cache is not so limited, as it may be associated with or include an instruction cache. A trace cache— a type of instruction cache— instead may be coupled after decoder 1425 to store recently decoded traces. Here, an instruction potentially refers to a macro-instruction (i.e. a general instruction recognized by the decoders), which may decode into a number of micro-instructions (micro-operations).[00105] In the depicted configuration, processor 1400 also includes on-chip interface module 1410. Historically, a memory controller, which is described in more detail below, has been included in a computing system external to processor 1400. In this scenario, on-chip interface 1410 is to communicate with devices external to processor 1400, such as system memory 1475, a chipset (often including a memory controller hub to connect to memory 1475 and an I/O controller hub to connect peripheral devices), a memory controller hub, a northbridge, or other integrated circuit. And in this scenario, bus 1405 may include any known interconnect, such as a multi-drop bus, a point-to-point interconnect, a serial interconnect, a parallel bus, a coherent (e.g. cache coherent) bus, a layered protocol architecture, a differential bus, and a GTL bus.[00106] Memory 1475 may be dedicated to processor 1400 or shared with other devices in a system. Common examples of types of memory 1475 include DRAM, SRAM, non-volatile memory (NV memory), and other known storage devices.
Note that device 1480 may include a graphics accelerator, processor or card coupled to a memory controller hub, data storage coupled to an I/O controller hub, a wireless transceiver, a flash device, an audio controller, a network controller, or other known device.[00107] Recently however, as more logic, components, and devices are being integrated on a single die, such as an SOC, each of these devices may be incorporated on processor 1400. For example, in one embodiment, a memory controller hub is on the same package and/or die with processor 1400. Here, a portion of the core (an on-core portion) 1410 includes one or more controller(s) for interfacing with other devices such as memory 1475 or a graphics device 1480. The configuration including an interconnect and controllers for interfacing with such devices is often referred to as an on-core (or un-core) configuration. As an example, on-chip interface 1410 includes a ring interconnect for on-chip communication and a high-speed serial point-to-point link 1405 for off-chip communication. Yet, in the SOC environment, even more devices, such as the network interface, co-processors, memory 1475, graphics processor 1480, and any other known computer devices/interfaces may be integrated on a single die or integrated circuit to provide a small form factor with high functionality and low power consumption.[00108] In one embodiment, processor 1400 is capable of executing a compiler, optimization, and/or translator code 1477 to compile, translate, and/or optimize application code 1476 to support the apparatus and methods described herein or to interface therewith. A compiler often includes a program or set of programs to translate source text/code into target text/code. Usually, compilation of program/application code with a compiler is done in multiple phases and passes to transform high-level programming language code into low-level machine or assembly language code. Yet, single pass compilers may still be utilized for simple compilation. A compiler may utilize any known compilation techniques and perform any known compiler operations, such as lexical analysis, preprocessing, parsing, semantic analysis, code generation, code transformation, and code optimization.[00109] Larger compilers often include multiple phases, but most often these phases are included within two general phases: (1) a front-end, i.e. generally where syntactic processing, semantic processing, and some transformation/optimization may take place, and (2) a back-end, i.e. generally where analysis, transformations, optimizations, and code generation takes place. Some compilers refer to a middle, which illustrates the blurring of delineation between a front-end and back-end of a compiler. As a result, reference to insertion, association, generation, or other operation of a compiler may take place in any of the aforementioned phases or passes, as well as any other known phases or passes of a compiler. As an illustrative example, a compiler potentially inserts operations, calls, functions, etc. in one or more phases of compilation, such as insertion of calls/operations in a front-end phase of compilation and then transformation of the calls/operations into lower-level code during a transformation phase. Note that during dynamic compilation, compiler code or dynamic optimization code may insert such operations/calls, as well as optimize the code for execution during runtime. As a specific illustrative example, binary code (already compiled code) may be dynamically optimized during runtime.
Here, the program code may include the dynamic optimization code, the binary code, or a combination thereof.[00110] Similar to a compiler, a translator, such as a binary translator, translates code either statically or dynamically to optimize and/or translate code. Therefore, reference to execution of code, application code, program code, or other software environment may refer to: (1) execution of a compiler program(s), optimization code optimizer, or translator either dynamically or statically, to compile program code, to maintain software structures, to perform other operations, to optimize code, or to translate code; (2) execution of main program code including operations/calls, such as application code that has been optimized/compiled; (3) execution of other program code, such as libraries, associated with the main program code to maintain software structures, to perform other software related operations, or to optimize code; or (4) a combination thereof.[00111] Turning to FIG. 15, a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction, where one or more of the interconnects implement one or more features in accordance with one embodiment of the present invention is illustrated. System 1500 includes a component, such as a processor 1502 to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiment described herein. System 1500 is representative of processing systems based on the PENTIUM III™, PENTIUM 4™, Xeon™, Itanium, XScale™ and/or StrongARM™ microprocessors, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 1500 executes a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces, may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software.[00112] Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.[00113] In this illustrated embodiment, processor 1502 includes one or more execution units 1508 to implement an algorithm that is to perform at least one instruction. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments may be included in a multiprocessor system. System 1500 is an example of a 'hub' system architecture. The computer system 1500 includes a processor 1502 to process data signals.
The processor 1502, as one illustrative example, includes a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 1502 is coupled to a processor bus 1510 that transmits data signals between the processor 1502 and other components in the system 1500. The elements of system 1500 (e.g. graphics accelerator 1512, memory controller hub 1516, memory 1520, I/O controller hub 1524, wireless transceiver 1526, Flash BIOS 1528, Network controller 1534, Audio controller 1536, Serial expansion port 1538, I/O controller 1540, etc.) perform their conventional functions that are well known to those familiar with the art.[00114] In one embodiment, the processor 1502 includes a Level 1 (L1) internal cache memory 1504. Depending on the architecture, the processor 1502 may have a single internal cache or multiple levels of internal caches. Other embodiments include a combination of both internal and external caches depending on the particular implementation and needs. Register file 1506 is to store different types of data in various registers including integer registers, floating point registers, vector registers, banked registers, shadow registers, checkpoint registers, status registers, and an instruction pointer register.[00115] Execution unit 1508, including logic to perform integer and floating point operations, also resides in the processor 1502. The processor 1502, in one embodiment, includes a microcode (ucode) ROM to store microcode, which when executed, is to perform algorithms for certain macroinstructions or handle complex scenarios. Here, microcode is potentially updateable to handle logic bugs/fixes for processor 1502. For one embodiment, execution unit 1508 includes logic to handle a packed instruction set 1509. By including the packed instruction set 1509 in the instruction set of a general-purpose processor 1502, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 1502. Thus, many multimedia applications are accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This potentially eliminates the need to transfer smaller units of data across the processor's data bus to perform one or more operations, one data element at a time.[00116] Alternate embodiments of an execution unit 1508 may also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 1500 includes a memory 1520. Memory 1520 includes a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 1520 stores instructions and/or data represented by data signals that are to be executed by the processor 1502.[00117] Note that any of the aforementioned features or aspects of the invention may be utilized on one or more interconnects illustrated in FIG. 15. For example, an on-die interconnect (ODI), which is not shown, for coupling internal units of processor 1502 implements one or more aspects of the invention described above. Or the invention is associated with a processor bus 1510 (e.g.
other known high performance computing interconnect), a high bandwidth memory path 1518 to memory 1520, a point-to-point link to graphics accelerator 1512 (e.g. a Peripheral Component Interconnect express (PCIe) compliant fabric), a controller hub interconnect 1522, an I/O or other interconnect (e.g. USB, PCI, PCIe) for coupling the other illustrated components. Some examples of such components include the audio controller 1536, firmware hub (flash BIOS) 1528, wireless transceiver 1526, data storage 1524, legacy I/O controller 1510 containing user input and keyboard interfaces 1542, a serial expansion port 1538 such as Universal Serial Bus (USB), and a network controller 1534. The data storage device 1524 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.[00118] Referring now to FIG. 16, shown is a block diagram of a second system 1600 in accordance with an embodiment of the present invention. As shown in FIG. 16, multiprocessor system 1600 is a point-to-point interconnect system, and includes a first processor 1670 and a second processor 1680 coupled via a point-to-point interconnect 1650. Each of processors 1670 and 1680 may be some version of a processor. In one embodiment, 1652 and 1654 are part of a serial, point-to-point coherent interconnect fabric, such as a high-performance architecture. As a result, the invention may be implemented within the QPI architecture.[00119] While shown with only two processors 1670, 1680, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor.[00120] Processors 1670 and 1680 are shown including integrated memory controller units 1672 and 1682, respectively. Processor 1670 also includes as part of its bus controller units point-to-point (P-P) interfaces 1676 and 1678; similarly, second processor 1680 includes P-P interfaces 1686 and 1688. Processors 1670, 1680 may exchange information via a point-to-point (P-P) interface 1650 using P-P interface circuits 1678, 1688. As shown in FIG. 16, IMCs 1672 and 1682 couple the processors to respective memories, namely a memory 1632 and a memory 1634, which may be portions of main memory locally attached to the respective processors.[00121] Processors 1670, 1680 each exchange information with a chipset 1690 via individual P-P interfaces 1652, 1654 using point-to-point interface circuits 1676, 1694, 1686, 1698. Chipset 1690 also exchanges information with a high-performance graphics circuit 1638 via an interface circuit 1692 along a high-performance graphics interconnect 1639.[00122] A shared cache (not shown) may be included in either processor or outside of both processors; yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. [00123] Chipset 1690 may be coupled to a first bus 1616 via an interface 1696. In one embodiment, first bus 1616 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.[00124] As shown in FIG. 16, various I/O devices 1614 are coupled to first bus 1616, along with a bus bridge 1618 which couples first bus 1616 to a second bus 1620. In one embodiment, second bus 1620 includes a low pin count (LPC) bus.
Various devices are coupled to second bus 1620 including, for example, a keyboard and/or mouse 1622, communication devices 1627, and a storage unit 1628 such as a disk drive or other mass storage device which often includes instructions/code and data 1630, in one embodiment. Further, an audio I/O 1624 is shown coupled to second bus 1620. Note that other architectures are possible, where the included components and interconnect architectures vary. For example, instead of the point-to-point architecture of FIG. 16, a system may implement a multi-drop bus or other such architecture.[00125] While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.[00126] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. A memory or a magnetic or optical storage such as a disc may be the machine readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.[00127] A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a micro-controller, associated with a non-transitory medium to store code adapted to be executed by the micro-controller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap.
For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.[00128] Use of the phrase 'configured to,' in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still 'configured to' perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate 'configured to' provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner that during operation the 1 or 0 output is to enable the clock. Note once again that use of the term 'configured to' does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.[00129] Furthermore, use of the phrases 'to,' 'capable of/to,' and/or 'operable to,' in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that use of to, capable to, or operable to, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of an apparatus in a specified manner.[00130] A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states. For example, a 1 refers to a high logic level and 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems have been used. For example the decimal number ten may also be represented as a binary value of 1010 and a hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.[00131] Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e. reset, while an updated value potentially includes a low logical value, i.e. set.
Note that any combination of values may be utilized to represent any number of states.[00132] The embodiments of methods, hardware, software, firmware or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine readable, computer accessible, or computer readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage medium; flash memory devices; electrical storage devices; optical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory mediums that may receive information therefrom.[00133] Instructions used to program logic to perform embodiments of the invention may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).[00134] The following examples pertain to embodiments in accordance with this Specification.
Example 1 is an apparatus including physical layer logic to: receive data on a physical link including a plurality of lanes, where the data is received from a particular component on one or more data lanes of the physical link, and receive a stream signal on a particular one of the plurality of lanes of the physical link, where the stream signal is to identify a type of the data on the one or more data lanes, the type is one of a plurality of different types supported by the particular component, and the stream signal is encoded through voltage amplitude modulation on the particular lane.[00135] Example 2 may include the subject matter of example 1, where the physical layer logic is further to send a link state machine management signal over a sideband link in association with a link state transition, and encode a sideband notification signal on the particular lane through voltage amplitude modulation, where the sideband notification signal indicates the sending of the link state machine management signal over the sideband link.[00136] Example 3 may include the subject matter of any one of examples 1-2, where the type includes a protocol associated with the data, and the protocol is a particular one of a plurality of protocols supported by the particular component.[00137] Example 4 may include the subject matter of example 3, where the physical layer logic is further to decode the stream signal to identify which of the plurality of different protocols applies to the data.[00138] Example 5 may include the subject matter of example 4, where the physical layer logic is further to pass the data to upper layer protocol logic corresponding to the particular one of the plurality of protocols identified in the stream signal, and the apparatus further includes upper layer logic of each of the plurality of protocols. 
[00139] Example 6 may include the subject matter of example 5, where the physical layer logic is further to receive a valid signal on the particular lane of the physical link, where the valid signal is to identify that valid data is to follow assertion of the valid signal on the one or more data lanes.[00140] Example 7 may include the subject matter of example 6, where the physical layer logic is further to define a series of data windows in which data is to be sent on the data lanes, and the valid signal is sent in a particular one of the series of data windows.[00141] Example 8 may include the subject matter of example 7, where the valid signal is to be asserted in a window immediately preceding the window in which the data is to be sent.[00142] Example 9 may include the subject matter of example 8, where data is to be ignored on data lanes in a window immediately following a preceding window in which the valid signal is not asserted.[00143] Example 10 may include the subject matter of any one of examples 7-9, where the valid signal is to be encoded on the particular lane through voltage amplitude modulation.[00144] Example 11 may include the subject matter of any one of examples 6-10, where the stream signal is sent in a same one of the series of data windows as the data.[00145] Example 12 may include the subject matter of any one of examples 1-11, where the particular lane includes a clock lane, and the stream signal is encoded on top of a clock signal sent on the clock lane.[00146] Example 13 may include the subject matter of example 12, where the clock includes a single-ended clock.[00147] Example 14 may include the subject matter of example 12, where the clock includes a differential clock.[00148] Example 15 may include the subject matter of example 14, where the voltage amplitude modulation includes modulation of a common mode voltage of the differential clock.[00149] Example 16 may include the subject matter of any one of examples 1-15, where the particular lane includes a control lane and the stream signal and at least one other control signal are to be sent on the control lane.[00150] Example 17 may include the subject matter of example 16, where the control lane includes a lane for data bus inversion (DBI).[00151] Example 18 may include the subject matter of any one of examples 1-17, where the voltage amplitude modulation includes pulse amplitude modulation.[00152] Example 19 is an apparatus including a first transaction layer logic to generate first data according to a first one of a plurality of communication protocols, a second transaction layer logic to generate second data according to a different, second one of the plurality of communication protocols, and physical layer logic.
The physical layer logic can send the first data on one or more data lanes of a physical link including a plurality of lanes, encode a first stream signal on a particular one of the plurality of lanes to indicate that the first data is of the first communication protocol, where the first stream signal is encoded through voltage amplitude modulation on the particular lane, send the second data on one or more data lanes of the physical link, and encode a second stream signal on the particular lane to indicate that the second data is of the second communication protocol, where the second stream signal is encoded through voltage amplitude modulation on the particular lane.[00153] Example 20 may include the subject matter of example 19, where the first data and the first stream signal are sent during a first signaling window and the second data and the second stream signal are sent during a second signaling window.[00154] Example 21 may include the subject matter of example 20, where other information is also sent on the particular lane with the first stream signal during the first signaling window.[00155] Example 22 may include the subject matter of example 21, where the other information includes a clock signal and the particular lane includes a clock lane.[00156] Example 23 may include the subject matter of example 21, where the other information includes a valid signal to indicate that valid data is to be sent in the second signaling window, and the valid signal corresponds to the second data.[00157] Example 24 is a system including a first component and a second component, connected to the first component by a multi-protocol link, where each of the first and second components support data communications in any one of a plurality of communication protocols, and the second component includes physical layer logic to send data on the multi-protocol link to the first component, where the multi-protocol link includes a plurality of lanes and the data is sent on one or more data lanes of the multi-protocol link, and send a stream signal on a particular one of the plurality of lanes of the physical link, where the stream signal is to identify that a particular one of the plurality of communication protocols applies to the data and the stream signal is encoded through voltage amplitude modulation on the particular lane.[00158] Example 25 may include the subject matter of example 24, where the particular lane includes a clock lane and the stream signal is encoded on top of a clock signal sent on the clock lane.[00159] Example 26 may include the subject matter of any one of examples 24-25, where the stream signal is sent concurrently with the data.
[00160] Example 27 may include the subject matter of any one of examples 24-26, where the data includes first data, the stream signal includes a first stream signal, and the physical layer logic is further to: send second data on the multi-protocol link to the first component on one or more of the data lanes of the multi-protocol link, and send a second stream signal on a particular one of the plurality of lanes of the physical link, where the second stream signal is to identify that a second one of the plurality of communication protocols applies to the second data and the second stream signal is encoded through voltage amplitude modulation on the particular lane.[00161] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.[00162] In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
The invention allows the implementation of common wide logic functions using only two function generators of a field programmable gate array. One embodiment of the invention provides a structure for implementing a wide AND-gate in an FPGA configurable logic element (CLE) or portion thereof that includes no more than two function generators. First and second function generators are configured as AND-gates, the output signals (first and second AND signals) being combined in a 2-to-1 multiplexer controlled by the first AND signal, "0" selecting the first AND signal and "1" selecting the second AND signal. Therefore, a wide AND-gate is provided having a number of input signals equal to the total number of input signals for the two function generators. In another embodiment, a wide OR-gate is provided by configuring the function generators as OR-gates and controlling the multiplexer using the second OR signal.
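A purely illustrative behavioral sketch of this construction (Python booleans stand in for the configured function generators and the multiplexer; all names are assumptions for illustration) demonstrates that the multiplexer arrangement described above computes the wide AND and wide OR of all inputs:

```python
# Purely illustrative behavioral model of the two-function-generator
# construction summarized above (names are assumptions; booleans stand
# in for signals).
from itertools import product

def mux2(d0: bool, d1: bool, sel: bool) -> bool:
    """2-to-1 multiplexer: a "0" select picks d0, a "1" select picks d1."""
    return d1 if sel else d0

def wide_and(inputs_a, inputs_b) -> bool:
    f1 = all(inputs_a)             # first function generator as an AND
    f2 = all(inputs_b)             # second function generator as an AND
    return mux2(f1, f2, sel=f1)    # mux controlled by the first AND signal

def wide_or(inputs_a, inputs_b) -> bool:
    f1 = any(inputs_a)             # first function generator as an OR
    f2 = any(inputs_b)             # second function generator as an OR
    return mux2(f1, f2, sel=f2)    # mux controlled by the second OR signal

# Exhaustive check over two 4-input function generators (8 total inputs).
for bits in product([False, True], repeat=8):
    a, b = bits[:4], bits[4:]
    assert wide_and(a, b) == all(bits)
    assert wide_or(a, b) == any(bits)
```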
What is claimed is: 1. A multiplexer for implementing a logic function in a programmable logic device, the multiplexer comprising:a first input terminal; a second input terminal; a select terminal; and a line selectively coupling the select terminal to the first input terminal and further selectively coupling the select terminal to the second input terminal. 2. The multiplexer of claim 1, wherein the first input terminal is a "0" terminal and the second input terminal is a "1" terminal, and wherein if the multiplexer implements an AND gate, then the first input terminal is coupled to the select terminal via the line.3. The multiplexer of claim 1, wherein if the first input terminal is coupled to the select terminal via the line, and if the first input terminal receives a logic one signal, and the multiplexer outputs a signal received on the second input terminal, and if the first input terminal receives a logic zero signal, and the multiplexer outputs the logic zero signal, then the multiplexer implements an AND function. 4. The multiplexer of claim 1, wherein if the first input terminal is coupled to the select terminal via the line, and if the first input terminal receives a first logic signal, and the multiplexer outputs a signal received on the second input terminal, and if the first input terminal receives a second logic signal opposite of the first logic signal, and the multiplexer outputs the second logic signal, then the multiplexer implements an AND function. 5. The multiplexer of claim 1, wherein the first input terminal is coupled to a first function generator.6. The multiplexer of claim 5, wherein the second input terminal is coupled to a second function generator.7. The multiplexer of claim 1, wherein the first input terminal is coupled to a first look-up table.8. The multiplexer of claim 7, wherein the second input terminal is coupled to a second look-up table.9. The multiplexer of claim 1, wherein the line is an interconnect line in the programmable logic device.10. The multiplexer of claim 1, wherein the first input terminal is connected to a first means for implementing logic.11. The multiplexer of claim 10, wherein the second input terminal is connected to a second means for implementing logic.12. A multiplexer for implementing a logic function in a programmable logic device, the multiplexer comprising:a first input terminal; a second input terminal; a select terminal; and a line for selectively coupling the select terminal to one of the first input terminal and the second input terminal, wherein the first input terminal is a "0" terminal and the second input terminal is a "1" terminal, and wherein if the multiplexer implements an OR gate, then the second input terminal is coupled to the select terminal via the line. 13. A multiplexer for implementing a logic function in a programmable logic device, the multiplexer comprising:a first input terminal; a second input terminal; a select terminal; and a line for selectively coupling the select terminal to one of the first input terminal and the second input terminal, wherein if the second input terminal is coupled to the select terminal via the line, and if the second input terminal receives a logic one signal, and the multiplexer outputs the logic one signal, and if the second input terminal receives a logic zero signal, and the multiplexer outputs a signal received on the first input terminal, then the multiplexer implements an OR function. 14. 
A multiplexer for implementing a logic function in a programmable logic device, the multiplexer comprising:a first input terminal; a second input terminal; a select terminal; and a line for selectively coupling the select terminal to one of the first input terminal and the second input terminal, wherein if the second input terminal is coupled to the select terminal via the line, and if the second input terminal receives a first logic signal, and the multiplexer outputs the first logic signal, and if the second input terminal receives a second logic signal opposite that of the first logic signal, and the multiplexer outputs a signal received on the first input terminal, then the multiplexer implements an OR function. 15. A method of implementing a logic function in a programmable logic device using a multiplexer, the method including:providing the multiplexer comprising a first input terminal, a second input terminal, and a select terminal; and programmably selecting one of the first input terminal and the second input terminal and coupling the selected one of the first input terminal and the second input terminal to the select terminal. 16. The method of claim 15, wherein the multiplexer is configured to output a first signal received on the first input terminal if the select terminal receives a logic zero signal and to output a second signal received on the second input terminal if the select terminal receives a logic one signal.17. The method of claim 16, wherein the multiplexer implements an AND function if the first input terminal is coupled to the select terminal.18. A method of implementing a logic function in a programmable logic device using a multiplexer, the method including:providing the multiplexer comprising a first input terminal, a second input terminal, and a select terminal; and programmably selecting one of the first input terminal and the second input terminal and coupling the selected one of the first input terminal and the second input terminal to the select terminal, wherein the multiplexer is configured to output a first signal received on the first input terminal if the select terminal receives a logic zero signal and to output a second signal received on the second input terminal if the select terminal receives a logic one signal; and wherein the multiplexer implements an OR function if the second input terminal is coupled to the select terminal. 19. A method of implementing a logic function in a programmable logic device using a multiplexer, the method including:providing the multiplexer comprising a first input terminal, a second input terminal, and a select terminal; and programmably selecting one of the first input terminal and the second input terminal and coupling the selected one of the first input terminal and the second input terminal to the select terminal, wherein the first input terminal receives a first output signal from a first AND function, the second input terminal receives a second output signal from a second AND function, and an output terminal of the multiplexer provides a wide AND function. 20. 
A method of implementing a logic function in a programmable logic device using a multiplexer, the method including:providing the multiplexer comprising a first input terminal, a second input terminal, and a select terminal; and programmably selecting one of the first input terminal and the second input terminal and coupling the selected one of the first input terminal and the second input terminal to the select terminal, wherein the multiplexer is configured to output a first signal received on the first input terminal if the select terminal receives a logic zero signal and to output a second signal received on the second input terminal if the select terminal receives a logic one signal; and wherein the first input terminal receives a first output signal from a first AND function, the second input terminal receives a second output signal from a second AND function, and an output terminal of the multiplexer provides an AND-OR function. 21. A method of implementing a logic function in a programmable logic device using a multiplexer, the method including:providing the multiplexer comprising a first input terminal, a second input terminal, and a select terminal; and programmably selecting one of the first input terminal and the second input terminal and coupling the selected one of the first input terminal and the second input terminal to the select terminal, wherein the multiplexer is configured to output a first signal received on the first input terminal if the select terminal receives a logic zero signal and to output a second signal received on the second input terminal if the select terminal receives a logic one signal; and wherein the first input terminal receives a first output signal from a first OR function, the second input terminal receives a second output signal from a second OR function, and an output terminal of the multiplexer provides a wide OR function. 22. A method of implementing a logic function in a programmable logic device using a multiplexer, the method including:providing the multiplexer comprising a first input terminal, a second input terminal, and a select terminal; and programmably selecting one of the first input terminal and the second input terminal and coupling the selected one of the first input terminal and the second input terminal to the select terminal, wherein the first input terminal receives a first output signal from a first OR function, the second input terminal receives a second output signal from a second OR function, and an output terminal of the multiplexer provides an OR-AND function. 23. A method of implementing a logic function in a programmable logic device using a multiplexer having a first input terminal, a second input terminal, and a select terminal, the method comprising:implementing a first logic function by programmably coupling the first input terminal to the select terminal; and implementing a second logic function by programmably coupling the second input terminal to the select terminal. 24. A method of providing multiple logic functions in a programmable logic device, the method comprising:providing a multiplexer including a first input terminal, a second input terminal, and a select terminal, wherein a logic function is implemented by programmably coupling the select terminal to the first input terminal; and changing the logic function by programmably coupling the select terminal to the second input terminal. 25. 
The method of claim 24, wherein the programmable coupling is implemented by using an interconnect in the programmable logic device.26. A logic element in a programmable logic device, the logic element comprising:a first function generator having an output terminal; a second function generator having an output terminal; a multiplexer comprising: a first input terminal coupled to the output terminal of the first function generator; a second input terminal coupled to the output terminal of the second function generator; and a select terminal; and a line programmably coupling the select terminal of the multiplexer to the output terminal of the first function generator. 27. The logic element of claim 26, wherein the first input terminal is a "0" terminal and the second input terminal is a "1" terminal, and wherein if the multiplexer implements an AND gate, then the first input terminal is coupled to the select terminal via the line.28. The logic element of claim 26, wherein the first input terminal is a "0" terminal and the second input terminal is a "1" terminal, and wherein if the multiplexer implements an OR gate, then the second input terminal is coupled to the select terminal via the line.29. The logic element of claim 26, wherein if the first input terminal is coupled to the select terminal via the line, and if the first input terminal receives a logic one signal, and the multiplexer outputs a signal received on the second input terminal, and if the first input terminal receives a logic zero signal, and the multiplexer outputs the logic zero signal, then the multiplexer implements an AND function. 30. The logic element of claim 26, wherein if the second input terminal is coupled to the select terminal via the line, and if the second input terminal receives a logic one signal, and the multiplexer outputs the logic one signal, and if the second input terminal receives a logic zero signal, and the multiplexer outputs a signal received on the first input terminal, then the multiplexer implements an OR function. 31. The logic element of claim 26, wherein if the first input terminal is coupled to the select terminal via the line, and if the first input terminal receives a first logic signal, and the multiplexer outputs a signal received on the second input terminal, and if the first input terminal receives a second logic signal opposite of the first logic signal, and the multiplexer outputs the second logic signal, then the multiplexer implements an AND function. 32. The logic element of claim 26, wherein if the second input terminal is coupled to the select terminal via the line, and if the second input terminal receives a first logic signal, and the multiplexer outputs the first logic signal, and if the second input terminal receives a second logic signal opposite that of the first logic signal, and the multiplexer outputs a signal received on the first input terminal, then the multiplexer implements an OR function. 33. The logic element of claim 26, wherein the first function generator is a look-up table.34. The logic element of claim 26, wherein the second function generator is a look-up table.35. The logic element of claim 26, wherein the line is an interconnect line in the programmable logic device.
CROSS-REFERENCE TO RELATED APPLICATIONSThis application is a divisional application of commonly assigned U.S. patent application Ser. No. 09/374,470, invented by Bernard J. New et al., entitled "WIDE LOGIC GATE IMPLEMENTED IN AN FPGA CONFIGURABLE LOGIC ELEMENT" and filed Aug. 13, 1999, now U.S. Pat. No. 6,201,410; which is a continuation-in-part of U.S. patent application Ser. No. 09/283,472, now U.S. Pat. No. 6,051,992, filed Apr. 1, 1999 and invented by Steven P. Young et al., entitled "CONFIGURABLE LOGIC ELEMENT WITH ABILITY TO EVALUATE FIVE AND SIX INPUT FUNCTIONS"; which is a divisional of U.S. patent application Ser. No. 08/835,088, now U.S. Pat. No. 5,920,202, filed Apr. 4, 1997 and invented by Steven P. Young et al., entitled "CONFIGURABLE LOGIC ELEMENT WITH ABILITY TO EVALUATE FIVE AND SIX INPUT FUNCTIONS"; which is a continuation-in-part of U.S. patent application Ser. No. 08/806,997, now U.S. Pat. No. 5,914,616, filed Feb. 26, 1997 and invented by Steven P. Young et al., entitled "FPGA REPEATABLE INTERCONNECT STRUCTURE WITH HIERARCHICAL INTERCONNECT LINES"; all of which are incorporated herein by reference.FIELD OF THE INVENTIONThe invention relates to programmable integrated circuit devices, and more particularly to wide logic gates implemented in a single configurable logic element in a field programmable logic device.BACKGROUND OF THE INVENTIONField Programmable Gate Arrays (FPGAs) typically include an array of tiles. Each tile includes a Configurable Logic Element (CLE) connectable to CLEs in other tiles through programmable interconnect lines. The interconnect lines typically provide for connecting each CLE to each other CLE. Interconnect delays on signals using these interconnect lines, even between adjacent CLEs, are typically much larger than delays on signals that remain within a single CLE. Therefore, it is desirable to implement a logic function in a single CLE whenever possible, rather than spreading out the logic into two or more CLEs.CLEs typically include combinatorial function generators, which are often implemented as 4-input lookup tables. Some CLEs can also implement any 5-input function, and some wider functions, by selecting between the output signals of two 4-input function generators with another CLE input signal. One such CLE, implemented in the Xilinx XC4000(TM)-Series FPGAs, is described in pages 4-11 through 4-23 of the Xilinx Sep. 1996 Data Book entitled "The Programmable Logic Data Book", available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference. (Xilinx, Inc., owner of the copyright, has no objection to copying these and other pages referenced herein but otherwise reserves all copyright rights whatsoever.) A portion of an XC4000-Series CLE that can implement any 5-input function is shown in FIG. 1. The output signals F' and G' of the two function generators F and G can be optionally combined with a third input signal H1 in a third function generator 3H to form output signal 3H'. (In the present specification, the same reference characters are used to refer to terminals, signal lines, and their corresponding signals.) The 3H function generator can implement any function of the three input signals (256 functions), including a 2-to-1 multiplexer that can be used when a 5-input function is desired. 
When function generators F and G share the same four input signals (F1/G1, F2/G2, F3/G3, F4/G4) and function generator 3H is programmed to function as a 2-to-1 multiplexer, output signal 3H' can represent any function of up to five input signals (F1/G1, F2/G2, F3/G3, F4/G4, H1). When the input signals driving function generators F and G are independent, output signal 3H' can represent some functions of up to nine input signals (F1, F2, F3, F4, G1, G2, G3, G4, H1).For example, to implement a wide AND-gate in an XC4000-Series FPGA, all the function generators F, G, 3H can be configured as AND-gates, as shown in FIG. 2A. Function generators F, G are configured as two 4-input AND-gates, while function generator 3H is configured as a 3-input AND-gate. The resulting output signal 3H' is the 9-input AND-function of input signals G1-G4, H1, and F1-F4.Similarly, as shown in FIG. 2B, a 9-input OR-gate can be implemented by configuring all the function generators F, G, 3H as OR-gates. The resulting output signal 3H' is the 9-input OR-function of input signals G1-G4, H1, and F1-F4.Many other 9-input functions can be implemented in an XC4000-Series CLE. These wide logic functions are made possible only by the 3-input function generator 3H. Without the third function generator, the logic functions that can be implemented in a single CLE are much more limited. However, a 3-input function generator requires a great deal more silicon to implement than a more limited function such as, for example, a 2-to-1 multiplexer. Therefore, many CLEs do not include a third function generator as a supplement to a pair of 4-input function generators.Function generator 3H can be replaced by a 2-to-1 multiplexer, with signal H1 selecting between output signals F' and G'. Replacing function generator 3H of FIG. 1 with a 2-to-1 multiplexer reduces the number of supported functions with up to nine input signals, but still provides any function of up to five input signals and reduces the silicon area required to implement a 5-input function generator. The XC3000(TM) family of products from Xilinx, Inc. is one FPGA family that uses two 4-input function generators and a 2-to-1 multiplexer to implement a 5-input function generator. The XC3000 CLE is described in pages 4-294 through 4-295 of the Xilinx September 1996 Data Book entitled "The Programmable Logic Data Book", available from Xilinx, Inc., which pages are incorporated herein by reference.It would be advantageous to be able to implement certain wide logic gates using only two function generators. It is therefore desirable to provide structures and methods for implementing wide logic functions such as wide AND and OR-gates in a CLE while using only two function generators.SUMMARY OF THE INVENTIONA first aspect of the invention provides a structure and method for implementing a wide AND-gate in an FPGA configurable logic element (CLE) or portion thereof that includes no more than two function generators. First and second function generators are configured as AND-gates. The output signals from the function generators (first and second AND signals) are combined in a 2-to-1 multiplexer controlled by the first AND signal. If all input signals to the first function generator are high, then the first AND signal is high, and the multiplexer passes the second AND signal. If at least one of the input signals to the first function generator is low, then the first AND signal is low, and the multiplexer passes the first AND signal, thereby providing a low multiplexer output signal. 
Therefore, a wide AND-gate is provided having a number of input signals equal to the total number of input signals for the two function generators.Another embodiment of the invention provides a structure for generating other wide logic functions incorporating an AND function. For example, an OR-AND structure can be provided by configuring the first and second function generators as OR-gates, then using the 2-to-1 multiplexer (coupled as described above for the wide AND-gate) to AND together the output signals from the function generators.A second aspect of the invention provides a structure and method for implementing a wide OR-gate in an FPGA CLE or portion thereof that includes no more than two function generators. First and second function generators are configured as OR-gates. The output signals from the function generators (first and second OR signals) are combined in a 2-to-1 multiplexer controlled by the second OR signal. If all input signals to the second function generator are low, then the second OR signal is low, and the multiplexer passes the first OR signal. If at least one of the input signals to the second function generator is high, then the second OR signal is high, and the multiplexer passes the second OR signal, thereby providing a high multiplexer output signal. Therefore, a wide OR-gate is provided having a number of input signals equal to the total number of input signals for the two function generators.Another embodiment of the invention provides a structure for generating other wide logic functions incorporating an OR function. For example, an AND-OR structure can be provided by configuring the first and second function generators as AND-gates, then using the 2-to-1 multiplexer (coupled as described above for the wide OR-gate) to OR together the output signals from the function generators.The invention allows the implementation of common wide logic functions in a single CLE. This efficient use of resources is advantageous. For example, two independent 8-input AND-gates can be implemented in a single CLE of a Virtex(TM) FPGA from Xilinx, Inc.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example, and not by way of limitation, in the following figures, in which like reference numerals refer to similar elements.FIG. 1 shows a simplified diagram of a prior art XC4000-Series CLE.FIG. 2A shows a 9-input AND-gate implemented in an XC4000-Series CLE.FIG. 2B shows a 9-input OR-gate implemented in an XC4000-Series CLE.FIG. 3 shows a simplified diagram of a Virtex CLE.FIG. 4 shows an 8-input AND-gate implemented in one slice of a Virtex CLE.FIG. 5 shows an 8-input OR-gate implemented in one slice of a CLE similar to the Virtex CLE.FIG. 6 shows a 16-input AND-gate implemented in two slices of a CLE similar to the Virtex CLE.DETAILED DESCRIPTION OF THE DRAWINGSIn the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced without these specific details.Young et al., in U.S. Pat. Nos. 5,914,616 and 5,920,202 (referenced above), describe a Virtex FPGA wherein a Configurable Logic Element (CLE) is preferably included in each of an array of identical tiles. Young et al.'s CLE includes four 4-input function generators, as shown in FIG. 3. 
The output signals from first and second function generators J, H are combined with an independent input signal BHH in a five-input-function multiplexer F5A to produce an output signal F5A' that can be any function of five input signals. Additionally, multiplexer F5A can provide some functions of up to nine input signals, since none of the input signals are shared between the function generators J, H. The output signals from the third and fourth function generators G, F are similarly combined in five-input-function multiplexer F5B to generate an output signal F5B' that can also be any function of five input signals.The output signals F5A', F5B' of the two five-input-function multiplexers F5A, F5B are then combined with a sixth independent input signal BJJ in a first six-input-function multiplexer F6A, and with a different sixth independent input signal BGG in a second six-input-function multiplexer F6B. The two six-input-function multiplexers F6A, F6B provide two output signals F6A', F6B', respectively. One of the output signals can be any function of six inputs; the other output signal can be a related function of six input signals, where five input signals and two five-input-function multiplexers are shared between the two 6-input functions. When the sixth input signal is also shared, the two 6-input functions are the same, and output signals F6A' and F6B' are the same.Clearly, either of the six-input-function multiplexers F6A, F6B can be combined with the other elements of the CLE to provide either a 6-input AND-gate or a 6-input OR-gate. However, this obvious implementation uses all four of the function generators F, G, H, J, as well as both five-input-function multiplexers F5A, F5B and one of the six-input-function multiplexers F6A, F6B, while providing only a 6-input wide AND or OR-gate. It is desirable to provide an implementation that uses the CLE resources more efficiently.The CLE of FIG. 3 can be viewed as two slices SA, SB. Each slice SA, SB comprises two 4-input function generators (H and J, F and G, respectively), one five-input-function multiplexer (F5A, F5B, respectively), and one six-input-function multiplexer (F6A, F6B, respectively). The two slices are symmetric, and in one embodiment are laid out as mirror images of each other. In one embodiment, the invention provides a structure for implementing an 8-input AND-gate in a single slice of a Virtex CLE, using only two 4-input function generators and one five-input-function multiplexer, as described below in connection with FIG. 4. A similar structure for implementing an 8-input OR-gate is also provided, as described in connection with FIG. 5. Because each of these circuits is implemented in only one slice of the CLE, any combination of two independent such functions can be implemented in a single CLE. Thus, the invention provides a 2-to-1 savings of resources over the implementation that uses the six-input-function multiplexer as described above, while accommodating (in this example) two additional input signals.FIG. 4 shows an 8-input AND-gate implemented in a single slice of a Virtex CLE according to one embodiment of the invention. In this embodiment, first and second function generators J, H are configured as 4-input AND-gates. The output signals from function generators J, H (AND signals AND4J, AND4H, respectively) are combined in a 2-to-1 multiplexer F5A controlled by AND signal AND4J. 
AND signal AND4J is passed by multiplexer F5A if the select signal (also AND4J) is low, while AND signal AND4H is passed if the select signal is high.AND signal AND4J is preferably fed back from the CLE output terminal J' to the multiplexer select terminal BHH using the fastest available interconnect path. For example, AND signal AND4J can exit the CLE at terminal "V" in Young et al.'s FIG. 6A of U.S. Pat. No. 5,914,616, and re-enter the CLE at terminal "BH". In another embodiment, the AND signal AND4J is provided to the multiplexer select terminal BHH via a "fast feedback path", i.e., a high-speed path that bypasses the general interconnect lines, thereby further improving the performance of the wide logic function.If all four input signals to function generator J are high, then AND signal AND4J is high, and multiplexer F5A passes AND signal AND4H to provide output signal ANDOUT. If at least one of the input signals to function generator J is low, then AND signal AND4J is low, and multiplexer F5A passes AND signal AND4J, thereby providing a low output signal ANDOUT. Therefore, an 8-input AND-gate is provided using only two function generators and only half of the Virtex CLE.Note that although this embodiment is described in terms of a Virtex CLE, this embodiment can actually be implemented in any CLE having the elements and output terminals used in the embodiment of FIG. 4.In another embodiment of the invention (not shown), other structures are provided for generating other wide logic functions incorporating an AND function. For example, an OR-AND structure can be provided by configuring function generators H and J as OR-gates, then using multiplexer F5A (coupled as shown in FIG. 4) to AND together the output signals from the function generators.FIG. 5 shows an 8-input OR-gate implemented in a single slice of a CLE according to another embodiment of the invention. In this embodiment, first and second function generators J, H are configured as 4-input OR-gates. The output signals from function generators J, H (OR signals OR4J, OR4H, respectively) are combined in 2-to-1 multiplexer F5A controlled by OR signal OR4H. OR signal OR4J is passed by multiplexer F5A if the select signal (OR4H) is low, while OR signal OR4H is passed if the select signal is high.OR signal OR4H is preferably fed back from the CLE output terminal H' to the multiplexer select terminal BHH using the fastest available interconnect path.If all four input signals to function generator H are low, then OR signal OR4H is low, and multiplexer F5A passes OR signal OR4J to provide output signal OROUT. If at least one of the input signals to function generator H is high, then OR signal OR4H is high, and multiplexer F5A passes OR signal OR4H, thereby providing a high output signal OROUT. Therefore, an 8-input OR-gate is provided using only two function generators.Note that the embodiment of FIG. 5 cannot be implemented exactly as shown in a Virtex CLE, because the second function generator and multiplexer output terminals H' and F5A' are shared: both signals exit the CLE on terminal "V" of the CLE, as shown in Young et al.'s FIG. 6A. However, this embodiment is applicable to a CLE wherein the two output terminals are not shared, and in particular to any CLE having the elements and output terminals used in the embodiment of FIG. 5. Further, a similar 8-input OR-gate can be implemented in a Virtex CLE simply by using the connection scheme shown in FIG. 
4 (i.e., by coupling the output signal J' from the first function generator to the multiplexer select terminal), and inverting the sense of the select signal.In another embodiment of the invention (not shown), other structures are provided for generating other wide logic functions incorporating an OR function. For example, an AND-OR structure can be provided by configuring function generators H and J as AND-gates, then using multiplexer F5A (coupled as shown in FIG. 5) to OR together the output signals from the function generators.Clearly, in order to implement the invention, both the multiplexer output signal and at least one of the function generator output signals must have access to the interconnect structure (e.g., either a fast feedback path or the general interconnect) at the same time. Note that if the sense of the multiplexer select signal is programmable (i.e., programmably either a high or low select signal passes the output signal from the first function generator to the multiplexer output terminal), then either an AND or an OR-function can be implemented, even if only one of the function generator output signals has access to the interconnect structure at the same time as the multiplexer output signal.In another embodiment of the invention, each slice of a CLE is used to implement a wide function as previously described (e.g., each slice is used to implement an 8-input AND gate). The two 8-input signals are then combined in an additional 2-to-1 multiplexer to generate an even wider logic function (e.g., a 16-input AND gate). The additional multiplexer is controlled by a select signal provided by either of the two slices (e.g., by either of the two 8-input AND signals). According to this embodiment, a 16-input AND-gate is implemented as shown in FIG. 6. Note that in order to implement this embodiment, one function generator output signal from each slice, plus three 2-to-1 multiplexer output signals, must have access to the interconnect structure at the same time.In another embodiment, the F6A multiplexer is controlled as previously described to combine the signals generated by multiplexers F5A and F5B, where multiplexers F5A and F5B are conventionally used as simple multiplexers.Using de Morgan's law, an OR-gate can be interpreted as a NAND-gate with inverted input signals, and an AND-gate can be interpreted as a NOR-gate with inverted input signals. Therefore, by using the function generators to provide inversion to the NAND or NOR function input signals, NAND and NOR-gates can also be implemented using the 2-to-1 multiplexer similar to the OR and AND-gates described above.In one embodiment, not all of the function generator input terminals are used; for example, an AND-gate or OR-gate with fewer than eight input signals is generated from two 4-input function generators. In another embodiment, the two function generators have fewer or more than four input signals. In yet another embodiment, the two function generators have or are configured to have different numbers of input signals.Those having skill in the relevant arts of the invention will now perceive various modifications and additions which may be made as a result of the disclosure herein of the preferred embodiment. For example, the above text describes the structures and methods of the invention in the context of the Virtex CLE from Xilinx, Inc. However, the invention can also be applied to other FPGAs and other programmable logic devices. 
Further, CLEs, function generators, and multiplexers other than those described herein can be used to implement the invention. Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection establishes some desired electrical communication between two or more circuit nodes. Such communication may often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art. Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents.
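The behavior described above for FIGS. 4 and 6 can be checked with a short simulation. The following Python sketch models each 4-input function generator as an AND-configured lookup table, feeds the J output back as the F5A select signal as described for FIG. 4, and then cascades two such slices through one additional 2-to-1 multiplexer as described for FIG. 6. The identifier names are illustrative; an exhaustive sweep confirms both the 8-input and the 16-input AND behavior.

from itertools import product

def lut_and4(*inputs):
    """4-input function generator configured as an AND-gate."""
    return int(all(inputs))

def mux2(in0, in1, sel):
    """2-to-1 multiplexer: output in0 when sel is 0, in1 when sel is 1."""
    return in1 if sel else in0

def wide_and8(j_inputs, h_inputs):
    """8-input AND in one slice: F5A selected by the first AND signal."""
    and4j = lut_and4(*j_inputs)   # function generator J
    and4h = lut_and4(*h_inputs)   # function generator H
    return mux2(in0=and4j, in1=and4h, sel=and4j)  # J' fed back to select

# Exhaustive check of the 8-input AND against a direct computation.
for bits in product((0, 1), repeat=8):
    assert wide_and8(bits[:4], bits[4:]) == int(all(bits))

# 16-input AND: combine two 8-input slices with one more 2-to-1 multiplexer,
# again controlled by one of the slice outputs (the FIG. 6 arrangement).
def wide_and16(bits):
    a = wide_and8(bits[:4], bits[4:8])
    b = wide_and8(bits[8:12], bits[12:])
    return mux2(in0=a, in1=b, sel=a)

for bits in product((0, 1), repeat=16):
    assert wide_and16(bits) == int(all(bits))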
A semiconductor device comprising a substrate is provided. The device further comprises a through-substrate via (TSV) extending into the substrate, and a substantially helical conductor disposed around the TSV. The substantially helical conductor can be configured to generate a magnetic field in the TSV in response to a current passing through the helical conductor. More than one TSV can be included, and/or more than one substantially helical conductor can be provided.
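Although the disclosure does not give a closed-form inductance, a substantially helical conductor around a magnetic TSV behaves to first order like a solenoid with a high-permeability core, so the textbook long-solenoid estimate L = mu0 * mu_r * N^2 * A / l suggests why such a structure can achieve a useful inductance in a small footprint. The Python sketch below evaluates that estimate with assumed example dimensions and an assumed relative permeability; none of the numbers are taken from this document.

import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def solenoid_inductance(turns, core_radius_m, length_m, mu_r=1.0):
    """Long-solenoid estimate L = mu0 * mu_r * N^2 * A / l.

    Only a first-order approximation: a short coil around a TSV will
    deviate from this, but it shows the scaling with the number of
    turns and the core permeability.
    """
    area = math.pi * core_radius_m ** 2
    return MU0 * mu_r * turns ** 2 * area / length_m

# Assumed example geometry: 5 turns around a 5 um radius TSV core over a
# 50 um coil length; nickel-like relative permeability of roughly 100.
air_core = solenoid_inductance(5, 5e-6, 50e-6)
magnetic_core = solenoid_inductance(5, 5e-6, 50e-6, mu_r=100.0)
print(f"air core: {air_core * 1e12:.1f} pH, magnetic core: {magnetic_core * 1e12:.1f} pH")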
CLAIMS I/We claim:1. A semiconductor device, comprising:a substrate, a through-substrate via (TSV) extending into the substrate, and a substantially helical conductor disposed around the TSV.2. The semiconductor device of claim 1, wherein the substantially helical conductor is configured to generate a magnetic field in the TSV in response to a current passing through the substantially helical conductor.3. The semiconductor device of claim 2, wherein the substantially helical conductor is configured to induce a change in the magnetic field in the TSV in response to a changing current in the substantially helical conductor.4. The semiconductor device of claim 1, wherein the TSV comprises a ferromagnetic or a ferrimagnetic material.5. The semiconductor device of claim 1, wherein the TSV is separated from the substantially helical conductor by an insulating material.6. The semiconductor device of claim 1, wherein the substantially helical conductor comprises more than one turn around the TSV.7. The semiconductor device of claim 1, wherein the substantially helical conductor is coaxially aligned with the TSV.8. A semiconductor device, comprising:a substrate, a through-substrate via (TSV) extending into the substrate, a first substantially helical conductor disposed around the TSV, and a second substantially helical conductor disposed around the TSV.9. The semiconductor device of claim 8, wherein the first substantially helical conductor is configured to induce a change in a magnetic field in the TSV in response to a first changing current in the first substantially helical conductor, and wherein the second substantially helical conductor is configured to have a second changing current induced therein in response to the change in the magnetic field.10. The semiconductor device of claim 9, further comprising:a third substantially helical conductor disposed around the TSV, wherein the third substantially helical conductor is configured to have a third changing current induced therein in response to the change in the magnetic field.10. The semiconductor device of claim 8, wherein the TSV comprises a ferromagnetic or a ferrimagnetic material.11. The semiconductor device of claim 8, wherein the TSV is separated from the first and second substantially helical conductors by an insulating material.12. The semiconductor device of claim 8, wherein the first substantially helical conductor comprises a different number of turns around the TSV than the second substantially helical conductor.13. The semiconductor device of claim 8, wherein the first and second substantially helical conductors comprise a same number of turns around the TSV.14. The semiconductor device of claim 8, wherein one of the first and second substantially helical conductors comprises more than one turn around the TSV.15. The semiconductor device of claim 8, wherein the first and second substantially helical conductors are coaxially aligned with the TSV.16. The semiconductor device of claim 8, wherein the first substantially helical conductor and the second substantially helical conductor are electrically isolated from each other and from the TSV.17. The semiconductor device of claim 8, wherein the first substantially helical conductor is electrically connected to a power supply and the second substantially helical conductor is electrically connected to a load.18. 
A semiconductor device, comprising:a substrate, a first through-substrate via (TSV) extending into the substrate, a second TSV extending into the substrate, a first substantially helical conductor disposed around the first TSV, and a second substantially helical conductor disposed around the second TSV.19. The semiconductor device of claim 18, wherein the first substantially helical conductor is configured to induce a change in a magnetic field in the first TSV and the second TSV in response to a first changing current in the first substantially helical conductor, and wherein the second substantially helical conductor is configured to have a second changing current induced therein in response to the change in the magnetic field in the second TSV.20. The semiconductor device of claim 19, wherein the first TSV is coupled to the second TSV by an upper coupling member above the first and second substantially helical conductors, and by a lower coupling member below the first and second substantially helical conductors, such that the first and second TSVs and the upper and lower coupling members form a closed path for the magnetic field.21. The semiconductor device of claim 18, wherein each of the first and second TSVs comprises a ferromagnetic or a ferrimagnetic material.22. The semiconductor device of claim 18, wherein the first TSV is electrically insulated from the first substantially helical conductor, and the second TSV is electrically insulated from the second substantially helical conductor.23. The semiconductor device of claim 18, wherein the first substantially helical conductor and the second substantially helical conductor are electrically isolated from each other and from the first and second TSVs.24. The semiconductor device of claim 18, wherein the first substantially helical conductor comprises a different number of turns around the first TSV than the second substantially helical conductor comprises around the second TSV.25. The semiconductor device of claim 18, wherein the first substantially helical conductor comprises a same number of turns around the first TSV as the second substantially helical conductor comprises around the second TSV.26. The semiconductor device of claim 18, wherein the first substantially helical conductor comprises more than one turn around the first TSV.27. The semiconductor device of claim 18, wherein the second substantially helical conductor comprises more than one turn around the second TSV.28. The semiconductor device of claim 18, wherein the first substantially helical conductor is coaxially aligned with the first TSV, and the second substantially helical conductor is coaxially aligned with the second TSV.29. The semiconductor device of claim 18, wherein the first substantially helical conductor is electrically connected to a power supply and the second substantially helical conductor is electrically connected to a load.30. A semiconductor device, comprising:a substrate, a through-substrate via (TSV) extending into the substrate, and a non-planar spiral conductor disposed around the TSV.31. The semiconductor device of claim 30, wherein the non-planar spiral conductor is configured to generate a magnetic field in the TSV in response to a current passing through the non-planar spiral conductor.32. The semiconductor device of claim 31, wherein the non-planar spiral conductor is configured to induce a change in the magnetic field in the TSV in response to a changing current in the non-planar spiral conductor.33. 
The semiconductor device of claim 30, wherein the TSV comprises a ferromagnetic or a ferrimagnetic material.
INDUCTORS WITH THROUGH-SUBSTRATE VIA CORESCROSS-REFERENCE TO RELATED APPLICATION(S)[0001] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "SEMICONDUCTOR DEVICES WITH BACK-SIDE COILS FOR WIRELESS SIGNAL AND POWER COUPLING." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9206.US00.[0002] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "SEMICONDUCTOR DEVICES WITH THROUGH-SUBSTRATE COILS FOR WIRELESS SIGNAL AND POWER COUPLING." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9207.US00.[0003] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "MULTI-DIE INDUCTORS WITH COUPLED THROUGH-SUBSTRATE VIA CORES." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9220.US00.[0004] This application contains subject matter related to a concurrently-filed U.S. Patent Application by Kyle K. Kirby, entitled "3D INTERCONNECT MULTI-DIE INDUCTORS WITH THROUGH-SUBSTRATE VIA CORES." The related application, the disclosure of which is incorporated by reference herein, is assigned to Micron Technology, Inc., and is identified by attorney docket number 10829-9221.US00.TECHNICAL FIELD[0005] The present disclosure generally relates to semiconductor devices, and more particularly relates to semiconductor devices including inductors with through-substrate via cores, and methods of making and using the same.BACKGROUND[0006] As the need for miniaturization of electronic circuits continues to increase, the need to miniaturize various circuit elements, such as inductors, increases apace. Inductors are an important component in many discrete element circuits, such as impedance-matching circuits, linear filters, and various power circuits. Since traditional inductors are bulky components, successful miniaturization of inductors presents a challenging engineering problem.[0007] One approach to miniaturizing an inductor is to use standard integrated circuit building blocks, such as resistors, capacitors, and active circuitry, such as operational amplifiers, to design an active inductor that simulates the electrical properties of a discrete inductor. Active inductors can be designed to have a high inductance and a high Q factor, but inductors fabricated using these designs consume a great deal of power and generate noise. Another approach is to fabricate a spiral-type inductor using conventional integrated circuit processes. Unfortunately, spiral inductors in a single level (e.g., plane) occupy a large surface area, such that the fabrication of a spiral inductor with high inductance can be cost- and size-prohibitive. 
Accordingly, there is a need for other approaches to the miniaturization of inductive elements in semiconductor devices.BRIEF DESCRIPTION OF THE DRAWINGS[0008] Figure 1 is a simplified cross-sectional view of a semiconductor device having an inductor with a TSV core configured in accordance with an embodiment of the present technology.[0009] Figure 2 is a simplified perspective view of a substantially helical conductor disposed around a through-substrate via configured in accordance with an embodiment of the present technology.[0010] Figure 3 is a simplified cross-sectional view of coupled inductors sharing a through-substrate via core configured in accordance with an embodiment of the present technology.[0011] Figure 4 is a simplified cross-sectional view of coupled inductors sharing a through-substrate via core configured in accordance with an embodiment of the present technology.[0012] Figure 5 is a simplified cross-sectional view of a semiconductor device having an inductor with a closed core comprising multiple through-substrate vias configured in accordance with an embodiment of the present technology.[0013] Figure 6 is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0014] Figure 7 is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0015] Figure 8 is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology.[0016] Figure 9 is a simplified cross-sectional view of an inductor with a through-substrate via core configured in accordance with an embodiment of the present technology.[0017] Figure 10 is a simplified cross-sectional view of an inductor with a through-substrate via core configured in accordance with an embodiment of the present technology.[0018] Figure 11 is a simplified perspective view of a substantially helical conductor disposed around a through-substrate via configured in accordance with an embodiment of the present technology.[0019] Figures 12A through 12D are simplified cross-sectional views of an inductor with a through-substrate via core at various stages of a manufacturing process in accordance with an embodiment of the present technology.[0020] Figures 12E and 12F are simplified perspective views of an inductor with a through-substrate via core at various stages of a manufacturing process in accordance with an embodiment of the present technology.[0021] Figure 13 is a flow chart illustrating a method of manufacturing an inductor with a through-substrate via core in accordance with an embodiment of the present technology.DETAILED DESCRIPTION[0022] In the following description, numerous specific details are discussed to provide a thorough and enabling description for embodiments of the present technology. One skilled in the relevant art, however, will recognize that the disclosure can be practiced without one or more of the specific details. In other instances, well-known structures or operations often associated with semiconductor devices are not shown, or are not described in detail, to avoid obscuring other aspects of the technology. 
In general, it should be understood that various other devices, systems, and methods in addition to those specific embodiments disclosed herein may be within the scope of the present technology.[0023] As discussed above, semiconductor devices are continually designed with ever greater needs for inductors with high inductance that occupy a small area. Accordingly, several embodiments of semiconductor devices in accordance with the present technology can provide inductors with through-substrate via cores, which can deliver high inductance while consuming only a small area.[0024] Several embodiments of the present technology are directed to semiconductor devices, systems including semiconductor devices, and methods of making and operating semiconductor devices. In one embodiment, a semiconductor device comprises a substrate (e.g., of silicon, glass, gallium arsenide, organic material, etc.), a through-substrate via (TSV) extending into the substrate, and a substantially helical conductor disposed around the TSV. The substantially helical conductor can be a non-planar spiral configured to generate a magnetic field in the TSV in response to a current passing through the substantially helical conductor. More than one TSV can be included (e.g., to provide a closed core), and/or more than one substantially helical conductor can be provided (e.g., to provide coupled inductors).[0025] Figure 1 is a simplified cross-sectional view of a semiconductor device 10 having an inductor 100 with a TSV core configured in accordance with an embodiment of the present technology. The device 10 has a substrate material 101a and an insulating material 101b, and the inductor 100 has one portion in the substrate material 101a and another portion in the insulating material 101b. For example, the inductor 100 can include a TSV 102 having a first portion in the substrate material 101a and a second portion in the insulating material 101b. The TSV 102 accordingly extends out of the substrate material 101a and into the insulating material 101b. The inductor 100 can further include a substantially helical conductor 103 ("conductor 103") around at least a section of the second portion of the TSV 102 in the insulating material 101b. In the embodiment shown in Figure 1, the conductor 103 is illustrated schematically with five complete turns (103a, 103b, 103c, 103d and 103e) around the TSV 102. The conductor 103 is configured to induce a magnetic field in the TSV 102 in response to a current passing through the conductor 103. The conductor 103 can be operably connected to other circuit elements (not shown) by leads 120a and 120b.[0026] According to one embodiment of the present technology, the substrate material 101a can be any one of a number of substrate materials suitable for semiconductor processing methods, including silicon, glass, gallium arsenide, gallium nitride, organic laminates, molding compounds (e.g., for reconstituted wafers for fan-out wafer-level processing) and the like. As will be readily understood by those skilled in the art, a through-substrate via, such as the TSV 102, can be made by etching a high-aspect-ratio hole into the substrate material 101a and filling it with one or more materials in one or more deposition and/or plating steps. Accordingly, the TSV 102 extends at least substantially into the substrate material 101a, which is unlike other circuit elements that are additively constructed on top of the substrate material 101a. 
For example, the substrate material 101a can be a silicon wafer of about 800 μm thickness, and the TSV 102 can extend from 30 to 100 μm into the substrate material 101a. In other embodiments, a TSV may extend even further into a substrate material (e.g., 150 μm, 200 μm, etc.), or may extend into a substrate material by as little as 10 μm.[0027] The TSV 102 can also include an outer layer 102a and a magnetic material 102b within the outer layer 102a. The outer layer 102a can be a dielectric or insulating material (e.g., silicon oxide, silicon nitride, polyimide, etc.) that electrically isolates the magnetic material 102b from the conductor 103. In accordance with one embodiment of the present technology, the magnetic material 102b of the TSV 102 can be a material with a higher magnetic permeability than the substrate material 101a and/or the insulating material 101b to increase the magnetic field in the TSV 102 when current flows through the conductor 103. The magnetic material 102b can be, for example, ferromagnetic, ferrimagnetic, or a combination thereof. For example, the TSV 102 can include nickel, iron, cobalt, niobium, or an alloy thereof. The TSV 102 can include more than one material, either in a bulk material of a single composition, or in discrete regions of different materials (e.g., coaxial laminate layers). The TSV 102 can include a bulk material with desirable magnetic properties (e.g., elevated magnetic permeability provided by nickel, iron, cobalt, niobium, or an alloy thereof), or can include multiple discrete layers, only some of which are magnetic, in accordance with an embodiment of the present technology.[0028] For example, following a high-aspect-ratio etch and a deposition of an insulator (e.g., insulator 102a), the TSV 102 can be provided in a single metallization step by filling in the insulated opening with a magnetic material. In another embodiment, the TSV 102 can be formed in multiple steps to provide coaxial layers (e.g., two or more magnetic layers separated by one or more non-magnetic layers). For example, multiple conformal plating operations can be performed before a bottom-up fill operation to provide a TSV with a coaxial layer of non-magnetic material separating a core of magnetic material and an outer coaxial layer of magnetic material. In this regard, a first conformal plating step can partially fill and narrow the etched opening with a magnetic material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof), a second conformal plating step can further partially fill and further narrow the opening with a non-magnetic material (e.g., polyimide or the like), and a subsequent bottom-up plating step (e.g., following the deposition of a seed material at the bottom of the narrowed opening) can completely fill the narrowed opening with another magnetic material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof). Such a structure with laminated coaxial layers of magnetic and non-magnetic material can help to reduce eddy current losses in a TSV through which a magnetic flux is passing.[0029] The turns 103a-103e of the conductor 103 are electrically insulated from one another and from the TSV 102. In one embodiment, the insulating material 101b electrically isolates the conductor 103 from the TSV 102. In another embodiment, the conductor 103 can have a conductive inner region 110a covered (e.g., coated) by a dielectric or insulating outer layer 110b. 
For example, the outer layer 110b of the conductor 103 can be an oxide layer, and the inner region 110a can be copper, gold, tungsten, or alloys thereof. One aspect of the conductor 103 is that the individual turns 103a-103e define a non-planar spiral with respect to the longitudinal dimension "L" of the TSV 102. Each subsequent turn 103a-103e is at a different elevation along the longitudinal dimension L of the TSV 102 in the non-planar spiral of the conductor 103.[0030] A conductive winding (e.g., the conductor 103) of an inductor disposed around a TSV magnetic core (e.g., the TSV 102) need not be smoothly helical, in accordance with one embodiment of the present technology. Although the conductor 103 is illustrated schematically and functionally in Figure 1 as having turns that, in cross section, appear to gradually increase in distance from a surface of the substrate, it will be readily understood by those skilled in the art that fabricating a smooth helix with an axis perpendicular to a surface of a substrate presents a significant engineering challenge. Accordingly, a "substantially helical" conductor, as used herein, describes a conductor having turns that are separated along the longitudinal dimension L of the TSV (e.g., the z-dimension perpendicular to the substrate surface), but which are not necessarily smoothly varying in the z-dimension (e.g., the substantially helical shape need not possess arcuate, curved surfaces or a constant pitch angle). Rather, an individual turn of the conductor can have a pitch of zero degrees and the adjacent turns can be electrically coupled to each other by steeply-angled or even vertical connectors (e.g., traces or vias) with a larger pitch, such that a "substantially helical" conductor can have a stepped structure. Moreover, the planar shape traced out by the path of individual turns of a substantially helical conductor need not be elliptical or circular. For the convenience of integration with efficient semiconductor processing methodologies (e.g., masking with cost-effective reticles), individual turns of a substantially helical conductor can trace out a polygonal path in a planar view (e.g., a square, a hexagon, an octagon, or some other regular or irregular polygonal shape around the TSV 102). Accordingly, a "substantially helical" conductor, as used herein, describes a non-planar spiral conductor having turns that trace out any shape in a planar view (e.g., parallel to the plane of the substrate surface) surrounding a central axis, including circles, ellipses, regular polygons, irregular polygons, or some combination thereof.[0031] Figure 2 is a simplified perspective view of a substantially helical conductor 204 ("conductor 204") disposed around a through-substrate via 202 configured in accordance with an embodiment of the present technology. To more easily illustrate the substantially helical shape of the conductor 204 in Figure 2, the substrate material, insulating materials, and other details of the device in which the conductor 204 and the TSV 202 are disposed have been eliminated from the illustration. As can be seen with reference to Figure 2, the conductor 204 is disposed coaxially around the TSV 202. The conductor 204 makes three turns (204a, 204b, and 204c) about the TSV 202. As described above, rather than having a single pitch angle, the conductor 204 has a stepped structure, whereby turns with a pitch angle of 0 (e.g.
, turns lying in a plane of the device 200) are connected by vertical connecting portions that are staggered circumferentially around the turns. In this regard, planar turns 204a and 204b are connected by a vertical connecting portion 206, and planar turns 204b and 204c are connected by a vertical connecting portion 208. This stepped structure facilitates fabrication of the conductor 204 using simpler semiconductor processing techniques (e.g., planar metallization steps for the turns and via formation for the vertical connecting portions). Moreover, as shown in Figure 2, the turns 204a, 204b, and 204c of the conductor 204 trace a rectangular shape around the TSV 202 when viewed in a planar view. [0032] In accordance with one embodiment, the TSV 202 can optionally (e.g., as shown with dotted lines) include a core material 202a surrounded by one or more coaxial layers, such as layers 202b and 202c. For example, the core 202a and the outer coaxial layer 202c can include magnetic materials, while the middle coaxial layer 202b can include a non-magnetic material, to provide a laminate structure that can reduce eddy current losses. Although the TSV 202 is illustrated in Figure 2 as optionally including a three-layer structure (e.g., a core 202a surrounded by two coaxially laminated layers 202b and 202c), in other embodiments any number of coaxial laminate layers can be used to fabricate a TSV. [0033] As shown in the foregoing examples of Figure 1 and Figure 2, the number of turns made by a substantially helical conductor about a TSV can vary in accordance with different embodiments of the technology. Providing more turns can increase the inductance of an inductor compared to having fewer turns, but at an increase in the cost and complexity of fabrication (e.g., more fabrication steps). The number of turns can be as low as one, or as high as is desired. As is shown in the example embodiment of Figure 2, a substantially helical conductor need not make an integer number of turns about a TSV (e.g., the top and/or bottom turn may not be a complete turn). [0034] Although the foregoing embodiments shown in Figures 1 and 2 have illustrated a single inductor with a single substantially helical conductor disposed around a single TSV, other embodiments of the present technology can be configured with more than one substantially helical conductor and/or TSV. For example, Figure 3 is a simplified cross-sectional view of two coupled inductors sharing a common through-substrate via core, configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 3, a device 300 includes a substrate material 301a, an insulating material 301b, and a TSV 302. The TSV 302 extends out of the substrate material 301a and into the insulating material 301b. The device 300 also includes a first substantially helical conductor 303 ("conductor 303") disposed around a first portion of the TSV 302 and a second substantially helical conductor 304 ("conductor 304") disposed around a second portion of the TSV 302. In the illustrated embodiment, the first conductor 303 has two complete turns (303a and 303b) around the TSV 302, and the second conductor 304 has three complete turns (304a, 304b, and 304c) around the TSV 302. The first conductor 303 is operably connected to device pads 330a and 330b by leads 320a and 320b, respectively.
The second conductor 304 can be operably connected by leads 321a and 321b to other circuit elements (not shown), including one or more rectifiers to convert a coupled alternating current to direct current (DC) and one or more capacitors or other filter elements to provide a steady current. [0035] According to one embodiment, the first conductor 303 is configured to induce a magnetic field in the TSV 302 in response to a current passing through the conductor 303 (e.g., provided by a voltage applied across the pads 330a and 330b). By changing the current passing through the first conductor 303 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the TSV 302, which in turn induces a changing current in the second conductor 304. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 303 and another comprising the second conductor 304 (e.g., operating the device 300 as a power transformer). [0036] As is shown in Figure 3, the first conductor 303 and the second conductor 304 have different numbers of turns. As will be readily understood by one skilled in the art, this arrangement allows the device 300 to be operated as a step-up or step-down transformer (depending upon which substantially helical conductor is utilized as the primary winding and which as the secondary winding). For example, the application of a first changing current (e.g., 2 V of alternating current) to the first conductor 303 will induce a second changing current with a higher voltage (e.g., 3 V of alternating current) in the second conductor 304, given the 2:3 ratio of turns between the primary and secondary windings in this configuration. When operated as a step-down transformer (e.g., by utilizing the second conductor 304 as the primary winding, and the first conductor 303 as the secondary winding), the application of a first changing current (e.g., 3 V of alternating current) to the second conductor 304 will induce a changing current with a lower voltage (e.g., 2 V of alternating current) in the first conductor 303, given the 3:2 ratio of turns between the primary and secondary windings in this configuration. [0037] Although Figure 3 illustrates two substantially helical conductors or windings disposed around a TSV at two different heights (e.g., coaxially but not concentrically), in other embodiments, multiple substantially helical conductors with different diameters can be provided at the same height (e.g., with radially-spaced conductive turns in the same layers). As the inductance of a substantially helical conductor depends, at least in part, on its diameter and radial spacing from the TSV around which it is disposed, such an approach can be used where a reduction in the number of layer processing steps is more desirable than an increase in the inductance of the radially-spaced substantially helical conductor. [0038] Although in the example of Figure 3 a pair of coupled inductors is shown with different numbers of turns in their windings, in other embodiments of the present technology coupled inductors can be provided with the same number of turns (e.g., to couple two electrically isolated circuits without stepping up or down the voltage from the primary winding). For example, Figure 4 is a simplified cross-sectional view of coupled inductors sharing a through-substrate via core configured in accordance with an embodiment of the present technology.
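The step-up and step-down behavior described above can be summarized by the ideal-transformer relation (an idealization offered for illustration only; it neglects leakage flux, winding resistance, and core losses):

$$V_s = V_p \cdot \frac{N_s}{N_p}$$

where $V_p$ and $V_s$ are the primary and secondary voltages and $N_p$ and $N_s$ are the corresponding numbers of turns. For the 2:3 configuration above, an applied 2 V alternating signal would ideally induce $2\,\mathrm{V} \times \tfrac{3}{2} = 3\,\mathrm{V}$ on the secondary; a real device delivers somewhat less due to imperfect coupling.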
As can be seen with reference to Figure 4, a device 400 includes a substrate material 401a, an insulating material 401b, and a TSV 402. The TSV 402 extends out of the substrate material 401a and into the insulating material 401b. The device 400 also includes a first substantially helical conductor 403 ("conductor 403") disposed around a first portion of the TSV 402, a second substantially helical conductor 404 ("conductor 404") disposed around a second portion of the TSV 402, and a third substantially helical conductor 405 ("conductor 405") disposed around a third portion of the TSV 402. In the present embodiment, each of the conductors 403, 404, and 405 is shown to include two complete turns (403a and 403b, 404a and 404b, and 405a and 405b, respectively) around the TSV 402. The first conductor 403 is operably connected to device pads 430a and 430b by leads 420a and 420b, respectively. The second conductor 404 can be operably connected to other circuit elements (not shown) by leads 421a and 421b, as can the third conductor 405 by corresponding leads 422a and 422b. [0039] According to one embodiment, the first conductor 403 is configured to induce a magnetic field in the TSV 402 in response to a current passing through the first conductor 403 (e.g., provided by a voltage differential applied across pads 430a and 430b). By changing the current passing through the first conductor 403 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the TSV 402, which in turn induces a changing current in both the second conductor 404 and the third conductor 405. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 403 and others comprising the second and third conductors 404 and 405. [0040] The foregoing example embodiments illustrated in Figures 1 through 4 include inductors having an open core (e.g., a core wherein the magnetic field passes through a higher magnetic permeability material for only part of the path of the magnetic field), but embodiments of the present technology can also be provided with a closed core (e.g., a core in which a substantially continuous path of high magnetic permeability material passes through the middle of a conductive winding). For example, Figure 5 is a simplified cross-sectional view of a semiconductor device 50 having an inductor 500 with a closed core comprising multiple through-substrate vias configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 5, the device 50 includes a substrate material 501a and an insulating material 501b, and the inductor 500 has one portion in the substrate material 501a and another portion in the insulating material 501b. For example, the inductor 500 can include a first TSV 502a and a second TSV 502b, each having a first portion in the substrate material 501a and a second portion in the insulating material 501b. The TSVs 502a and 502b accordingly extend out of the substrate material 501a and into the insulating material 501b. The inductor 500 can further include a substantially helical conductor 503 ("conductor 503") around at least a section of the second portion of the TSV 502a in the insulating material 501b. In the embodiment shown in Figure 5, the conductor 503 has five turns (503a, 503b, 503c, 503d, and 503e) around the TSV 502a.
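As a purely illustrative first-order estimate (a long-solenoid approximation offered for context only; the actual inductance of the inductor 500 depends on the detailed geometry, the winding-to-core spacing, and the operating frequency), the inductance of a winding such as the conductor 503 scales as

$$L \approx \frac{\mu_r \mu_0 N^2 A}{\ell}$$

where $N$ is the number of turns (five for the conductor 503), $A$ is the effective cross-sectional area of the core, $\ell$ is the length of the winding along the TSV 502a, and $\mu_r$ is the relative permeability of the core material. A core with $\mu_r \gg 1$ (e.g., nickel or another ferromagnetic material) can thus raise the inductance by roughly the factor $\mu_r$ relative to a non-magnetic core with $\mu_r \approx 1$.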
The TSVs 502a and 502b are coupled above the conductor 503 by an upper coupling member 550a, and are coupled below the conductor 503 by a lower coupling member 550b. [0041] The upper coupling member 550a and the lower coupling member 550b can include a magnetic material having a magnetic permeability higher than that of the substrate material 501a and/or the insulating material 501b. The magnetic material of the upper and lower coupling members 550a and 550b can be either the same material as the TSVs 502a and 502b, or a different material. The magnetic material of the upper and lower coupling members 550a and 550b can be a bulk material (e.g., nickel, iron, cobalt, niobium, or an alloy thereof), or a laminated material with differing layers (e.g., of magnetic material and non-magnetic material). Laminated layers of magnetic and non-magnetic material can help to reduce eddy current losses in the upper and lower coupling members 550a and 550b. In accordance with one aspect of the present technology, the first TSV 502a, the second TSV 502b, the upper coupling member 550a, and the lower coupling member 550b can together provide a closed path for the magnetic field induced by the conductor 503 (illustrated with magnetic field lines, such as magnetic field line 560), such that the inductance of the inductor 500 is greater than it would be if only the first TSV 502a were provided. [0042] Although in the example embodiment illustrated in Figure 5 an inductor with a completely closed core is illustrated, in other embodiments one or both of the upper and lower coupling members could be omitted. In such an embodiment, a second TSV with elevated magnetic permeability could be situated near a first TSV around which a winding is disposed to provide an open core embodiment with improved inductance over the single-TSV embodiments illustrated in Figures 1 through 4. [0043] According to one embodiment, a closed magnetic core as illustrated by way of example in Figure 5 can provide additional space in which one or more additional windings can be disposed (e.g., to provide a transformer or power coupling). For example, Figure 6 is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 6, a device 600 includes a substrate material 601a, an insulating material 601b, and two TSVs 602a and 602b. The TSVs 602a and 602b extend out of the substrate material 601a and into the insulating material 601b. The device 600 also includes a first substantially helical conductor 603 ("conductor 603") with six turns disposed around the first TSV 602a, and a second substantially helical conductor 604 ("conductor 604"), also with six turns, disposed around the second TSV 602b. The first conductor 603 is connected to other circuit elements (not shown) by leads 620a and 620b. The second conductor 604 is connected to pads 631a and 631b on a top surface of the device 600 by leads 621a and 621b, respectively.
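In an idealized lumped-element view (offered for illustration only; the coupling coefficient depends strongly on the core and winding geometry), the coupling between two such windings can be characterized by a mutual inductance

$$M = k\sqrt{L_1 L_2}, \qquad 0 \le k \le 1,$$

where $L_1$ and $L_2$ are the self-inductances of the conductors 603 and 604 and $k$ is the coupling coefficient. A closed high-permeability flux path, such as the one formed by the TSVs 602a and 602b together with the coupling members described below, drives $k$ toward unity.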
The TSVs 602a and 602b are coupled (a) above the first and second conductors 603 and 604 by an upper coupling member 650a, and (b) below the first and second conductors 603 and 604 by a lower coupling member 650b. [0044] According to one embodiment, the first conductor 603 is configured to induce a magnetic field in the first and second TSVs 602a and 602b (as well as in the upper and lower coupling members 650a and 650b) in response to a current passing through the first conductor 603 (e.g., provided by a voltage applied across leads 620a and 620b). By changing the current passing through the first conductor 603 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the first and second TSVs 602a and 602b (as well as in the upper and lower coupling members 650a and 650b), which in turn induces a changing current in the second conductor 604. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 603 (e.g., in a device electrically coupled to leads 620a and 620b) and another circuit comprising the second conductor 604 (e.g., in a device in another die electrically coupled via the pads 631a and 631b). [0045] Although in the embodiment illustrated in Figure 6 two coupled inductors on proximate TSVs are shown with the same number of turns, in other embodiments of the present technology different numbers of windings can be provided on similarly-configured inductors. For example, Figure 7 is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 7, the device 700 includes a substrate material 701a, an insulating material 701b, and two TSVs 702a and 702b. The TSVs 702a and 702b extend out of the substrate material 701a and into the insulating material 701b. The device 700 also includes a first substantially helical conductor 703 ("conductor 703") with six turns disposed around the first TSV 702a, and a second substantially helical conductor 704 ("conductor 704") with four turns disposed around the second TSV 702b. The first conductor 703 can be operably connected to other circuit elements (not shown) by leads 720a and 720b. The second conductor 704 can be operably connected to other circuit elements (not shown) by leads 721a and 721b. The first and second TSVs 702a and 702b are coupled above the first and second conductors 703 and 704 by an upper coupling member 750a, and are coupled below the first and second conductors 703 and 704 by a lower coupling member 750b. [0046] According to one embodiment, the first conductor 703 is configured to induce a magnetic field in the first and second TSVs 702a and 702b (as well as in the upper and lower coupling members 750a and 750b) in response to a current passing through the first conductor 703. By changing the current passing through the first conductor 703 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the first and second TSVs 702a and 702b (as well as in the upper and lower coupling members 750a and 750b, as shown above with reference to Figure 5), which in turn induces a changing current in the second conductor 704. In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 703 (e.g.
, in a device electrically coupled via leads 720a and 720b) and another circuit comprising the second conductor 704 (e.g., in a device electrically coupled via leads 721a and 721b). [0047] The first conductor 703 and the second conductor 704 shown in Figure 7 have different numbers of turns. As will be readily understood by one skilled in the art, this arrangement allows the device 700 to be operated as a step-up or step-down transformer (depending upon which conductor is utilized as the primary winding and which as the secondary winding). For example, the application of a first changing current (e.g., 3 V of alternating current) to the first conductor 703 will induce a second changing current with a lower voltage (e.g., 2 V of alternating current) in the second conductor 704, given the 6:4 ratio of turns between the primary and secondary windings in this configuration. [0048] Although in the embodiments illustrated in Figures 6 and 7 devices with an equal number of TSVs and windings are illustrated, other embodiments of the present technology can provide more than one winding on either or both of a pair of proximate or coupled TSVs. For example, Figure 8 is a simplified cross-sectional view of coupled inductors with through-substrate via cores configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 8, the device 800 includes a substrate material 801a, an insulating material 801b, and two TSVs 802a and 802b. The TSVs 802a and 802b extend out of the substrate material 801a and into the insulating material 801b. The device 800 also includes a first substantially helical conductor 803 ("conductor 803") with three turns around a first portion of the first TSV 802a, and a second substantially helical conductor 804 ("conductor 804") with two turns around a second portion of the first TSV 802a. The device 800 further includes a third substantially helical conductor 805 ("conductor 805") with six turns around the second TSV 802b. The first conductor 803 can be operably connected to other circuit elements (not shown) by leads 820a and 820b, the second conductor 804 can be operably connected to other circuit elements (not shown) by leads 821a and 821b, and the third conductor 805 can be operably connected to other circuit elements (not shown) by leads 822a and 822b. The first and second TSVs 802a and 802b are coupled (a) above the three conductors 803, 804, and 805 by an upper coupling member 850a, and (b) below the three conductors 803, 804, and 805 by a lower coupling member 850b. [0049] According to one embodiment, the first conductor 803 is configured to induce a magnetic field in the first and second TSVs 802a and 802b (as well as in the upper and lower coupling members 850a and 850b) in response to a current passing through the first conductor. By changing the current passing through the first conductor 803 (e.g., by applying an alternating current, or by repeatedly switching between high and low voltage states), a changing magnetic field can be induced in the first TSV 802a and the second TSV 802b (as well as in the upper and lower coupling members 850a and 850b), which in turn induces a second changing current in the second conductor 804 and a third changing current in the third conductor 805.
In this fashion, signals and/or power can be coupled between a circuit comprising the first conductor 803 and other circuits comprising the second conductor 804 and the third conductor 805. [0050] Although in the embodiments illustrated in Figures 5 through 8 a single additional TSV is provided to enhance the magnetic permeability of the return path for the magnetic field generated by a primary winding around a TSV, in other embodiments of the present technology multiple return path TSVs can be provided to further improve the inductance of the inductors so configured. For example, Figure 9 is a simplified cross-sectional view of a semiconductor device 90 including an inductor 900 with a closed core configured in accordance with an embodiment of the present technology. Referring to Figure 9, the device 90 includes a substrate material 901a and an insulating material 901b, and the inductor 900 has one portion in the substrate material 901a and another portion in the insulating material 901b. For example, the inductor 900 can include three TSVs 902a, 902b, and 902c, each having a first portion in the substrate material 901a and a second portion in the insulating material 901b. The three TSVs 902a, 902b, and 902c accordingly extend out of the substrate material 901a and into the insulating material 901b. The inductor 900 can further include a substantially helical conductor 903 ("conductor 903") with five turns around the first TSV 902a. The three TSVs 902a, 902b, and 902c are coupled (a) above the conductor 903 by an upper coupling member 950a, and (b) below the conductor 903 by a lower coupling member 950b. In accordance with one aspect of the present technology, the three TSVs 902a, 902b, and 902c, together with the upper and lower coupling members 950a and 950b, provide a closed path for the magnetic field generated by the conductor 903 such that the inductance of the inductor 900 is greater than it would be if only the first TSV 902a were provided. [0051] Although in the example embodiment illustrated in Figure 9 an inductor with a completely closed core (e.g., a core in which a continuous path of high magnetic permeability material passes through the middle of a winding) is illustrated, in other embodiments one or both of the upper and lower coupling members could be omitted. In such an embodiment, multiple additional TSVs (e.g., in addition to the TSV around which the winding is disposed) with elevated magnetic permeability could be situated near the TSV around which the winding is disposed to provide an open core embodiment with improved inductance. [0052] For example, Figure 10 is a simplified cross-sectional view of a semiconductor device 1010 including an inductor 1000 with a through-substrate via core configured in accordance with an embodiment of the present technology. In this embodiment, the device 1010 includes a substrate material 1001a and an insulating material 1001b, and the inductor 1000 has one portion in the substrate material 1001a and another portion in the insulating material 1001b. For example, the inductor 1000 can include three TSVs 1002a, 1002b, and 1002c that each have a first portion in the substrate material 1001a and a second portion in the insulating material 1001b. The three TSVs 1002a, 1002b, and 1002c accordingly extend out of the substrate material 1001a and into the insulating material 1001b. The inductor 1000 can further include a substantially helical conductor 1003 ("conductor 1003") with five turns around the first TSV 1002a.
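The contribution of such additional TSVs can be viewed through a simple magnetic-circuit analogy (an idealization that ignores leakage and fringing): the flux generated by a winding of $N$ turns carrying a current $I$ is

$$\Phi = \frac{N I}{\mathcal{R}_{\mathrm{total}}},$$

where $\mathcal{R}_{\mathrm{total}}$ is the total reluctance of the flux path. High-permeability TSVs in the return path lower $\mathcal{R}_{\mathrm{total}}$, increasing $\Phi$ for a given current and therefore increasing the inductance $L = N\Phi/I$.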
In accordance with one aspect of the present technology, the additional TSVs 1002b and 1002c contribute to a high magnetic permeability path for the magnetic field induced by the conductor 1003 (and illustrated with magnetic field lines, such as magnetic field line 1060) such that the inductance of the inductor 1000 is greater than it would be if only the first TSV 1002a were provided. [0053] Although in the foregoing examples set forth in Figures 1 to 10 each substantially helical conductor has been illustrated as having a single turn about a TSV at a given distance from the surface of the substrate, in other embodiments a substantially helical conductor can have more than one turn about a TSV at the same distance from the substrate surface (e.g., multiple turns arranged coaxially at each level). For example, Figure 11 is a simplified perspective view of a substantially helical conductor 1104 ("conductor 1104") disposed around a through-substrate via 1102 configured in accordance with an embodiment of the present technology. As can be seen with reference to Figure 11, the conductor 1104 includes a first substantially helical conductor 1104a ("conductor 1104a") disposed around the TSV 1102, which is connected to a second coaxially-aligned substantially helical conductor 1104b ("conductor 1104b"), such that a single conductive path winds downward around the TSV 1102 at a first average radial distance, and winds back upward around the TSV 1102 at a second average radial distance. Accordingly, the conductor 1104 includes two turns about the TSV 1102 (e.g., the topmost turn of the conductor 1104a and the topmost turn of the conductor 1104b) at the same position along the longitudinal dimension "L" of the TSV 1102. In another embodiment, a substantially helical conductor could make two turns about a TSV at a first level (e.g., spiraling outward), two turns about a TSV at a second level (e.g., spiraling inward), and so on in a similar fashion for as many turns as are desired. [0054] Figures 12A-12F are simplified views of a device 1200 having an inductor with a through-substrate via core in various states of a manufacturing process in accordance with an embodiment of the present technology. In Figure 12A, a substrate 1201 is provided in anticipation of further processing steps. The substrate 1201 may be any one of a number of substrate materials, including silicon, glass, gallium arsenide, gallium nitride, an organic laminate, or the like. In Figure 12B, a first turn 1203 of a substantially helical conductor has been disposed in a layer of the insulating material 1202 over the substrate 1201. The insulating material 1202 can be any one of a number of insulating materials which are suitable for semiconductor processing, including silicon oxide, silicon nitride, polyimide, or the like. The first turn 1203 can be any one of a number of conducting materials which are suitable for semiconductor processing, including copper, gold, tungsten, alloys thereof, or the like. [0055] In Figure 12C, a second turn 1204 of the substantially helical conductor has been disposed in the now thicker layer of the insulating material 1202, and spaced from the first turn 1203 by a layer of the insulating material 1202. The second turn 1204 is electrically connected to the first turn 1203 by a first via 1205. A second via 1206 has also been provided to route an end of the first turn 1203 to an eventual higher layer of the device 1200.
In Figure 12D, a third turn 1207 of the substantially helical conductor has been disposed in the now thicker layer of the insulating material 1202, and spaced from the second turn 1204 by a layer of the insulating material 1202. The third turn 1207 is electrically connected to the second turn 1204 by a third via 1208. The second via 1206 has been further extended to continue routing an end of the first turn 1203 to an eventual higher layer of the device 1200.[0056] Turning to Figure 12E, the device 1200 is illustrated in a simplified perspective view after an opening 1209 has been etched through the insulating material 1202 and into the substrate 1201. The opening 1209 is etched substantially coaxially with the turns 1203, 1204 and 1207 of the substantially helical conductor using any one of a number of etching operations capable of providing a substantially vertical opening with a high aspect ratio. For example, deep reactive ion etching, laser drilling, or the like can be used to form the opening 1209. In Figure 12F, a TSV 1210 has been disposed in the opening 1209. The TSV 1210 can include a magnetic material (e.g., a material with a higher magnetic permeability than the substrate 1201 and/or the insulating material 1202) to increase the magnetic field in the TSV 1210 when current is flowing through the substantially helical conductor. The magnetic material can be ferromagnetic, ferrimagnetic, or a combination thereof. The TSV 1210 can include more than one material, either in a bulk material of a single composition, or in discrete regions of different materials (e.g., coaxial laminate layers). For example, the TSV 1210 can include nickel, iron, cobalt, niobium, or an alloy thereof. Laminated layers of magnetic and non-magnetic material can help to reduce eddy current losses in the TSV 1210. The TSV 1210 can be provided in a single metallization step filling in the opening 1209, or in multiple steps of laminating layers (e.g., multiple magnetic layers separated by non-magnetic layers). In one embodiment, to provide a TSV with a multiple layer structure, a mixture of conformal and bottom-up fill plating operations can be utilized (e.g., a conformal plating step to partially fill and narrow the etched opening with a first material, and a subsequent bottom-up plating step to completely fill the narrowed opening with a second material).[0057] Figure 13 is a flow chart illustrating a method of manufacturing an inductor with a through-substrate via core in accordance with an embodiment of the present technology. The method begins in step 1310, in which a substrate is provided. In step 1320, a substantially helical conductor is disposed in an insulating material over the substrate. In step 1330, an opening is etched through the insulating material and into the substrate along an axis of the substantially helical conductor. In step 1340, a TSV is disposed into the opening.[0058] From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
Dynamic interrupt steering remaps the handling of interrupts away from processor units executing important workloads. During the operation of a computing system, important workload utilization rates for processor units handling interrupts are determined, and those processor units with utilization rates above a threshold value are made unavailable for handling interrupts. Interrupts are dynamically remapped to processor units available for interrupt handling based on processor unit idle state and, in the case of heterogeneous computing systems, processor unit type. A processor unit is capable of idle state demotion: in response to receiving a request to enter a deep idle state, the processor unit determines whether its interrupt handling rate is greater than a threshold value and, if so, places itself into a shallower idle state than requested. This prevents the computing system from incurring the expensive idle state exit latency and power costs associated with exiting from a deep idle state.
CLAIMSWe claim:1. A method comprising: during operation of a computing device: determining that an important workload utilization rate for a first processor unit exceeds an important workload utilization threshold value, the first processor unit designated to handle one or more interrupt types; and remapping handling of the one or more interrupt types from the first processor unit to a second processor unit.2. The method of claim 1, further comprising: receiving an interrupt having an interrupt type of one of the interrupt types; and handling the interrupt by the second processor unit.3. The method of claim 1 or 2, wherein the remapping comprises modifying an interrupt remapping table.4. The method of any one of claims 1-3, further comprising determining the important workload utilization rate for the first processor unit.5. The method of claim 4, wherein the important workload utilization rate is determined based on a process priority or a thread priority associated with instructions performed by the first processor unit over a time interval for which the important workload utilization rate is determined.6. The method of claim 4, wherein the important workload utilization rate is determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are associated with a foreground application.7. The method of any one of claims 1-6, wherein the second processor unit belongs to a plurality of processor units, the method further comprising selecting the second processor unit from the plurality of processor units.8. The method of claim 7 wherein the second processor unit is selected from the plurality of processor units based on an idle state of the second processor unit being shallower than an idle state of at least one other processor unit of the plurality of processor units.9. The method of claim 7, wherein the second processor unit is selected based on a processor unit type of the second processor unit.10. The method of any one of claims 1-9, wherein the important workload utilization rate is a first important workload utilization rate determined over a first time interval, the method further comprising: determining that a second important workload utilization rate for the first processor unit determined over a second time interval does not exceed the important workload utilization threshold value, the second time interval occurring later than the first time interval; and remapping handling of the one or more interrupt types from the second processor unit back to the first processor unit.11. A computing device comprising: one or more processor units; and one or more non-transitory computer-readable media having instructions stored thereon that, when executed, cause the one or more processor units to: during operation of the computing device: determine that an important workload utilization rate for a first processor unit exceeds an important workload utilization threshold value, the first processor unit designated to handle one or more interrupt types; and remap handling of the one or more interrupt types from the first processor unit to a second processor unit, wherein the one or more processor units comprise the first processor unit.12. The computing device of claim 11, wherein the instructions are to further cause the one or more processor units to: receive an interrupt having an interrupt type of one of the interrupt types; and handle the interrupt by the second processor unit.13. 
The computing device of claim 11 or 12, wherein to remap handling of the one or more interrupt types comprises to modify an interrupt remapping table.14. The computing device of any one of claims 11-13, wherein the instructions are to further cause the one or more processor units to determine the important workload utilization rate for the first processor unit.15. The computing device of claim 14, wherein the important workload utilization rate is to be determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are associated with a user-initiated task.16. The computing device of claim 14, wherein the important workload utilization rate is to be determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are operating at an elevated privilege.17. The computing device of any one of claims 11-16, wherein the second processor unit belongs to a plurality of processor units, wherein the instructions are to further cause the one or more processor units to select the second processor unit from the plurality of processor units.18. The computing device of claim 17, wherein the second processor unit is to be selected from the plurality of processor units based on an idle state of the second processor unit being shallower than an idle state of at least one other processor unit of the plurality of processor units.19. The computing device of any one of claims 11-18, wherein the important workload utilization rate is a first important workload utilization rate determined over a first time interval, the instructions are to further cause the one or more processor units to: determine that a second important workload utilization rate for the first processor unit determined over a second time interval does not exceed the important workload utilization threshold value, the second time interval occurring later than the first time interval; and remap handling of the one or more interrupt types from the second processor unit back to the first processor unit.20. The computing device of any one of claims 11-19, wherein the first processor unit and the second processor unit have different processor unit types.21. The computing device of any one of claims 11-20, wherein the first processor unit and the second processor unit are part of an integrated circuit component.22. One or more non-transitory computer-readable storage media having instructions stored thereon that, when executed, cause one or more processor units of a computing device to: during operation of the computing device: determine that an important workload utilization rate for a first processor unit exceeds an important workload utilization threshold value, the first processor unit designated to handle one or more interrupt types; and remap handling of the one or more interrupt types from the first processor unit to a second processor unit, wherein the first processor unit and the second processor unit belong to the one or more processor units.23. The one or more non-transitory computer-readable storage media of claim 22, wherein the instructions are to further cause the one or more processor units to: receive an interrupt having an interrupt type of one of the interrupt types; and handle the interrupt by the second processor unit.24. 
The one or more non-transitory computer-readable storage media of claim 22 or 23, wherein to remap handling of the one or more interrupt types comprises to modify an interrupt remapping table.25. The one or more non-transitory computer-readable storage media of any one of claims 22-24, wherein the important workload utilization rate is a first important workload utilization rate determined over a first time interval, the instructions are to further cause the one or more processor units to: determine that a second important workload utilization rate for the first processor unit determined over a second time interval does not exceed the important workload utilization threshold value, the second time interval occurring later than the first time interval; and remap handling of the one or more interrupt types from the second processor unit back to the first processor unit.
DYNAMIC INTERRUPT STEERING AND PROCESSOR UNIT IDLE STATE DEMOTION

BACKGROUND

[0001] In some existing computing devices, particular processor units are designated to handle particular interrupt types. If a processor unit designated to handle a particular interrupt type is in an idle state when an interrupt of that type occurs, the processor unit transitions from the idle state to an active state before handling the interrupt. In some existing computing devices, a processor unit can be placed into one of a variety of idle states. A processor unit transitioning from a deep idle state to an active state can incur greater latency and power idle state exit costs than if transitioning from a shallow idle state to an active state.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] FIG. 1 is a block diagram of a heterogeneous integrated circuit component. [0003] FIG. 2 is a simplified block diagram of an example computing system capable of performing dynamic interrupt steering. [0004] FIG. 3 is a flowchart of a first example dynamic interrupt steering method. [0005] FIG. 4 is a flowchart of a second example dynamic interrupt steering method. [0006] FIG. 5 is a flowchart of an example processor unit idle state demotion method. [0007] FIG. 6 is a block diagram of an example computing device capable of implementing technologies described herein. [0008] FIG. 7 is a block diagram of an example processor unit that can execute instructions as part of implementing technologies described herein.

DETAILED DESCRIPTION

[0009] In some existing computing systems, system software steers interrupts to target processor units via IOMMU (Input-Output Memory Management Unit) interrupt remapping. Various policies are used in existing interrupt steering approaches. Interrupt steering can be dictated by platform or computing system capabilities through the programming of data structures referenced by the IOMMU (e.g., the interrupt remapping table), through interrupt distribution policies (e.g., round-robin), or through interrupt affinity. In an interrupt affinity approach, a device driver determines which processor unit handles interrupts originating from the device associated with the device driver. The device driver may designate processor units for handling the device interrupts without accounting for operating system or integrated circuit component architecture. In some cases, legacy device drivers assign interrupts to be handled by the processor unit responsible for booting up the computing system upon power-up (boot processor unit). [0010] These existing interrupt approaches have various drawbacks. First, they are typically static approaches: processor units are assigned to handle various interrupt types at computing system startup and these assignments do not change during operation of the computing system. Second, they can scale the performance of a processor unit designated for handling interrupts based on DPC (deferred procedure call) and ISR (interrupt service routine) load with a bias to routing interrupts toward the boot processor.
This can preempt critical threads or processes executing on the processor unit, which can negatively impact computing system responsiveness and the user experience. [0011] These existing approaches fail to account for the capabilities and real-time characteristics of the processor units available for handling interrupts, such as their latency, power consumption, idle state (e.g., C-state), and utilization associated with important workloads. [0012] Computing systems employing the dynamic interrupt steering and idle state demotion technologies disclosed herein can provide improved system performance, battery life, and user experience. FIG. 1 is a block diagram of a heterogeneous integrated circuit component. The integrated circuit component 100 comprises two high-performance cores 104 and 108 and eight high-efficiency cores 112-119. If an operating system schedules critical rendering processes (the terms “critical” and “important” are used herein interchangeably) on the high-performance core 104 and the core 104 also happens to be the bootstrap processor, the core 104 may also be the processor unit to which the operating system steers interrupts by default. The preemption of critical rendering processes by the core 104 handling (or servicing) interrupts can result in frame rate degradation, which can lead to a reduced user experience, particularly in gaming and streaming scenarios. Further, interrupt steering to the high-performance cores 104 and 108 can result in battery life degradation if the cores 104 and 108 are repeatedly woken from a deep idle state for interrupt handling. In integrated circuit component architectures where high-performance cores are placed into deep idle states for power savings, the power cost of repeatedly waking these high-performance cores to handle interrupts can be significant. [0013] The technologies disclosed herein can steer interrupts away from high-performance processor units to high-efficiency processor units to avoid impacting critical threads or processes being performed on the high-performance core. Analyses of the interrupt steering technologies disclosed herein show that steering interrupts away from the processor unit that is designated for handling interrupts by default (such as the bootstrap processor unit) can increase the frames per second (FPS) of a popular gaming application by up to 2%. This corresponds to increasing the frequency of a processor unit by 1-2 bins. Emulation results also show performance gains of up to 15-20% in high interrupt rate scenarios (e.g., gaming with multiple streams) on next-generation platforms using the technologies described herein. [0014] In addition to steering interrupts away from processor units performing important workloads, the technologies disclosed herein can also perform idle state demotion to prevent processor units that handle a high rate of interrupts from going into a deep idle state, avoiding the expensive idle state exit costs incurred when the processor unit exits from the deep idle state for interrupt handling. [0015] As used herein, the term “integrated circuit component” refers to a packaged or unpackaged integrated circuit product. A packaged integrated circuit component comprises one or more integrated circuit dies mounted on a package substrate with the integrated circuit dies and package substrate encapsulated in a casing material, such as a metal, plastic, glass, or ceramic.
In one example, a packaged integrated circuit component contains one or more processor units mounted on a substrate with an exterior surface of the substrate comprising a solder ball grid array (BGA). In one example of an unpackaged integrated circuit component, a single monolithic integrated circuit die comprises solder bumps attached to contacts on the die. The solder bumps allow the die to be directly attached to a printed circuit board. An integrated circuit component can comprise one or more of any computing system component described or referenced herein or any other computing system component, such as a processor unit (e.g., system-on-a-chip (SoC), processor core, graphics processor unit (GPU), accelerator, chipset processor), I/O controller, memory, or network interface controller. As illustrated in FIG. 1 and discussed further in regard to FIG. 6 below, an integrated circuit component can be a heterogeneous integrated circuit component comprising multiple integrated circuit dies of different types. [0016] As used herein, the terms “operating”, “executing”, or “running” as they pertain to software or firmware in relation to a system, device, platform, or resource are used interchangeably and can refer to software or firmware stored in one or more computer-readable storage media accessible by the system, device, platform, or resource, even though the software or firmware instructions are not actively being executed by the system, device, platform, or resource. [0017] As used herein, the term “active state” when referring to the state of a processor unit refers to a state in which the processor unit is executing instructions. As used herein, the term “idle state” means a state in which a processor unit is not executing instructions. Modern processor units can have various idle states in which they can be placed, with the varying idle states being distinguished by how much power the processor unit consumes in the idle state and by idle state exit costs (e.g., how much time and how much power it takes for the processor unit to transition from the idle state to an active state). Idle states can be referred to as “shallow” or “deep”, depending on idle state power consumption and idle state exit costs. An idle state can be referred to as “shallower” or “deeper” with respect to another idle state based on the amount of idle state power consumed and/or the idle state exit costs relative to the other idle state. [0018] Idle states for some existing processor units can be referred to as “C-states”. In one example of a set of idle states, some Intel® processors can be placed in C1, C1E, C3, C6, C7, and C8 idle states. This is in addition to a “C0” state, which is the processor’s active state. (P-states can further describe the active state of some Intel® processors, with the various P-states indicating the processor’s power supply voltage and operating frequency.) The C1/C1E states are “auto halt” states in which all processes in a processor unit are performing a HALT or MWAIT instruction and the processor unit core clock is stopped. In the C1E state, the processor unit is operating in a state with its lowest frequency and supply voltage and with PLLs (phase-locked loops) still operating. In the C3 state, the processor unit’s L1 (Level 1) and L2 (Level 2) caches are flushed to lower-level caches (e.g., L3 (Level 3) or LLC (last level cache)), the core clock and PLLs are stopped, and the processor unit operates at an operating voltage sufficient to allow it to maintain its state.
In the C6 and deeper idle states, the processor unit stores its state to memory and its operating voltage is reduced to zero. As modern integrated circuit components can comprise multiple processor units, the individual processor units can be in their own idle states. These states can be referred to as C-states (core-states). Package C-states (PC-states) refer to idle states of integrated circuit components comprising multiple cores. [0019] In the following description, specific details are set forth, but embodiments of the technologies described herein may be practiced without these specific details. Well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring an understanding of this description. Phrases such as “an embodiment,” “various embodiments,” “some embodiments,” and the like may include features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. [0020] Some embodiments may have some, all, or none of the features described for other embodiments. “First,” “second,” “third,” and the like describe a common object and indicate different instances of like objects being referred to. Such adjectives do not imply objects so described must be in a given sequence, either temporally or spatially, in ranking, or in any other manner. “Connected” may indicate elements are in direct physical or electrical contact with each other and “coupled” may indicate elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous. [0021] Reference is now made to the drawings, which are not necessarily drawn to scale, wherein similar or same numbers may be used to designate same or similar parts in different figures. The use of similar or same numbers in different figures does not mean all figures including similar or same numbers constitute a single or same embodiment. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document. [0022] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding thereof. It may be evident, however, that the novel embodiments can be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate a description thereof. The intention is to cover all modifications, equivalents, and alternatives within the scope of the claims. [0023] FIG. 2 is a simplified block diagram of an example computing system capable of performing dynamic interrupt steering. The computing system 200 comprises an integrated circuit component 204, devices 208 and 212, and an IOMMU 220. Device 208 is capable of generating message-based interrupts 218 and device 212 is capable of generating line-based (or pin-based) interrupts 222. The IOMMU 220 has access to an interrupt remapping table 240 that specifies which processor units handle which interrupt types. An interrupt type for line-based interrupts can be determined by, for example, the line (or pin) on which an interrupt signal is received.
For message-based interrupts, the interrupt type can be determined by, for example, the contents of the interrupt message (e.g., requester ID, address, and message data for PCIe (Peripheral Component Interconnect Express) message-signaled interrupts). The IOMMU 220 provides an interrupt to the processor unit indicated by the interrupt remapping table 240, where it is received by a local interrupt controller belonging to the target processor unit. The local interrupt controller manages the interrupts to be handled by a processor unit. The local interrupt controllers 232 and 236 are shown in FIG. 2 as being integrated into the processor units 224 and 228, respectively, but in other embodiments, local interrupt controllers can be located on an integrated circuit die separate from the processor unit. In some embodiments, the local interrupt controller is an Intel® Local Advanced Programmable Interrupt Controller (LAPIC) that is part of an Intel® Advanced Programmable Interrupt Controller (APIC). In such embodiments, the APIC can optionally comprise an I/O Advanced Programmable Interrupt Controller (I/O APIC), such as the I/O APIC 216. [0024] The processor units 224 and 228 refer to interrupt vector tables 244 and 248, respectively, to determine the interrupt handler that will be used to handle a received interrupt. In some embodiments, the interrupt vector tables are implemented as interrupt descriptor tables (IDTs). [0025] Dynamic interrupt steering comprises editing the interrupt remapping table 240 during the operation of the computing system 200. The interrupt remapping table 240 can be modified by an operating system 252 (or hypervisor) that is executing on the computing system 200. In some embodiments, an operating system kernel 256 modifies the interrupt remapping table 240. [0026] FIG. 3 is a flowchart of a first example dynamic interrupt steering method. The method 300 can be performed by an operating system or hypervisor executing on a computing system. At 308, for individual processor units in a set of processor units 304 designated for interrupt handling, the amount of time the processor unit has recently spent executing instructions associated with important workloads is determined. As used herein, the term “workload” can refer to a process or thread executing on a processor unit. This amount can be referred to as an important workload utilization rate or important C0% rate for the processor unit. The time interval over which the important workload utilization rate is determined can be of a predetermined length prior to the execution of the method 300, the time interval since a prior execution of the method 300, or another time interval. If the important workload utilization rate is greater than an important workload utilization rate threshold value, the processor unit is no longer designated as a processor unit for handling interrupts at 312, and important workloads executed on that processor unit will not be preempted by interrupt handling. At 316, the method 300 determines if additional processor units designated for interrupt handling are to be checked for possible removal from the set of processor units available for interrupt handling, and 308, 312, and 316 are repeated as needed until all processor units 304 have been checked.
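A minimal sketch of the eligibility check at 308, 312, and 316 is set forth below for illustration only; the structure, field names, and threshold value are hypothetical, and an actual implementation is operating-system specific:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical threshold: important workload utilization, in percent. */
#define IMPORTANT_UTIL_THRESHOLD 25

struct processor_unit {
    int  id;
    int  important_c0_pct;   /* important workload utilization over the sampling interval, percent */
    bool available_for_irqs; /* whether this unit may be targeted for interrupt handling */
};

/* 308/312/316: remove busy processor units from the set available for
 * interrupt handling; re-admit units whose rate has dropped back below
 * the threshold (the variant described in the following paragraph). */
static void update_irq_eligibility(struct processor_unit *units, size_t n)
{
    for (size_t i = 0; i < n; i++)
        units[i].available_for_irqs =
            (units[i].important_c0_pct <= IMPORTANT_UTIL_THRESHOLD);
}
```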
As a result of 308, 312, and 316 being performed for all processor units 304, fewer processor units may be left available for handling interrupts. [0027] In other embodiments, the method 300 can check the important workload utilization rate for processor units that are capable of handling interrupts but that are not currently designated for interrupt handling due to their important workload utilization rate having exceeded the important workload utilization rate threshold value in a prior iteration of the method 300, and designate them as available again for handling interrupts if their important workload utilization rate has dropped back below the important workload utilization threshold value. Thus, in these embodiments, processor units can be added to or removed from the set of processor units available for interrupt handling based on their important workload utilization rate. In some embodiments, the important workload utilization threshold value for designating a processor unit as not available for interrupt handling can be a different threshold value than the important workload utilization threshold value used for designating that a processor unit is available again to handle interrupts. [0028] The method 300 can utilize various metrics to determine whether instructions executed by a processor unit are associated with an important workload when determining an important workload utilization rate. Examples of such metrics include the priority of a thread or process associated with the instructions as determined by the operating system or hypervisor, whether the instructions are associated with an application that is operating in the foreground (foreground application) or on behalf of a foreground application, whether the instructions are associated with a user-initiated task (versus a scheduled background or maintenance task), whether the instructions are operating at an elevated privilege (such as an administrator privilege versus a default user privilege), and one or more energy-performance register values indicating how a processor unit operating mode is to be weighted toward higher performance or energy savings. In some Intel® processors, the one or more energy-performance registers can comprise an EPP (energy performance preference) register or an EPB (energy performance bias) register. [0029] After the important workload utilization rates are checked for the processor units 304, the method 300 proceeds to 320. If the computing system is a heterogeneous computing system (the processor units available for interrupt handling comprise two or more different processor unit types), interrupts are remapped to one of the processor units available for interrupt handling based on the processor type and idle state of the available processor units at 328. If the computing system is a homogeneous computing system (the processor units available for interrupt handling are of the same processor type), interrupts are remapped to one of the processor units available for interrupt handling based on the idle state of the available processor units at 324. [0030] At 324 and 328, interrupt types routed to a processor unit that was removed at 312 from the set of processor units available for handling interrupts are remapped to one of the processor units that are still available for interrupt handling. Even if the set of processor units designated for handling interrupts remains unchanged after 308, 312, and 316 are performed, an interrupt may still be remapped to a new target processor unit in 324 or 328.
Such a remapping can be due to, for example, a processor unit designated for interrupt handling being placed into a deeper idle state due to its interrupt handling rate dropping below a threshold value, as will be discussed in greater detail below. In such a situation, interrupts steered to a processor unit that has been in a deeper idle state since the last iteration of the method 300 can be remapped to a processor unit that has a shallower idle state.[0031] Reference to a processor unit’s idle state in 324 and 328 can refer to the present idle state of the processor unit or, if the processor unit is in an active mode at the time the method 300 is performed, the most recent idle state in which the processor unit was placed. In some embodiments, the idle state for the processor unit can be the idle state that the processor unit was most commonly placed in since the last remapping of interrupts (e.g., since a prior execution of the method 300) or an idle state that the processor unit was most commonly placed in over a time interval prior to the execution of the method 300.[0032] At 324 and 328, an interrupt type can be remapped from a first processor unit to a second processor unit if the idle state of the second processor unit is shallower than that of the first processor unit. In some embodiments, interrupt types can be steered away from processor units with an idle state of C3 or deeper. In this manner, interrupts are steered to processor units that have lower latency and power idle state exit costs. The selection of a second processor unit to remap to can comprise selecting the second processor unit from among a plurality of processor units available for interrupt handling. The second processor unit can be selected based on the second processor unit having a shallower idle state relative to at least one other processor unit in the plurality of processor units. The second processor unit can be selected based on the processor unit type of the second processor unit relative to the processor unit type of one or more other processor units in the plurality of processor units. In some embodiments, the second processor unit can be selected based on multiple factors of the second processor unit relative to one or more other processor units in the plurality of processor units, such as the idle state and the processor unit type of the second processor unit.[0033] At 328, an interrupt type can be remapped from a first processor unit of a first processor unit type to a second processor unit of a second processor unit type, the second processor unit type being able to handle the interrupt type advantageously (e.g., faster, with less power consumption) relative to the first processor unit type. A processor unit type can be represented by information indicating one or more processor unit characteristics, such as latency or power consumption. For example, in a heterogeneous system, an interrupt can be remapped from a high-performance processor unit (“high-performance” being the first processor unit type) to a high-efficiency processor unit (“high-efficiency” being the second processor unit type) that can handle the interrupt with less power consumed relative to the high-performance processor unit. The processor unit type can act as a proxy for idle state exit costs, as a larger high-performance processor unit can take longer and consume more power to exit the same idle state as a smaller high-efficiency processor unit.
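A possible selection policy for 324 and 328 is sketched below; the C-state ordering and the tie-break favoring high-efficiency units are illustrative choices under stated assumptions, not requirements from the source.

# Sketch: pick the remap target with the shallowest idle state, preferring a
# high-efficiency unit on a tie. The rankings here are assumed for illustration.

from dataclasses import dataclass

@dataclass
class Unit:
    name: str
    idle_state: str  # present or most recent idle state, e.g., "C1"
    unit_type: str   # "high-efficiency" or "high-performance"

C_STATE_DEPTH = {"C0": 0, "C1": 1, "C2": 2, "C3": 3, "C6": 4}  # smaller = shallower

def select_remap_target(candidates):
    def key(unit):
        type_rank = 0 if unit.unit_type == "high-efficiency" else 1
        return (C_STATE_DEPTH[unit.idle_state], type_rank)
    return min(candidates, key=key)

# e.g., among two units parked in C1, the high-efficiency one is chosen:
# select_remap_target([Unit("cpu2", "C1", "high-performance"),
#                      Unit("cpu5", "C1", "high-efficiency")])  -> cpu5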
Dynamically steering interrupts based on processor unit type can also allow for power savings in scenarios where high-performance processor units are entered into a deep idle state, such as in a “one hour of remaining battery life” (Hour of Battery Life, or HoBL) context. The remapping of interrupts in 324 and 328 can comprise the operating system or hypervisor modifying an interrupt remapping table, such as table 240 in FIG. 2. [0034] In some embodiments, the method 300 does not perform a check of whether the computing system is heterogeneous and proceeds to interrupt remapping after processor unit important workload utilization rates are checked. After an interrupt type has been remapped to a new target processor unit, any interrupts occurring after the remapping that are of the remapped interrupt type will be handled by the new target processor unit.[0035] The method 300 can be performed on a periodic or other basis. If performed periodically, the method 300 can be performed at a predetermined time interval, a user-specified time interval, or another time interval. In some embodiments, interrupt remapping (e.g., 324 and 328 in FIG. 3) can occur independently of checking whether to make processor units unavailable for interrupt handling based on their important workload utilization rates (e.g., 308, 312, 316 in FIG. 3). The interrupt remapping can occur at the same frequency but offset in time from checking processor unit important workload utilization rates, or at a different frequency than checking processor unit important workload utilization rates. The portion of the operating system that performs the dynamic interrupt steering illustrated in FIG. 3 can be referred to as a dynamic interrupt steering module.[0036] As mentioned, interrupts can be dynamically steered from a processor unit that is in a deep idle state to a processor unit that is in a shallow idle state so that lower idle state exit costs are incurred when the interrupt handling processor unit exits from an idle state to handle an interrupt. The idle state demotion technologies described herein allow a processor unit that handles interrupts to avoid placing itself in a deep idle state, thereby avoiding expensive deep idle state exit costs.[0037] Idle state demotion can occur when a processor unit receives an instruction to transition from an active state to a deep idle state. In response to receiving the instruction, the processor unit can determine whether to enter a shallower idle state than the deep idle state indicated in the instruction based on a recent interrupt handling rate for the processor unit. In some embodiments, the interrupt handling rate is determined in response to receiving the instruction to enter an idle state. An interrupt handling rate can be based on the number of interrupts a processor unit has been requested to handle within a time interval prior to receipt of the instruction to enter an idle state. The time interval can be a time interval of a predetermined length (e.g., 10 ms, 5 ms), a time interval since the processor unit last entered an active state, or any other time interval. In some embodiments, the number of interrupts that the processor unit has been requested to handle can be based on the number of interrupt requests received by a local interrupt controller, such as local interrupt controllers 232 and 236 in FIG. 2.
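The rate determination above, together with the threshold comparison the description turns to next, might look like the following sketch; the window length, threshold value, and class names are assumptions, and a real implementation would live in microcode or kernel code rather than Python.

# Sketch: count interrupts in a sliding window and demote the requested idle
# state when the recent rate is high. All names and values are assumed.

from collections import deque
import time

class InterruptRateTracker:
    def __init__(self, window_s=0.010):  # e.g., a 10 ms window
        self.window_s = window_s
        self.timestamps = deque()

    def record_interrupt(self):
        self.timestamps.append(time.monotonic())

    def rate(self):
        now = time.monotonic()
        while self.timestamps and now - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window_s  # interrupts per second

DEMOTION_THRESHOLD = 1000.0  # assumed: interrupts per second

def choose_idle_state(requested_state, tracker, shallow_state="C1"):
    """Demote (e.g., C3 -> C1) when the recent interrupt rate is high."""
    if tracker.rate() > DEMOTION_THRESHOLD:
        return shallow_state
    return requested_state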
If the interrupt handling rate is greater than an interrupt handling rate threshold value (or interrupt handling rate demotion threshold value), the processor unit can enter a shallower idle state than requested so that lower idle state exit costs are incurred when the processor unit exits from an idle state to handle an interrupt. For example, if a processor unit receives a request to enter a C3 idle state and an interrupt handling rate for the processor unit is greater than the interrupt handling rate threshold value, the processor unit can demote the idle state and place itself into a C1 idle state. If the interrupt handling rate is less than the interrupt handling rate threshold value, the processor unit can enter the idle state indicated in the instruction.[0038] In some embodiments, the processor unit can use additional and/or different information to determine whether to enter a shallower idle state than requested. For example, an Intel® processor unit can utilize the receipt of an MWAIT instruction at the processor unit and/or the idle state indicated in a received MWAIT instruction in its determination of whether to perform idle state demotion and which idle state the processor unit should be placed into.[0039] In some embodiments, an interrupt handling rate for a processor unit is determined periodically or on another basis and not in response to receiving an instruction to enter an idle state. Upon receipt of a request to enter an idle state, the processor unit uses the most recently determined interrupt handling rate to determine whether to override the idle state request and place itself into a shallower idle state than indicated in the instruction.[0040] In some embodiments, a processor unit can demote or undemote itself independently of receiving a request to enter an idle state. For example, if a processor unit is in a demoted idle state (e.g., a C1 state) due to a prior interrupt handling rate of the processor unit being greater than an interrupt handling rate demotion threshold value and the processor unit determines that a more recent interrupt handling rate is less than an interrupt handling rate undemotion threshold value, the processor unit can wake itself and take itself out of the demoted idle state. That is, the processor unit can place itself into a deeper idle state, such as the deeper idle state that the processor unit was instructed to enter in a prior instruction received at the processor unit. In some embodiments, the interrupt handling rate demotion threshold value can be different than the interrupt handling rate undemotion threshold value.[0041] In some embodiments, idle state demotion or undemotion for a processor unit can be performed based on interrupt handling rates of other processor units in the same integrated circuit component as the processor unit. For example, in an SoC or other integrated circuit component comprising multiple processor units, the idle state of a processor unit can be demoted if an average of interrupt handling rates for one or more other processor units in the integrated circuit component is greater than an interrupt handling rate SoC demotion threshold value and the interrupt handling rate for the processor unit is greater than an interrupt handling rate threshold value.
The idle state of the processor unit can be undemoted if an average interrupt handling rate for the one or more other processor units is less than an interrupt handling rate SoC undemotion threshold value and the interrupt handling rate for the processor unit is less than an interrupt handling rate undemotion threshold value. The interrupt handling rate SoC demotion threshold value and the interrupt handling rate SoC undemotion threshold value can be different values. [0042] In some embodiments, idle state demotion can be performed by processor unit microcode stored in processor unit memory. The portion of the processor unit microcode that performs idle state demotion can be referred to as an idle state demotion module. In other embodiments, idle state demotion can be performed by the processor unit executing operating system or hypervisor instructions. The portion of the operating system or hypervisor that performs idle state demotion can also be referred to as an idle state demotion module.[0043] Any of the modules described herein can be combined into a single module, and a single module can be split into multiple modules. Moreover, any of the modules described herein can be part of an operating system or hypervisor of a computing device, one or more software applications independent of the operating system or hypervisor, or operate at another software layer. Any of the modules described herein can be implemented in software, hardware, firmware, or combinations thereof. A computing device referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.[0044] FIG. 4 is a flowchart of a second example dynamic interrupt steering method. The method 400 can be performed by an operating system kernel operating on a mobile computing device. At 410, an important workload utilization rate for a first processor unit is determined to exceed an important workload utilization threshold value. The first processor unit is designated to handle one or more interrupt types. At 420, handling of at least one of the one or more interrupt types is remapped from the first processor unit to a second processor unit.[0045] In other embodiments, the method 400 can comprise one or more additional elements. For example, the method 400 can further comprise receiving an interrupt having an interrupt type of the at least one of the one or more interrupt types and handling the interrupt by the second processor unit. In another example, the method 400 comprises determining the important workload utilization rate for the first processor unit. In a further example, the method 400 further comprises selecting the second processor unit from a plurality of processor units. [0046] FIG. 5 is a flowchart of an example processor unit idle state demotion method. The method 500 can be performed by, for example, a smaller high-efficiency processor unit in an integrated circuit component having a heterogeneous architecture. At 510, an instruction to place the processor unit into a first idle state can be received at the processor unit. At 520, the processor unit is placed in a second idle state if an interrupt handling rate for the processor unit exceeds an interrupt handling rate threshold value. The second idle state is a shallower idle state than the first idle state.
At 530, the processor unit is placed in the first idle state if the interrupt handling rate for the processor unit does not exceed the interrupt handling rate threshold value.[0047] In other embodiments, the method 500 can comprise one or more additional elements. For example, the method 500 can further comprise determining the interrupt handling rate for the processor unit. In another example, the method 500 can further comprise the processor unit transitioning from the second idle state to an active state, handling an interrupt by the processor unit, and transitioning the processor unit from the active state back to the second idle state.[0048] The technologies described herein can be performed by or implemented in any of a variety of computing systems, including mobile computing systems (e.g., smartphones, handheld computers, tablet computers, laptop computers, portable gaming consoles, 2-in-1 convertible computers, portable all-in-one computers), non-mobile computing systems (e.g., desktop computers, servers, workstations, stationary gaming consoles, set-top boxes, smart televisions, rack-level computing solutions (e.g., blade, tray, or sled computing systems)), and embedded computing systems (e.g., computing systems that are part of a vehicle, smart home appliance, consumer electronics product or equipment, or manufacturing equipment). As used herein, the term “computing system” includes computing devices and includes systems comprising multiple discrete physical components. In some embodiments, the computing systems are located in a data center, such as an enterprise data center (e.g., a data center owned and operated by a company and typically located on company premises), a managed services data center (e.g., a data center managed by a third party on behalf of a company), a colocated data center (e.g., a data center in which data center infrastructure is provided by the data center host and a company provides and manages its own data center components (servers, etc.)), a cloud data center (e.g., a data center operated by a cloud services provider that hosts companies' applications and data), or an edge data center (e.g., a data center, typically having a smaller footprint than other data center types, located close to the geographic area that it serves).[0049] FIG. 6 is a block diagram of a second example computing system in which technologies described herein may be implemented. Generally, components shown in FIG. 6 can communicate with other shown components, although not all connections are shown, for ease of illustration. The computing system 600 is a multiprocessor system comprising a first processor unit 602 and a second processor unit 604 coupled via point-to-point (P-P) interconnects. A point-to-point (P-P) interface 606 of the processor unit 602 is coupled to a point-to-point interface 607 of the processor unit 604 via a point-to-point interconnection 605. It is to be understood that any or all of the point-to-point interconnects illustrated in FIG. 6 can be alternatively implemented as a multi-drop bus, and that any or all buses illustrated in FIG. 6 could be replaced by point-to-point interconnects.[0050] The processor units 602 and 604 comprise multiple processor cores. Processor unit 602 comprises processor cores 608 and processor unit 604 comprises processor cores 610. Processor cores 608 and 610 can execute computer-executable instructions in a manner similar to that discussed below in connection with FIG.
7, or in other manners.[0051] Processor units 602 and 604 further comprise cache memories 612 and 614, respectively. The cache memories 612 and 614 can store data (e.g., instructions) utilized by one or more components of the processor units 602 and 604, such as the processor cores 608 and 610. The cache memories 612 and 614 can be part of a memory hierarchy for the computing system 600. For example, the cache memories 612 can locally store data that is also stored in a memory 616 to allow for faster access to the data by the processor unit 602. In some embodiments, the cache memories 612 and 614 can comprise multiple cache levels, such as level 1 (L1), level 2 (L2), level 3 (L3), level 4 (L4), and/or other caches or cache levels, such as a last level cache (LLC). Some of these cache memories (e.g., L2, L3, L4, LLC) can be shared among multiple cores in a processor unit. One or more of the higher cache levels (the smaller and faster caches) in the memory hierarchy can be located on the same integrated circuit die as a processor core, and one or more of the lower cache levels (the larger and slower caches) can be located on integrated circuit dies that are physically separate from the processor core integrated circuit dies.[0052] Although the computing system 600 is shown with two processor units, the computing system 600 can comprise any number of processor units. Further, a processor unit can comprise any number of processor cores. A processor unit can take various forms such as a central processing unit (CPU), a graphics processing unit (GPU), general-purpose GPU (GPGPU), accelerated processing unit (APU), field-programmable gate array (FPGA), neural network processing unit (NPU), data processing unit (DPU), accelerator (e.g., graphics accelerator, digital signal processor (DSP), compression accelerator, artificial intelligence (AI) accelerator), controller, or other types of processor units. As such, the processor unit can be referred to as an XPU (or xPU). Further, a processor unit can comprise one or more of these various types of processor units. In some embodiments, the computing system comprises one processor unit with multiple cores, and in other embodiments, the computing system comprises a single processor unit with a single core. As used herein, the terms “processor unit” and “processing unit” can refer to any processor, processor core, component, module, engine, circuitry, or any other processing element described or referenced herein.[0053] In some embodiments, the computing system 600 can comprise one or more processor units that are heterogeneous or asymmetric to another processor unit in the computing system. There can be a variety of differences between the processor units in a system in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, and power consumption characteristics, and the like. These differences can effectively manifest themselves as asymmetry and heterogeneity among the processor units in a system. [0054] The processor units 602 and 604 can be located in a single integrated circuit component (such as a multi-chip package (MCP) or multi-chip module (MCM)) or they can be located in separate integrated circuit components. An integrated circuit component comprising one or more processor units can comprise additional components, such as embedded DRAM, stacked high bandwidth memory (HBM), shared cache memories (e.g., L3, L4, LLC), input/output (I/O) controllers, or memory controllers.
Any of the additional components can be located on the same integrated circuit die as a processor unit, or on one or more integrated circuit dies separate from the integrated circuit dies comprising the processor units. In some embodiments, these separate integrated circuit dies can be referred to as “chiplets”. In some embodiments where there is heterogeneity or asymmetry among processor units in a computing system, the heterogeneity or asymmetry can be among processor units located in the same integrated circuit component. In embodiments where an integrated circuit component comprises multiple integrated circuit dies, interconnections between dies can be provided by the package substrate, one or more silicon interposers, one or more silicon bridges embedded in the package substrate (such as Intel® embedded multi-die interconnect bridges (EMIBs)), or combinations thereof.[0055] Processor units 602 and 604 further comprise memory controller logic (MC) 620 and 622. As shown in FIG. 6, MCs 620 and 622 control memories 616 and 618 coupled to the processor units 602 and 604, respectively. The memories 616 and 618 can comprise various types of volatile memory (e.g., dynamic random-access memory (DRAM), static random-access memory (SRAM)) and/or non-volatile memory (e.g., flash memory, chalcogenide-based phase-change non-volatile memories), and comprise one or more layers of the memory hierarchy of the computing system. While MCs 620 and 622 are illustrated as being integrated into the processor units 602 and 604, in alternative embodiments, the MCs can be external to a processor unit. [0056] Processor units 602 and 604 are coupled to an Input/Output (I/O) subsystem 630 via point-to-point interconnections 632 and 634. The point-to-point interconnection 632 connects a point-to-point interface 636 of the processor unit 602 with a point-to-point interface 638 of the I/O subsystem 630, and the point-to-point interconnection 634 connects a point-to-point interface 640 of the processor unit 604 with a point-to-point interface 642 of the I/O subsystem 630. Input/Output subsystem 630 further includes an interface 650 to couple the I/O subsystem 630 to a graphics engine 652. The I/O subsystem 630 and the graphics engine 652 are coupled via a bus 654.[0057] The Input/Output subsystem 630 is further coupled to a first bus 660 via an interface 662. The first bus 660 can be a Peripheral Component Interconnect Express (PCIe) bus or any other type of bus. Various I/O devices 664 can be coupled to the first bus 660. A bus bridge 670 can couple the first bus 660 to a second bus 680. In some embodiments, the second bus 680 can be a low pin count (LPC) bus. Various devices can be coupled to the second bus 680 including, for example, a keyboard/mouse 682, audio I/O devices 688, and a storage device 690, such as a hard disk drive, solid-state drive, or another storage device for storing computer-executable instructions (code) 692 or data. The code 692 can comprise computer-executable instructions for performing methods described herein. Additional components that can be coupled to the second bus 680 include communication device(s) 684, which can provide for communication between the computing system 600 and one or more wired or wireless networks 686 (e.g.
Wi-Fi, cellular, or satellite networks) via one or more wired or wireless communication links (e.g., wire, cable, Ethernet connection, radio-frequency (RF) channel, infrared channel, Wi-Fi channel) using one or more communication standards (e.g., the IEEE 802.11 standard and its supplements).[0058] Any of the computing system components illustrated in FIG. 6, or any other components that can be a part of a computing system, are capable of generating an interrupt that is to be serviced by a processor unit in the computing system. Depending on the interconnection between the component and the processor unit, the interrupt can be a line-based (or pin-based) interrupt or a message-signaled interrupt.[0059] In embodiments where the communication devices 684 support wireless communication, the communication devices 684 can comprise wireless communication components coupled to one or more antennas to support communication between the computing system 600 and external devices. The wireless communication components can support various wireless communication protocols and technologies such as Near Field Communication (NFC), IEEE 802.11 (Wi-Fi) variants, WiMax, Bluetooth, Zigbee, 4G Long Term Evolution (LTE), Code Division Multiple Access (CDMA), Universal Mobile Telecommunication System (UMTS), Global System for Mobile Telecommunication (GSM), and 5G broadband cellular technologies. In addition, the wireless modems can support communication with one or more cellular networks for data and voice communications within a single cellular network, between cellular networks, or between the computing system and a public switched telephone network (PSTN).[0060] The system 600 can comprise removable memory such as flash memory cards (e.g., SD (Secure Digital) cards), memory sticks, and Subscriber Identity Module (SIM) cards. The memory in system 600 (including caches 612 and 614, memories 616 and 618, and storage device 690) can store data and/or computer-executable instructions for executing an operating system 694 and application programs 696. Example data includes web pages, text messages, images, sound files, and video data to be sent to and/or received from one or more network servers or other devices by the system 600 via the one or more wired or wireless networks 686, or for use by the system 600. The system 600 can also have access to external memory or storage (not shown) such as external hard drives or cloud-based storage.[0061] The operating system 694 can control the allocation and usage of the components illustrated in FIG. 6 and support the one or more application programs 696. The application programs 696 can include common computing system applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications) as well as other computing applications.[0062] In some embodiments, a hypervisor (or virtual machine manager) operates on the operating system 694 and the application programs 696 operate within one or more virtual machines operating on the hypervisor. In these embodiments, the hypervisor is a type-2 or hosted hypervisor as it is running on the operating system 694. In other hypervisor-based embodiments, the hypervisor is a type-1 or “bare-metal” hypervisor that runs directly on the platform resources of the computing system 600 without an intervening operating system layer. [0063] In some embodiments, the applications 696 can operate within one or more containers.
A container is a running instance of a container image, which is a package of binary images for one or more of the applications 696 and any libraries, configuration settings, and any other information that the one or more applications 696 need for execution. A container image can conform to any container image format, such as Docker®, Appc, or LXC container image formats. In container-based embodiments, a container runtime engine, such as Docker Engine, LXC, or an Open Container Initiative (OCI)-compatible container runtime (e.g., Railcar, CRI-O), operates on the operating system (or virtual machine monitor) to provide an interface between the containers and the operating system 694. An orchestrator can be responsible for management of the computing system 600 and various container-related tasks such as deploying container images to the computing system 600, monitoring the performance of deployed containers, and monitoring the utilization of the resources of the computing system 600.[0064] The computing system 600 can support various additional input devices, such as a touchscreen, microphone, monoscopic camera, stereoscopic camera, trackball, touchpad, trackpad, proximity sensor, light sensor, electrocardiogram (ECG) sensor, PPG (photoplethysmogram) sensor, and galvanic skin response sensor, and one or more output devices, such as one or more speakers or displays. Other possible input and output devices include piezoelectric and other haptic I/O devices. Any of the input or output devices can be internal to, external to, or removably attachable with the system 600. External input and output devices can communicate with the system 600 via wired or wireless connections.[0065] In addition, the computing system 600 can provide one or more natural user interfaces (NUIs). For example, the operating system 694 or applications 696 can comprise speech recognition logic as part of a voice user interface that allows a user to operate the system 600 via voice commands. Further, the computing system 600 can comprise input devices and logic that allow a user to interact with the computing system 600 via body, hand, or face gestures.[0066] The system 600 can further include at least one input/output port comprising physical connectors (e.g., USB, IEEE 1394 (FireWire), Ethernet, RS-232); a power supply (e.g., battery); a global navigation satellite system (GNSS) receiver (e.g., GPS receiver); a gyroscope; an accelerometer; and/or a compass. A GNSS receiver can be coupled to a GNSS antenna. The computing system 600 can further comprise one or more additional antennas coupled to one or more additional receivers, transmitters, and/or transceivers to enable additional functions.[0067] In addition to those already discussed, integrated circuit components and other components in the computing system 600 can communicate via interconnect technologies such as Intel® QuickPath Interconnect (QPI), Intel® Ultra Path Interconnect (UPI), Compute Express Link (CXL), cache coherent interconnect for accelerators (CCIX®), serializer/deserializer (SERDES), Nvidia® NVLink, ARM Infinity Link, Gen-Z, or Open Coherent Accelerator Processor Interface (OpenCAPI). Other interconnect technologies may be used, and a computing system may utilize one or more interconnect technologies. [0068] It is to be understood that FIG. 6 illustrates only one example computing system architecture. Computing systems based on alternative architectures can be used to implement technologies described herein.
For example, instead of the processors 602 and 604 and the graphics engine 652 being located on discrete integrated circuits, a computing system can comprise an SoC (system-on-a-chip) integrated circuit incorporating multiple processors, a graphics engine, and additional components. Further, a computing system can connect its constituent components via bus or point-to-point configurations different from that shown in FIG. 6. Moreover, the illustrated components in FIG. 6 are not required or all-inclusive, as shown components can be removed and other components added in alternative embodiments. [0069] FIG. 7 is a block diagram of an example processor unit 700 to execute computer-executable instructions as part of implementing technologies described herein. The processor unit 700 can be a single-threaded core or a multithreaded core in that it may include more than one hardware thread context (or “logical processor”) per processor unit.[0070] FIG. 7 also illustrates a memory 710 coupled to the processor unit 700. The memory 710 can be any memory described herein or any other memory known to those of skill in the art. The memory 710 can store computer-executable instructions 715 (code) executable by the processor unit 700.[0071] The processor unit 700 comprises front-end logic 720 that receives instructions from the memory 710. An instruction can be processed by one or more decoders 730. The decoder 730 can generate as its output a micro-operation, such as a fixed-width micro-operation in a predefined format, or generate other instructions, microinstructions, or control signals, which reflect the original code instruction. The front-end logic 720 further comprises register renaming logic 735 and scheduling logic 740, which generally allocate resources and queue operations corresponding to converting an instruction for execution.[0072] The processor unit 700 further comprises execution logic 750, which comprises one or more execution units (EUs) 765-1 through 765-N. Some processor unit embodiments can include a number of execution units dedicated to specific functions or sets of functions. Other embodiments can include only one execution unit or one execution unit that can perform a particular function. The execution logic 750 performs the operations specified by code instructions. After completion of execution of the operations specified by the code instructions, back-end logic 770 retires instructions using retirement logic 775. In some embodiments, the processor unit 700 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 775 can take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like).[0073] The processor unit 700 is transformed during execution of instructions, at least in terms of the output generated by the decoder 730, the hardware registers and tables utilized by the register renaming logic 735, and any registers (not shown) modified by the execution logic 750.[0074] As used herein, the term “module” refers to logic that may be implemented in a hardware component or device, software or firmware running on a processor unit, or a combination thereof, to perform one or more operations consistent with the present disclosure. Software and firmware may be embodied as instructions and/or data stored on non-transitory computer-readable storage media.
As used herein, the term “circuitry” can comprise, singly or in any combination, non-programmable (hardwired) circuitry, programmable circuitry such as processor units, state machine circuitry, and/or firmware that stores instructions executable by programmable circuitry. Modules described herein may, collectively or individually, be embodied as circuitry that forms a part of a computing system. Thus, any of the modules can be implemented as circuitry, such as interrupt handling rate determination circuitry or idle state demotion circuitry. A computing system referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.[0075] Any of the disclosed methods (or a portion thereof) can be implemented as computer-executable instructions or a computer program product. Such instructions can cause a computing system or one or more processor units capable of executing computer-executable instructions to perform any of the disclosed methods. As used herein, the term “computer” refers to any computing system, device, or machine described or mentioned herein as well as any other computing system, device, or machine capable of executing instructions. Thus, the term “computer-executable instruction” refers to instructions that can be executed by any computing system, device, or machine described or mentioned herein as well as any other computing system, device, or machine capable of executing instructions.[0076] The computer-executable instructions or computer program products, as well as any data created and/or used during implementation of the disclosed technologies, can be stored on one or more tangible or non-transitory computer-readable storage media, such as volatile memory (e.g., DRAM, SRAM), non-volatile memory (e.g., flash memory, chalcogenide-based phase-change non-volatile memory), optical media discs (e.g., DVDs, CDs), and magnetic storage (e.g., magnetic tape storage, hard disk drives). Computer-readable storage media can be contained in computer-readable storage devices such as solid-state drives, USB flash drives, and memory modules. Alternatively, any of the methods disclosed herein (or a portion thereof) may be performed by hardware components comprising non-programmable circuitry. In some embodiments, any of the methods herein can be performed by a combination of non-programmable hardware components and one or more processor units executing computer-executable instructions stored on computer-readable storage media.[0077] The computer-executable instructions can be part of, for example, an operating system of the computing system, an application stored locally to the computing system, or a remote application accessible to the computing system (e.g., via a web browser). Any of the methods described herein can be performed by computer-executable instructions performed by a single computing system or by one or more networked computing systems operating in a network environment. Computer-executable instructions and updates to the computer-executable instructions can be downloaded to a computing system from a remote server.[0078] Further, it is to be understood that implementation of the disclosed technologies is not limited to any specific computer language or program. For instance, the disclosed technologies can be implemented by software written in C++, C#, Java, Perl, Python, JavaScript, Adobe Flash, assembly language, or any other programming language.
Likewise, the disclosed technologies are not limited to any particular computer system or type of hardware.[0079] Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, ultrasonic, and infrared communications), electronic communications, or other such communication means. [0080] As used in this application and the claims, a list of items joined by the term “and/or” can mean any combination of the listed items. For example, the phrase “A, B and/or C” can mean A; B; C; A and B; A and C; B and C; or A, B and C. As used in this application and the claims, a list of items joined by the term “at least one of” can mean any combination of the listed terms. For example, the phrase “at least one of A, B or C” can mean A; B; C; A and B; A and C; B and C; or A, B, and C. Moreover, as used in this application and the claims, a list of items joined by the term “one or more of” can mean any combination of the listed terms. For example, the phrase “one or more of A, B and C” can mean A; B; C; A and B; A and C; B and C; or A, B, and C.[0081] The disclosed methods, apparatuses, and systems are not to be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone and in various combinations and subcombinations with one another. The disclosed methods, apparatuses, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved.[0082] Theories of operation, scientific principles, or other theoretical descriptions presented herein in reference to the apparatuses or methods of this disclosure have been provided for the purposes of better understanding and are not intended to be limiting in scope. The apparatuses and methods in the appended claims are not limited to those apparatuses and methods that function in the manner described by such theories of operation.[0083] Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it is to be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth herein. For example, operations described sequentially may in some cases be rearranged or performed concurrently.
Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods.[0084] The following examples pertain to additional embodiments of technologies disclosed herein.[0085] Example 1 is a method comprising: during operation of a computing device: determining that an important workload utilization rate for a first processor unit exceeds an important workload utilization threshold value, the first processor unit designated to handle one or more interrupt types; and remapping handling of the one or more interrupt types from the first processor unit to a second processor unit.[0086] Example 2 comprises the method of Example 1, further comprising: receiving an interrupt having an interrupt type of one of the interrupt types; and handling the interrupt by the second processor unit. [0087] Example 3 comprises the method of Example 1 or 2, wherein the remapping comprises modifying an interrupt remapping table.[0088] Example 4 comprises the method of any one of Examples 1-3, further comprising determining the important workload utilization rate for the first processor unit.[0089] Example 5 comprises the method of Example 4, wherein the important workload utilization rate is determined based on a process priority or a thread priority associated with instructions performed by the first processor unit over a time interval for which the important workload utilization rate is determined.[0090] Example 6 comprises the method of Example 4, wherein the important workload utilization rate is determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are associated with a foreground application.[0091] Example 7 comprises the method of Example 4, wherein the important workload utilization rate is determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are associated with a user-initiated task.[0092] Example 8 comprises the method of Example 4, wherein the important workload utilization rate is determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are operating at an elevated privilege.[0093] Example 9 comprises the method of any one of Examples 1-8, wherein the second processor unit belongs to a plurality of processor units, the method further comprising selecting the second processor unit from the plurality of processor units.[0094] Example 10 comprises the method of Example 9, wherein the second processor unit is selected from the plurality of processor units based on an idle state of the second processor unit.[0095] Example 11 comprises the method of Example 9, wherein the second processor unit is selected from the plurality of processor units based on an idle state of the second processor unit being shallower than an idle state of at least one other processor unit of the plurality of processor units.[0096] Example 12 comprises the method of Example 9, wherein the second processor unit is selected based on a processor unit type of the second processor unit.
[0097] Example 13 comprises the method of any one of Examples 1-12, wherein the important workload utilization rate is a first important workload utilization rate determined over a first time interval, the method further comprising: determining that a second important workload utilization rate for the first processor unit determined over a second time interval does not exceed the important workload utilization threshold value, the second time interval occurring later than the first time interval; and remapping handling of the one or more interrupt types from the second processor unit back to the first processor unit.[0098] Example 14 is a method comprising: receiving an instruction to place a processor unit into a first idle state; placing the processor unit in a second idle state if an interrupt handling rate for the processor unit exceeds an interrupt handling rate threshold value, the second idle state being a shallower idle state than the first idle state; and placing the processor unit in the first idle state if the interrupt handling rate for the processor unit does not exceed the interrupt handling rate threshold value.[0099] Example 15 comprises the method of Example 14, further comprising determining the interrupt handling rate for the processor unit.[0100] Example 16 comprises the method of Example 14 or 15, wherein the interrupt handling rate is determined over a time interval of a pre-determined length prior to the receiving of the instruction to place the processor unit into the first idle state.[0101] Example 17 comprises the method of Example 14 or 15, wherein the interrupt handling rate is determined over a time interval since the processor unit last entered an active state.[0102] Example 18 comprises the method of any one of Examples 14-17, further comprising: the processor unit transitioning from the second idle state to an active state; handling an interrupt by the processor unit; and transitioning the processor unit from the active state back to the second idle state after the interrupt has been handled by the processor unit.
[0103] Example 19 comprises the method of any one of Examples 14-18, wherein the method is performed by microcode stored in processor unit memory.[0104] Example 20 is one or more non-transitory computer-readable storage media having instructions stored thereon that, when executed, cause one or more processor units to perform any one of the methods of Examples 1-19.[0105] Example 21 is a computing device comprising: one or more processor units; and one or more non-transitory computer-readable media having instructions stored thereon that, when executed, cause the one or more processor units to: during operation of the computing device: determine that an important workload utilization rate for a first processor unit exceeds an important workload utilization threshold value, the first processor unit designated to handle one or more interrupt types; and remap handling of the one or more interrupt types from the first processor unit to a second processor unit, wherein the one or more processor units comprise the first processor unit.[0106] Example 22 comprises the computing device of Example 21, wherein the instructions are to further cause the one or more processor units to: receive an interrupt having an interrupt type of one of the interrupt types; and handle the interrupt by the second processor unit.[0107] Example 23 comprises the computing device of Example 21 or 22, wherein to remap handling of the one or more interrupt types comprises to modify an interrupt remapping table.[0108] Example 24 comprises the computing device of any one of Examples 21-23, wherein the instructions are to further cause the one or more processor units to determine the important workload utilization rate for the first processor unit.[0109] Example 25 comprises the computing device of Example 24, wherein the important workload utilization rate is to be determined based on a process priority or a thread priority associated with instructions performed by the first processor unit over a time interval for which the important workload utilization rate is determined.[0110] Example 26 comprises the computing device of Example 24, wherein the important workload utilization rate is to be determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are associated with a foreground application.[0111] Example 27 comprises the computing device of Example 24, wherein the important workload utilization rate is to be determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are associated with a user-initiated task.[0112] Example 28 comprises the computing device of Example 24, wherein the important workload utilization rate is to be determined based on whether instructions executed by the first processor unit during a time interval for which the important workload utilization rate is determined are operating at an elevated privilege.[0113] Example 29 comprises the computing device of any one of Examples 21-28, wherein the second processor unit belongs to a plurality of processor units, wherein the instructions are to further cause the one or more processor units to select the second processor unit from the plurality of processor units.[0114] Example 30 comprises the computing device of Example 29, wherein the second processor unit is to be selected from the plurality of processor units based on an idle state of the
second processor unit.[0115] Example 31 comprises the computing device of Example 29, wherein the second processor unit is to be selected from the plurality of processor units based on an idle state of the second processor unit being shallower than an idle state of at least one other processor unit of the plurality of processor units.[0116] Example 32 comprises the computing device of Example 29, wherein the second processor unit is to be selected based on a processor unit type of the second processor unit. [0117] Example 33 comprises the computing device of any one of Examples 21-32, wherein the important workload utilization rate is a first important workload utilization rate determined over a first time interval, and wherein the instructions are to further cause the one or more processor units to: determine that a second important workload utilization rate for the first processor unit determined over a second time interval does not exceed the important workload utilization threshold value, the second time interval occurring later than the first time interval; and remap handling of the one or more interrupt types from the second processor unit back to the first processor unit.[0118] Example 34 comprises the computing device of any of Examples 21-33, wherein the first processor unit and the second processor unit have different processor unit types.[0119] Example 35 comprises the computing device of any of Examples 21-33, wherein the first processor unit and the second processor unit are part of an integrated circuit component.[0120] Example 36 is a processor unit comprising: execution logic; and one or more non-transitory computer-readable media having instructions that, when executed, cause the execution logic to: in response to receiving, at the processor unit, an instruction to place the processor unit in a first idle state, place the processor unit in a second idle state if an interrupt handling rate for the processor unit exceeds an interrupt handling rate threshold value, the second idle state being a shallower idle state than the first idle state; and place the processor unit in the first idle state if the interrupt handling rate for the processor unit does not exceed the interrupt handling rate threshold value. [0121] Example 37 comprises the processor unit of Example 36, the instructions to further cause the execution logic to determine the interrupt handling rate for the processor unit.[0122] Example 38 comprises the processor unit of Example 36 or 37, wherein the interrupt handling rate is to be determined over a time interval of a pre-determined length prior to the receiving of the instruction to place the processor unit into the first idle state.[0123] Example 39 comprises the processor unit of any one of Examples 36-38, wherein the interrupt handling rate is to be determined over a time interval since the processor unit last entered an active state.[0124] Example 40 comprises the processor unit of any one of Examples 36-39, the instructions to further cause the execution logic to: transition the processor unit from the second idle state to an active state; handle an interrupt by the processor unit; and transition the processor unit from the active state back to the second idle state after the interrupt has been handled by the processor unit.
Methods and apparatus related to a mechanism for quickly adapting garbage collection resource allocation for an incoming I/O (Input/Output) workload are described. In one embodiment, non-volatile memory stores data corresponding to a first workload and a second workload. Allocation of one or more resources in the non-volatile memory is determined based at least in part on a determination of an average validity of one or more blocks, where the one or more blocks are to be processed during operation of the first workload or the second workload. Other embodiments are also disclosed and claimed.
1. An apparatus comprising: non-volatile memory to store data corresponding to a first workload and a second workload; and logic to determine an allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more blocks to be processed during operation of the first workload or the second workload. 2. The apparatus of claim 1, wherein the logic is to determine the allocation of the one or more resources for garbage collection logic and for a host coupled with the non-volatile memory. 3. The apparatus of claim 2, wherein the garbage collection logic is to free space occupied by invalid data in the non-volatile memory. 4. The apparatus of claim 2, wherein the logic to determine the allocation of the one or more resources comprises the garbage collection logic. 5. The apparatus of claim 1, wherein the logic is to determine the allocation of the one or more resources in the non-volatile memory based at least in part on a determination of the average validity of the one or more blocks to be processed during a transition from the first workload to the second workload. 6. The apparatus of claim 1, wherein the logic is to determine the allocation of the one or more resources to increase an effective spare space of the non-volatile memory. 7. The apparatus of claim 1, wherein the logic is to determine the allocation of the one or more resources to cause a write amplification in the non-volatile memory to be reduced. 8. The apparatus of claim 1, wherein the second workload immediately follows the first workload. 9. The apparatus of claim 1, wherein the first workload is an empty or idle workload. 10. The apparatus of claim 1, wherein the non-volatile memory and the logic are on a same integrated circuit device. 11. The apparatus of claim 1, wherein the non-volatile memory comprises one of: nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin-transfer torque random access memory (STTRAM), resistive random access memory, phase change memory (PCM), and byte-addressable three-dimensional crosspoint memory. 12. The apparatus of claim 1, wherein a solid state drive (SSD) comprises the non-volatile memory and the logic. 13. A method comprising: storing data corresponding to a first workload and a second workload in non-volatile memory; and determining an allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more blocks to be processed during operation of the first workload or the second workload. 14. The method of claim 13, further comprising determining the allocation of the one or more resources for garbage collection logic and for a host coupled with the non-volatile memory. 15. The method of claim 13, further comprising garbage collection logic freeing space in the non-volatile memory that is occupied by invalid data. 16. The method of claim 13, wherein determining the allocation of the one or more resources in the non-volatile memory increases an effective spare space of the non-volatile memory. 17. The method of claim 13, wherein determining the allocation of the one or more resources in the non-volatile memory causes a write amplification in the non-volatile memory to be reduced. 18. The method of claim 13, wherein the first workload is an empty or idle workload. 19. The method of claim 13, wherein the non-volatile memory comprises one of: nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin-transfer torque random access memory (STTRAM), resistive random access memory, phase change memory (PCM), and byte-addressable three-dimensional crosspoint memory. 20. The method of claim 13, further comprising determining the allocation of the one or more resources in the non-volatile memory based at least in part on a determination of an average validity of the one or more blocks to be processed during a transition from the first workload to the second workload. 21. A system comprising: non-volatile memory; and at least one processor core to access the non-volatile memory; the non-volatile memory to store data corresponding to a first workload and a second workload; and logic to determine an allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more blocks to be processed during operation of the first workload or the second workload. 22. The system of claim 21, wherein the logic is to determine the allocation of the one or more resources for garbage collection logic and for a host coupled with the non-volatile memory. 23. The system of claim 21, wherein the logic is to determine the allocation of the one or more resources to increase an effective spare space of the non-volatile memory. 24. A computer-readable medium comprising one or more instructions that, when executed on a processor, configure the processor to perform one or more operations of the method of any one of claims 13 to 20. 25. An apparatus comprising means for performing the method of any one of claims 13 to 20.
Mechanisms for adapting garbage collection resource allocation in solid-state drives

Related applications
This application claims the benefit of U.S. Application No. 14/672,084, filed March 27, 2015, under 35 U.S.C. 365(b). The entire content of Application No. 14/672,084 is hereby incorporated by reference.

Technical field
The present disclosure relates generally to the field of electronics. More specifically, some embodiments relate to mechanisms for managing memory allocation in a solid state drive (SSD).

Background
In general, memory used to store data in a computing system may be volatile (for storing volatile information) or non-volatile (for storing persistent information). Volatile data structures stored in volatile memory are typically used for temporary or intermediate information that is needed to support the functionality of a program during its runtime. Persistent data structures stored in non-volatile (or persistent) memory, on the other hand, remain available beyond the runtime of the program and can be reused. Moreover, new data is usually first generated as volatile data before a user or programmer decides to make it durable. For example, a programmer or user may cause volatile structures to be mapped (i.e., instantiated) in volatile main memory that is directly accessible by the processor. Persistent data structures, by contrast, are instantiated on non-volatile storage devices such as spinning disks attached to input/output (I/O or IO) buses, or on non-volatile memory based devices such as flash memory or solid state drives.

Brief description of the drawings
A detailed description is provided with reference to the drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items.
FIG. 1 and FIGS. 4-6 illustrate block diagrams of embodiments of computing systems that may be utilized to implement the various embodiments discussed herein.
FIGS. 2A and 2C show block diagrams of different methods for calculating resource allocation for garbage collection, in accordance with some embodiments.
FIGS. 2B, 2D, 2E, and 2F show graphs of sample curves according to some embodiments.
FIG. 3 shows a block diagram of the various components of a solid state drive, according to an embodiment.

Detailed description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of various embodiments. However, various embodiments may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments. Further, various aspects of the embodiments may be performed using, for example, instructions integrated into a semiconductor circuit ("hardware"), computer-readable instructions organized into one or more programs ("software"), or some combination of hardware and software. For the purposes of this disclosure, references to "logic" may mean hardware, software, firmware, or some combination thereof.
In general, random write bandwidth can be artificially high when an SSD is empty, where "empty" means just unboxed or immediately following a secure erase operation. When the SSD is empty, background cleanup (garbage collection) is not performed until data is written to the SSD.
Once data has been written to the drive, it reaches a level known as steady state, where host writes and garbage collection are properly balanced; this is where the performance of the SSD is measured. In addition, the way in which drives are written is a major difference between HDDs (Hard Disk Drives) and SSDs. Data on an HDD can be overwritten at any time by changing the magnetic information on the disc. With SSDs, data cannot simply be overwritten, because an SSD is made up of NAND flash memory. In general, NAND memory is arranged as pages, and pages are arranged into blocks. Data can only be written to empty (e.g., newly erased) pages. When a drive is new, all pages are empty and can therefore be written quickly. When most or all of the pages in the drive have been written, blocks must be erased to make room for new data. Erasing can occur only at block granularity, not page by page. To make it possible to erase blocks and move data, an SSD contains additional NAND that is not counted toward the advertised capacity of the drive; the amount of such additional NAND varies by drive model. This additional NAND, or spare area, is what allows the drive to service writes even when it is full of data.
As higher-capacity NAND technologies for solid-state drives (SSDs) evolve toward lower cost, the band size also grows, because it scales with the size of the NAND erase block ("EB"), the smallest erase granularity in NAND media. As discussed herein, "band" generally refers to a logical structure or block composed of the same EB across a number of NAND dies. Newer SSDs therefore have a smaller number of larger bands. In general, the main purpose of garbage collection is to free up space that is occupied by invalid data. As discussed herein, "invalid data" generally refers to data that is out of date and no longer considered usable. For example, the ATA Trim command (based on at least one instruction set architecture) allows a NAND block containing user data that is no longer used to be proactively marked as invalid. This allows the SSD to operate more efficiently by eliminating the need to move outdated data during internal garbage collection activities, and it improves write performance after large amounts of data have been discarded. Beyond this main purpose, some SSD garbage collection mechanisms also handle moving valid data during wear leveling and background data refresh (BDR) while maintaining consistent SSD performance. In order to perform all of these functions, a certain number of bands are reserved for garbage collection. These reserved bands do not count toward the effective spare of the drive. As discussed herein, "effective spare" generally refers to the amount of additional physical capacity in a drive that exceeds its reported logical capacity. As band size increases, the bands reserved for garbage collection occupy an ever greater percentage of the physical space that would otherwise serve as the effective spare of the SSD, degrading performance and reducing SSD lifetime (i.e., increasing Write Amplification (WA)).
In addition, initial garbage collection designs tend to trade away effective spare for simplicity of implementation, reserving additional bands to meet specific workload performance-consistency goals. However, as band size increases, that trade-off is no longer cost-effective.
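By way of illustration only, here is a toy Python model of how band-sized reservations consume effective spare; all figures below are hypothetical and are not taken from the disclosure:

```python
# Toy model of SSD spare accounting; all numbers are hypothetical.
def effective_spare_fraction(physical_bands: int,
                             logical_bands: int,
                             gc_reserved_bands: int) -> float:
    """Fraction of logical capacity left as effective spare after
    subtracting the bands held back for garbage collection."""
    raw_spare = physical_bands - logical_bands
    return (raw_spare - gc_reserved_bands) / logical_bands

# Small erase blocks -> many small bands: a 20-band reservation is cheap.
print(effective_spare_fraction(physical_bands=1100, logical_bands=1000,
                               gc_reserved_bands=20))   # ~0.08 (8% spare)

# Larger erase blocks -> the same capacity has few large bands, and even
# a smaller reservation now consumes most of the raw spare.
print(effective_spare_fraction(physical_bands=110, logical_bands=100,
                               gc_reserved_bands=8))    # ~0.02 (2% spare)
```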
In addition, under a steady-state workload, free space is produced at the same rate at which the host consumes it, given a certain resource allocation. During workload transitions, however, garbage collection may need to adapt its resource allocation, and while it does so it can lag behind the host. As a result, the host can consume free space faster than it is produced during the transition; if this is not managed properly, other system services may be starved of free space and the entire SSD may fail.
To prevent catastrophic failures during workload transitions, garbage collection can use some of its bands as reserved space for host consumption while it adapts its resources and catches up with the host. The amount of reserve space consumed depends on how quickly the resource allocation adapts to the new workload. By allowing garbage collection to adapt its resources more quickly, some of the bands reserved for garbage collection can be freed and returned as effective spare for the SSD.
To this end, some embodiments relate to mechanisms for quickly adapting garbage collection resource allocation to an incoming (e.g., I/O (input/output)) workload and maximizing the effective spare capacity of solid state drives (SSDs). These embodiments provide the ability to dynamically maximize the effective spare of an SSD by minimizing the operational spare requirements of garbage collection. By adding effective spare (e.g., in time), SSD performance is improved, write amplification is reduced, and overall drive life is increased. In addition, if the intended user or customer workload is well understood, less NAND media is required on the SSD to achieve the target spare level, reducing bill of materials (BOM) cost.
Moreover, while some embodiments are discussed with reference to NAND media, embodiments are not limited to NAND and may also be applied to NOR media. In addition, even though some embodiments are discussed with reference to SSDs (e.g., including NAND and/or NOR type memory cells), the embodiments are not limited to SSDs and may be used with other types of non-volatile storage devices (or non-volatile memory (NVM)), including, for example, one or more of the following: nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin-transfer torque random access memory (STTRAM), resistive random access memory, byte-addressable three-dimensional cross-point memory, PCM (phase change memory), and the like.
The techniques discussed herein may be provided in various computing systems (e.g., including non-mobile computing devices such as desktops, workstations, servers, carrier systems, etc., and mobile computing devices such as smartphones, tablets, UMPCs (Ultra-Mobile Personal Computers), laptop computers, Ultrabook™ computing devices, smart watches, smart glasses, smart bracelets, etc.), including those discussed with reference to FIGS. 1 through 6. More specifically, FIG. 1 illustrates a block diagram of a computing system 100, in accordance with an embodiment. The system 100 may include one or more processors 102-1 through 102-N (commonly referred to herein as "processors 102" or "processor 102"). The processors 102 may communicate via an interconnection network, or bus, 104.
For clarity, various components are discussed with reference to the processor 102-1 only; each of the remaining processors 102-2 through 102-N may include the same or similar components.
In an embodiment, the processor 102-1 may include one or more processor cores 106-1 through 106-M (referred to herein as "cores 106," or more generally as "core 106"), a cache 108 (which may be a shared cache or a private cache in various embodiments), and/or a router 110. The processor cores 106 may be implemented on a single integrated circuit (IC) chip. Moreover, the chip may include one or more shared and/or private caches (e.g., cache 108), buses or interconnects (e.g., bus or interconnect 112), logic 120, memory controllers (such as those discussed with reference to FIG. 6), or other components.
In one embodiment, the router 110 may be used to communicate between various components of the processor 102-1 and/or the system 100. Moreover, the processor 102-1 may include more than one router 110, and the multitude of routers 110 may be in communication to enable data routing between various components inside or outside of the processor 102-1.
The cache 108 may store data (e.g., including instructions) that is utilized by one or more components of the processor 102-1, such as the cores 106. For example, the cache 108 may locally cache data stored in the memory 114 for faster access by the components of the processor 102. As shown in FIG. 1, the memory 114 may be in communication with the processors 102 via the interconnect 104. In an embodiment, the cache 108 (which may be shared) may have various levels; for example, the cache 108 may be a mid-level cache and/or a last-level cache (LLC). Also, each of the cores 106 may include a level 1 (L1) cache (116-1) (generally referred to herein as "L1 cache 116"). Various components of the processor 102-1 may communicate with the cache 108 directly, through a bus (e.g., the bus 112), and/or through a memory controller or hub.
As shown in FIG. 1, the memory 114 may be coupled to other components of the system 100 through a memory controller 120. The memory 114 includes volatile memory and may be interchangeably referred to as main memory. Even though the memory controller 120 is shown coupled between the interconnect 104 and the memory 114, the memory controller 120 may be located elsewhere in the system 100. For example, the memory controller 120, or portions of it, may be provided within one of the processors 102 in some embodiments.
The system 100 may also include a non-volatile (NV) storage device, such as an SSD 130 coupled to the interconnect 104 via SSD controller logic 125. Hence, the logic 125 may control access by various components of the system 100 to the SSD 130. Furthermore, even though the logic 125 is shown as directly coupled to the interconnect 104 in FIG. 1, the logic 125 may alternatively communicate with one or more other components of the system 100 via a storage bus/interconnect (such as a SATA (Serial Advanced Technology Attachment) bus, a Peripheral Component Interconnect (PCI) (or PCI Express (PCIe)) bus, etc.), for example where the storage bus is coupled to the interconnect 104 via some other logic such as a bus bridge or chipset (e.g., such as discussed with reference to FIGS. 4-6). Additionally, in various embodiments the logic 125 may be incorporated into memory controller logic (such as that discussed with reference to FIGS. 1 and 4-6), or provided on the same integrated circuit (IC) device (e.g., on the same IC device as the SSD 130, or in the same enclosure as the SSD 130).
Furthermore, the logic 125 and/or the SSD 130 may be coupled to one or more sensors (not shown) that receive information (e.g., in the form of one or more bits or signals) indicating the status of, or values detected by, the one or more sensors. These sensors may be provided proximate to components of the system 100 (or other computing systems discussed herein, such as those discussed with reference to other figures, including FIGS. 4-6), including the cores 106, the interconnects 104 or 112, components outside of the processor 102, the SSD 130, the SSD bus, the SATA bus, the logic 125, the logic 160, etc., to sense variations in factors affecting the power/thermal behavior of the system/platform, such as temperature, operating frequency, operating voltage, power consumption, and/or inter-core communication activity.
As shown in FIG. 1, the SSD 130 may include logic 160, which may be in the same enclosure as the SSD 130 and/or fully integrated on a printed circuit board (PCB) of the SSD 130. The logic 160 provides a mechanism for quickly adapting garbage collection resource allocation to an incoming I/O (input/output) workload, as discussed herein, e.g., with reference to FIGS. 2A-6.
More specifically, an embodiment (also referred to as forward MAV (Moving Average Validity), or FMAV) allows garbage collection to adapt its resources more quickly to changing workloads, thus reducing the number of bands it requires. This in turn translates into more effective spare, better performance, and longer SSD lifetime. To accomplish this, garbage collection can examine the state of the bands that are candidates for garbage collection instead of the state of the bands it has just processed. By examining the amount of valid data in the candidate bands, garbage collection has a better representation of the resources required by the incoming workload and can adapt its resource allocation more quickly.
In some implementations (e.g., the flow/block diagram of FIG. 2A), the amount of valid data in a band is recorded whenever the band (e.g., from the queue of candidate bands 208) is processed by garbage collection 204. The moving average 202 of the validity of the last (e.g., 32) bands processed is referred to as the rear-view moving average validity (or rear-view MAV). This is used to characterize the workload and to calculate how many resources garbage collection 204 needs. Moving average validity is a good indicator of resource needs because the amount of valid data is directly related to how much work garbage collection has to do. For example, if the last 32 bands processed contained 20% valid data, the logic/firmware allocates 20% of the system's buffers (plus some adder for overhead) to the garbage collection effort.
However, the rear-view MAV is a historical running average over the last 32 bands. This means that an incoming workload (one that requires more resources) is initially given the same amount of resources as the previous workload. There can be a relatively long delay between a host workload change and the rear-view MAV settling to the correct value. During this time garbage collection lags behind the host, free-space production does not match host consumption, and free space drops. Extra bands are required to serve as reserved space for host consumption until garbage collection can catch up.
Without additional bands to act as a buffer, other key system services would be starved of free space and the system would fail.
FIG. 2B shows a graph of available free space over time (where available free space is indicated in terms of available indirection elements, labeled "IND" in the drawing). More specifically, FIG. 2B highlights the effect of the rear-view MAV on free space during a workload transition. The workload transition occurs around arrow 210. The workload on the left is characterized by a MAV of 0 (rear-view MAV), while the workload on the right has a MAV of about 80. "Available" is the amount of free space the system currently has. At "Normal," free-space production equals host consumption. The goal is to keep "Available" at or very near "Normal," except for workloads with MAV = 0. When "Available" hits "Critical," other system services are starved of free space and the system may fail, as described earlier.
In FIG. 2B, "Available" drops significantly below "Normal" but does not reach "Critical," because of the extra bands between "Normal" and "Critical" that are reserved for host consumption. Note how "Available" recovers toward "Normal" as the MAV slowly approaches its correct value. If the MAV changed more quickly, "Available" would not drop significantly below "Normal," and several bands of reserved space could be returned as effective spare for the SSD.
In an embodiment, a forward MAV (FMAV) may be used to prevent the free space from dropping significantly below "Normal" (see, e.g., FIG. 2C, showing a block/flow diagram of an FMAV mechanism according to an embodiment). In an embodiment, the logic 160 includes one or more of the blocks shown in FIG. 2C. Candidate bands for the garbage collection process 226 are organized in a queue 228. By calculating the moving average validity 222 from the validity of the bands in the queue, the garbage collection 224 has a better representation of the resources required by the incoming workload and can adapt its resource allocation more quickly.
FIG. 2D shows a graph of available free space over time. More specifically, FIG. 2D highlights the effect of the FMAV on free space during a workload transition. The workload transition occurs around arrow 250. The workload on the left is characterized by a MAV of 0 (FMAV), and the workload on the right has a MAV of about 80. Because the MAV responds quickly to the change in workload, "Available" does not drop significantly below "Normal," and a number of the reserved bands can be returned to the SSD as effective spare. A minimal sketch contrasting the two averaging strategies follows.
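The following is a minimal Python sketch contrasting the rear-view MAV with the forward MAV; it is not the firmware itself, the names are illustrative, and the 32-band window merely matches the example above:

```python
from collections import deque

WINDOW = 32  # illustrative history depth, per the example above

def rear_view_mav(processed_validities, window=WINDOW):
    """Average validity of the last `window` bands already processed.
    Lags the workload: it only reflects bands after GC has relocated them."""
    recent = list(processed_validities)[-window:]
    return sum(recent) / len(recent)

def forward_mav(candidate_queue):
    """Average validity of the bands queued as GC candidates.
    Reflects the incoming workload before those bands are processed."""
    return sum(candidate_queue) / len(candidate_queue)

def gc_buffer_share(mav, overhead=0.02):
    """Fraction of system buffers to allocate to garbage collection:
    the validity fraction of the bands to relocate, plus a small adder."""
    return min(1.0, mav + overhead)

# Example: the workload shifts from 20%-valid bands to 80%-valid bands.
history = deque([0.2] * WINDOW, maxlen=WINDOW)   # what GC just processed
candidates = [0.8] * 16                          # what GC is about to process
print(gc_buffer_share(rear_view_mav(history)))   # ~0.22: still sized for the old load
print(gc_buffer_share(forward_mav(candidates)))  # ~0.82: sized for the incoming load
```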
The current workload, and the amount of time garbage collection takes to adapt its resource allocation during a workload transition, are two of the major factors that determine the number of reserved bands garbage collection needs to ensure SSD functionality and performance consistency. Workloads that generate high WA require fewer reserved bands than low-WA workloads. By detecting the workload running on the SSD, garbage collection can release unneeded reserved bands as effective spare while still maintaining performance consistency. In addition, by quickly adapting its resource allocation during workload transitions, the garbage collection 224 can free up additional reserved bands as effective spare while maintaining SSD functionality.
As mentioned earlier, some garbage collection implementations retain a fixed, worst-case number of bands for SSD functionality and performance consistency. According to some embodiments, by contrast, bands that are not needed for the current workload are returned as effective spare, with the highest-WA workloads receiving the most spare. Extra effective spare translates into better performance and lower WA.
To accomplish this, the garbage collection logic (e.g., logic 226 of FIG. 2C) checks the moving average validity (MAV) of the bands to be processed, and the number of bands reserved for garbage collection is adjusted based on the MAV. Because MAV and WA are directly related, workloads with high WA and MAV are assigned fewer reserved bands and more effective spare, while workloads with lower WA/MAV receive the reverse.
By way of contrast, in some implementations SSDs statically allocate a certain number of reserved bands for garbage collection at power-up. For example, ten bands are reserved to maintain performance consistency during normal operation, and an extra ten bands are reserved to maintain SSD functionality during workload transitions. The number of reserved bands remains fixed and does not change, even when some of those bands are not needed for the current workload. For example, suppose performance consistency requires that drive performance not fall below 90% of average. To meet this requirement, garbage collection uses the amount of free space in the SSD and the WA of the workload as feedback to determine how many resources and how much bandwidth it needs. Since WA and MAV are directly related, the logic/firmware uses the MAV of the bands to be processed as a measure of the WA of the workload. FIG. 2E shows a graph of the relationship between WA, free space, and host/garbage-collection bandwidth (the garbage collection bandwidth is the complement of the host bandwidth).
The reason for the ten-band performance-consistency buffer (Normal to Corner in FIG. 2E) is the fully valid bands sent to the garbage collection logic by BDR and by wear leveling at regular intervals; BDR and wear-leveling bands can have any validity, up to 100%. If the WA of a given host workload is very low, e.g., close to 1, then the host receives substantially 100% of the write bandwidth (curve 270 in FIG. 2E). However, the host also tolerates up to 10% performance degradation while garbage collection processes BDR and wear leveling; in this case garbage collection is allowed at most 10% of the write bandwidth. This means the host will consume ten bands of space for every BDR/wear-leveling band that garbage collection rewrites. Importantly, the ten-band performance-consistency buffer is only needed for low-WA workloads (e.g., WA of about 1), because there garbage collection may consume at most 10% of the bandwidth. When WA is high, garbage collection is allowed enough bandwidth to exactly match the host's free-space consumption (plot 274 in FIG. 2E) while still maintaining performance consistency.
In addition, free space can drop significantly during a workload transition, until garbage collection adapts its resources to the incoming workload. Additional bands (Corner to Critical in FIG. 2E) are needed to act as reserved space for host consumption until garbage collection catches up; without this extra buffer, other key system services would be starved of free space and the system would fail. The amount of reserve space consumed depends on how quickly the resources adapt to the new workload; if they adapt quickly, some of the bands between Corner and Critical can be released as effective spare. The sketch below illustrates the bandwidth split implied by FIG. 2E.
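As a rough illustration only (a sketch under a simple greedy-style model, not the patented method itself): if candidate bands are a fraction v valid, freeing one band requires relocating v bands' worth of valid data, so WA ≈ 1/(1−v) and garbage collection needs roughly a v share of the write bandwidth; the 10% floor models the performance-consistency allowance described above.

```python
def bandwidth_split(mav, gc_floor=0.10):
    """Split write bandwidth between host and GC from the forward MAV.

    mav: average validity (0..1) of the candidate bands.
    gc_floor: minimum GC share reserved for BDR/wear-leveling traffic,
    matching the 10% performance-consistency allowance above.
    """
    wa = 1.0 / (1.0 - mav) if mav < 1.0 else float("inf")
    gc_share = max(mav, gc_floor)      # GC relocation writes per total write
    host_share = 1.0 - gc_share
    return wa, gc_share, host_share

for v in (0.0, 0.2, 0.8):
    wa, gc, host = bandwidth_split(v)
    print(f"MAV={v:.1f}  WA~{wa:.2f}  GC={gc:.0%}  host={host:.0%}")
# MAV=0.0  WA~1.00  GC=10%  host=90%   (low-WA case: the 10% floor applies)
# MAV=0.2  WA~1.25  GC=20%  host=80%
# MAV=0.8  WA~5.00  GC=80%  host=20%   (high-WA case: GC matches host consumption)
```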
To this end, in some embodiments, the number of reserved bands required to maintain performance consistency and SSD functionality is recalculated based on the MAV each time a band is processed by the garbage collection logic (e.g., logic 224 of FIG. 2C). Because only low-WA workloads require the ten-band performance-consistency buffer (Normal to Corner in FIG. 2E), the buffer is reduced as the MAV begins to increase; conversely, when the MAV starts to decrease, the buffer is grown again. In addition, the mechanism (e.g., logic 160) that speeds up garbage collection resource allocation during workload transitions also allows the ten-band functionality buffer (Corner to Critical in FIG. 2E) to be reduced.
FIG. 2F shows a graph of available free space versus time, according to an embodiment. More specifically, FIG. 2F shows how the number of reserved bands changes with the WA of the workload. As shown in FIG. 2F, during timestamps 1000-8500 and 14800-21000, a high-WA (e.g., greater than 2) workload (4K random writes) runs, and Normal-to-Corner = Corner-to-Critical = 2 reserved bands. During timestamps 8501-14799, a low-WA (approximately equal to 1) workload (128K sequential writes) runs, and Normal-to-Corner = Corner-to-Critical = 10 reserved bands. A toy mapping consistent with these figures is sketched below.
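For instance, a toy sketch of recalculating the two buffers from the MAV; the cutoff threshold is hypothetical and is chosen merely to be consistent with the FIG. 2F example above:

```python
def reserved_bands(mav, low_wa_cutoff=0.5):
    """Recalculate the Normal-to-Corner and Corner-to-Critical buffers
    whenever a band is processed. Low-MAV (low-WA) workloads need the
    full ten-band buffers; high-MAV workloads release most of them."""
    if mav < low_wa_cutoff:          # e.g., ~1x-WA sequential writes
        normal_to_corner = corner_to_critical = 10
    else:                            # e.g., >2x-WA random writes
        normal_to_corner = corner_to_critical = 2
    return normal_to_corner, corner_to_critical

print(reserved_bands(0.1))  # (10, 10): low-WA workload, full buffers kept
print(reserved_bands(0.8))  # (2, 2): high-WA workload, 16 bands freed as spare
```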
FIG. 3 shows a block diagram of the various components of an SSD, according to an embodiment. The logic 160 may be located in various locations, such as inside the SSD or in the SSD controller logic, e.g., as shown in FIG. 3. The SSD 130 includes controller logic 382, which in turn includes one or more processor cores or processors 384 and memory controller logic 386, random access memory (RAM) 388, firmware storage 390, and one or more memory modules or dies 392-1 through 392-n (which may include NAND flash, NOR flash, or other types of non-volatile memory, such as those discussed with reference to FIGS. 2A-2F). The memory modules 392-1 through 392-n are coupled to the memory controller logic 386 via one or more memory channels or buses. Also, the SSD 130 communicates with the logic 125 via an interface such as a SATA, SAS, or PCIe (Peripheral Component Interconnect Express) interface. One or more of the operations discussed with reference to FIGS. 1-6 may be performed by one or more of the components of FIG. 3; e.g., the processors 384 and/or the controller 382 may write data to, or read data from, the memory modules 392-1 through 392-n. Also, one or more of the operations of FIGS. 1-6 may be programmed into the firmware 390. Furthermore, the controller 382 may include the logic 160.
Accordingly, some embodiments provide one or both of the following mechanisms:
(1) By detecting the workload running on the SSD, garbage collection can release unneeded reserved bands as effective spare while still maintaining performance consistency:
a. The number of bands required to retain performance consistency and SSD functionality is recalculated based on the MAV whenever a band is processed by garbage collection.
b. The buffer is reduced as the MAV begins to increase (as a result, effective spare, performance, and endurance increase, and WA decreases, because the ten-band performance-consistency buffer (Normal to Corner in FIG. 2E) is cut back). Conversely, when the MAV starts to decrease, the buffer is grown back to its original value.
(2) In addition, by quickly adapting its resource allocation during workload transitions, garbage collection can free up additional reserved bands as effective spare while maintaining SSD functionality:
a. Candidate bands for garbage collection are organized into a queue. By calculating the moving average validity from the validity of the bands in the queue, garbage collection has a better representation of the resources needed for the incoming workload and can adapt its resource allocation more quickly. Because the MAV responds quickly to changes in the workload, "Available" does not drop significantly, and some of the extra reserved bands (those previously used to keep the system from running out of free space and failing) can be returned to the SSD as effective spare, maximizing the effect of (1) above.
FIG. 4 shows a block diagram of a computing system 400 according to an embodiment. The computing system 400 may include one or more central processing units (CPUs) 402, or processors, that communicate via an interconnection network (or bus) 404. The processors 402 may include a general-purpose processor, a network processor (to process data communicated over a computer network 403), an application processor (such as those used in cellular phones, smart phones, etc.), or other types of processors (including reduced instruction set computer (RISC) processors or complex instruction set computer (CISC) processors). Various types of computer networks 403 may be utilized, including wired (e.g., Ethernet, Gigabit, fiber, etc.) or wireless networks (e.g., cellular, 3G (third-generation cellular phone technology or 3rd Generation Wireless Format), 4G, Low Power Embedded (LPE), etc.). Moreover, the processors 402 may have a single-core or multi-core design. Processors 402 with a multi-core design may integrate different types of processor cores on the same integrated circuit (IC) die. Also, processors 402 with a multi-core design may be implemented as symmetric or asymmetric multiprocessors.
In embodiments, one or more of the processors 402 may be the same as or similar to the processor 102 of FIG. 1. For example, one or more of the processors 402 may include one or more of the cores 106 and/or the cache 108. Also, the operations discussed with reference to FIGS. 1-3 may be performed by one or more components of the system 400.
A chipset 406 may also communicate with the interconnection network 404. The chipset 406 may include a graphics and memory control hub (GMCH) 408. The GMCH 408 may include a memory controller 410 (which in an embodiment may be the same as or similar to the memory controller 120 of FIG. 1) that communicates with the memory 114. The memory 114 may store data, including sequences of instructions that are executed by the CPU 402 or any other device included in the computing system 400. Further, the system 400 includes the logic 125, the SSD 130, and/or the logic 160 (which may be coupled to the system 400 via the bus 422 as shown, via other interconnects such as 404, where the logic 125 is incorporated into the chipset 406, etc., in various embodiments). In one embodiment, the memory 114 may include one or more volatile storage (or memory) devices such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), or other types of storage devices.
Nonvolatile memory may also be utilized, such as a hard disk drive, flash memory, etc., including any of the NVMs discussed herein. Additional devices, such as multiple CPUs and/or multiple system memories, may communicate via the interconnection network 404.
The GMCH 408 may also include a graphics interface 414 that communicates with a graphics accelerator 416. In one embodiment, the graphics interface 414 may communicate with the graphics accelerator 416 via an accelerated graphics port (AGP) or peripheral component interconnect (PCI) (or PCI Express (PCIe)) interface. In an embodiment, a display 417 (such as a flat panel display, touch screen, etc.) may communicate with the graphics interface 414 through, for example, a signal converter that translates a digital representation of an image stored in a storage device (such as video memory or system memory) into display signals that are interpreted and displayed by the display. The display signals produced by the display device may pass through various control devices before being interpreted by, and subsequently displayed on, the display 417.
A hub interface 418 may allow the GMCH 408 and an input/output control hub (ICH) 420 to communicate. The ICH 420 may provide an interface to I/O devices that communicate with the computing system 400. The ICH 420 may communicate with a bus 422 through a peripheral bridge (or controller) 424, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, or other types of peripheral bridges or controllers. The bridge 424 may provide a data path between the CPU 402 and peripheral devices. Other types of topologies may be utilized. Also, multiple buses may communicate with the ICH 420, e.g., through multiple bridges or controllers. Moreover, other peripherals in communication with the ICH 420 may include, in various embodiments, integrated drive electronics (IDE) or small computer system interface (SCSI) hard drives, USB ports, a keyboard, a mouse, parallel ports, serial ports, floppy disk drives, digital output support (e.g., a digital video interface (DVI)), or other devices.
The bus 422 may communicate with an audio device 426, one or more disk drives 428, and a network interface device 430 (which is in communication with the computer network 403, e.g., via a wired or wireless interface). As shown, the network interface device 430 may be coupled to an antenna 431 to wirelessly (e.g., via an Institute of Electrical and Electronics Engineers (IEEE) 802.11 interface (including IEEE 802.11a/b/g/n/ac, etc.), cellular interface, 3G, 4G, LPE, etc.) communicate with the network 403. Other devices may communicate via the bus 422. Also, various components (such as the network interface device 430) may communicate with the GMCH 408 in some embodiments. In addition, the processor 402 and the GMCH 408 may be combined to form a single chip. Furthermore, the graphics accelerator 416 may be included within the GMCH 408 in other embodiments.
Furthermore, the computing system 400 may include volatile and/or nonvolatile memory (or storage). For example, nonvolatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically EPROM (EEPROM), a disk drive (e.g., 428), a floppy disk, a compact disk ROM (CD-ROM), a digital versatile disk (DVD), flash memory, a magneto-optical disk, or other types of nonvolatile machine-readable media capable of storing electronic data (e.g., including instructions).
FIG. 5 illustrates a computing system 500 that is arranged in a point-to-point (PtP) configuration, according to an embodiment. In particular, FIG. 5 shows a system where processors, memory, and input/output devices are interconnected by a number of point-to-point interfaces. The operations discussed with reference to FIGS. 1-4 may be performed by one or more components of the system 500.
As illustrated in FIG. 5, the system 500 may include several processors, of which only two, processors 502 and 504, are shown for clarity. The processors 502 and 504 may each include a local memory controller hub (MCH) 506 and 508 to enable communication with memories 510 and 512. The memories 510 and/or 512 may store various data, such as those discussed with reference to the memory 114 of FIG. 1 and/or FIG. 4. Also, the MCHs 506 and 508 may include the memory controller 120 in some embodiments. Furthermore, the system 500 includes the logic 125, the SSD 130, and/or the logic 160 (which may be coupled to the system 500 via the bus 540/544 as shown, via other point-to-point connections to the processor(s) 502/504 or the chipset 520, where the logic 125 is incorporated into the chipset 520, etc., in various embodiments).
In an embodiment, the processors 502 and 504 may be one of the processors 402 discussed with reference to FIG. 4. The processors 502 and 504 may exchange data via a point-to-point (PtP) interface 514 using PtP interface circuits 516 and 518, respectively. Also, the processors 502 and 504 may each exchange data with a chipset 520 via individual PtP interfaces 522 and 524 using point-to-point interface circuits 526, 528, 530, and 532. The chipset 520 may further exchange data with a high-performance graphics circuit 534 via a high-performance graphics interface 536, e.g., using a PtP interface circuit 537. As discussed with reference to FIG. 4, the graphics interface 536 may be coupled to a display device (e.g., display 417) in some embodiments.
As shown in FIG. 5, one or more of the cores 106 and/or the cache 108 of FIG. 1 may be located within the processors 502 and 504. Other embodiments, however, may exist in other circuits, logic units, or devices within the system 500 of FIG. 5. Furthermore, other embodiments may be distributed throughout several circuits, logic units, or devices illustrated in FIG. 5.
The chipset 520 may communicate with a bus 540 using a PtP interface circuit 541. The bus 540 may have one or more devices that communicate with it, such as a bus bridge 542 and I/O devices 543. Via a bus 544, the bus bridge 542 may communicate with other devices such as a keyboard/mouse 545, communication devices 546 (such as modems, network interface devices, or other communication devices that may communicate with the computer network 403, as discussed with reference to the network interface device 430, e.g., including via the antenna 431), audio I/O devices, and/or a data storage device 548. The data storage device 548 may store code 549 that may be executed by the processors 502 and/or 504.
In some embodiments, one or more of the components discussed herein may be embodied as a system-on-chip (SOC) device. FIG. 6 illustrates a block diagram of an SOC package in accordance with an embodiment. As illustrated in FIG. 6, the SOC 602 includes one or more central processing unit (CPU) cores 620, one or more graphics processor unit (GPU) cores 630, an input/output (I/O) interface 640, and a memory controller 642. Various components of the SOC package 602 may be coupled to an interconnect or bus, e.g., as discussed herein with reference to the other figures.
Also, the SOC package 602 may include more or fewer components, such as those discussed herein with reference to the other figures. Further, each component of the SOC package 620 may include one or more other components, e.g., as discussed with reference to the other figures herein. In one embodiment, the SOC package 602 (and its components) is provided on one or more integrated circuit (IC) dies, e.g., which are packaged onto a single semiconductor device.
As illustrated in FIG. 6, the SOC package 602 is coupled to a memory 660 (which may be similar to or the same as the memory discussed herein with reference to the other figures) via the memory controller 642. In an embodiment, the memory 660 (or a portion of it) may be integrated on the SOC package 602.
The I/O interface 640 may be coupled to one or more I/O devices 670, e.g., via an interconnect and/or a bus such as discussed herein with reference to the other figures. The I/O devices 670 may include one or more of a keyboard, a mouse, a touchpad, a display, an image/video capture device (such as a camera or camcorder/video recorder), a touch screen, a speaker, or the like. Furthermore, the SOC package 602 may include/integrate the logic 125 in an embodiment. Alternatively, the logic 125 may be provided outside of the SOC package 602 (i.e., as discrete logic).
The following examples pertain to further embodiments. Example 1 includes an apparatus comprising: non-volatile memory to store data corresponding to a first workload and a second workload; and logic to determine allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more candidate bands to be processed during operation of the first workload or the second workload. Example 2 includes the apparatus of example 1, wherein the logic is to determine the allocation of the one or more resources for garbage collection logic and a host coupled to the non-volatile memory. Example 3 includes the apparatus of example 2, wherein the garbage collection logic is to free space occupied by invalid data in the non-volatile memory. Example 4 includes the apparatus of example 2, wherein the logic to determine the allocation of the one or more resources comprises the garbage collection logic. Example 5 includes the apparatus of example 1, wherein the logic is to determine the allocation of the one or more resources in the non-volatile memory based at least in part on a determination of the average validity of the one or more candidate bands to be processed during a transition from the first workload to the second workload. Example 6 includes the apparatus of example 1, wherein the logic is to determine the allocation of the one or more resources to cause an increase in effective spare space of the non-volatile memory. Example 7 includes the apparatus of example 1, wherein the logic is to determine the allocation of the one or more resources to cause a decrease in write amplification in the non-volatile memory. Example 8 includes the apparatus of example 1, wherein the second workload immediately follows the first workload. Example 9 includes the apparatus of example 1, wherein the first workload is an empty or idle workload. Example 10 includes the apparatus of example 1, wherein the non-volatile memory and the logic are on a same integrated circuit device.
Example 11 includes the apparatus of example 1, wherein the non-volatile memory comprises one of: nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin-transfer torque random access memory (STTRAM), resistive random access memory, phase change memory (PCM), and byte-addressable three-dimensional cross-point memory. Example 12 includes the apparatus of example 1, wherein an SSD comprises the non-volatile memory and the logic.
Example 13 includes a method comprising: storing data corresponding to a first workload and a second workload in non-volatile memory; and determining allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more candidate bands to be processed during operation of the first workload or the second workload. Example 14 includes the method of example 13, further comprising determining the allocation of the one or more resources for garbage collection logic and a host coupled to the non-volatile memory. Example 15 includes the method of example 13, further comprising the garbage collection logic freeing space in the non-volatile memory occupied by invalid data. Example 16 includes the method of example 13, wherein determining the allocation of the one or more resources in the non-volatile memory causes an increase in effective spare space of the non-volatile memory. Example 17 includes the method of example 13, wherein determining the allocation of the one or more resources in the non-volatile memory causes a decrease in write amplification in the non-volatile memory. Example 18 includes the method of example 13, wherein the first workload is an empty or idle workload. Example 19 includes the method of example 13, wherein the non-volatile memory comprises one of: nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin-transfer torque random access memory (STTRAM), resistive random access memory, phase change memory (PCM), and byte-addressable three-dimensional cross-point memory. Example 20 includes the method of example 13, further comprising determining the allocation of the one or more resources in the non-volatile memory based at least in part on a determination of an average validity of the one or more candidate bands to be processed during a transition from the first workload to the second workload.
Example 21 includes a system comprising: non-volatile memory; and at least one processor core to access the non-volatile memory; the non-volatile memory to store data corresponding to a first workload and a second workload; and logic to determine allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more candidate bands to be processed during operation of the first workload or the second workload. Example 22 includes the system of example 21, wherein the logic is to determine the allocation of the one or more resources for garbage collection logic and a host coupled to the non-volatile memory. Example 23 includes the system of example 21, wherein the logic is to determine the allocation of the one or more resources to cause an increase in effective spare space of the non-volatile memory.
Example 24 includes the system of example 21, wherein the logic is to determine the allocation of the one or more resources to cause a decrease in write amplification in the non-volatile memory. Example 25 includes the system of example 21, wherein the first workload is an empty or idle workload.
Example 26 includes a computer-readable medium comprising one or more instructions that, when executed on a processor, configure the processor to perform one or more operations to: store data corresponding to a first workload and a second workload in non-volatile memory; and determine allocation of one or more resources in the non-volatile memory based at least in part on a determination of an average validity of one or more candidate bands to be processed during operation of the first workload or the second workload. Example 27 includes the computer-readable medium of example 26, further comprising one or more instructions that, when executed on the processor, configure the processor to perform one or more operations to cause determination of the allocation of the one or more resources for garbage collection logic and a host coupled to the non-volatile memory. Example 28 includes the computer-readable medium of example 26, further comprising one or more instructions that, when executed on the processor, configure the processor to perform one or more operations to cause the garbage collection logic to free space in the non-volatile memory occupied by invalid data. Example 29 includes the computer-readable medium of example 26, further comprising one or more instructions that, when executed on the processor, configure the processor to perform one or more operations to cause determination of the allocation of the one or more resources in the non-volatile memory to increase the effective spare space of the non-volatile memory. Example 30 includes the computer-readable medium of example 26, further comprising one or more instructions that, when executed on the processor, configure the processor to perform one or more operations to cause determination of the allocation of the one or more resources in the non-volatile memory to decrease write amplification in the non-volatile memory. Example 31 includes the computer-readable medium of example 26, wherein the first workload is an empty or idle workload. Example 32 includes the computer-readable medium of example 26, wherein the non-volatile memory comprises one of: nanowire memory, ferroelectric transistor random access memory (FeTRAM), magnetoresistive random access memory (MRAM), flash memory, spin-transfer torque random access memory (STTRAM), resistive random access memory, phase change memory (PCM), and byte-addressable three-dimensional cross-point memory.
Example 33 includes the computer-readable medium of example 26, further comprising one or more instructions that, when executed on the processor, configure the processor to perform one or more operations to determine the allocation of the one or more resources in the non-volatile memory based at least in part on a determination of an average validity of the one or more candidate bands to be processed during a transition from the first workload to the second workload.
Example 34 includes an apparatus comprising means for performing a method as set forth in any preceding example. Example 35 includes machine-readable storage including machine-readable instructions that, when executed, implement a method or realize an apparatus as set forth in any preceding example.
In various embodiments, the operations discussed herein, e.g., with reference to FIGS. 1-6, may be implemented as hardware (e.g., circuitry), software, firmware, microcode, or combinations thereof, which may be provided as a computer program product, e.g., including a tangible (e.g., non-transitory) machine-readable or computer-readable medium having stored thereon instructions used to program a computer to perform a process discussed herein. Also, the term "logic" may include, by way of example, software, hardware, or combinations of software and hardware. The machine-readable medium may include a storage device such as those discussed with respect to FIGS. 1-6.
Additionally, such tangible computer-readable media may be downloaded as a computer program product, wherein the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals (e.g., in a carrier wave or other propagation medium) via a communication link (e.g., a bus, a modem, or a network connection).
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not all refer to the same embodiment.
Also, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but still cooperate or interact with each other.
Thus, although embodiments have been described in language specific to structural features and/or methodological acts, it is to be understood that the claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claimed subject matter.
A bipolar transistor breakdown voltage enhancement circuit extends the voltage swing which can be tolerated at an output terminal driven by an emitter follower-connected bipolar output transistor operating in a high impedance state. The enhancement circuit connects the base of the output transistor to a voltage which extends the allowable high impedance output voltage swing: an NPN output transistor's base is tied to a voltage that is the lower of the output voltage or ground, and a PNP output transistor's base is tied to a voltage which is the higher of the output voltage or VDD.
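Restated in equation form (a consolidation of the swings recited in the claims below; BVces and BVecs denote the collector-emitter and emitter-collector breakdown voltages with the base shorted):

```latex
% Allowable high-impedance output swing with the base-steering scheme:
% NPN output stage, base tied to min(V_out, GND):
V_{DD} - BV_{ces} \;\le\; V_{out} \;\le\; V_{DD} + BV_{ecs}
% PNP output stage, base tied to max(V_out, V_DD):
GND - BV_{ecs} \;\le\; V_{out} \;\le\; GND + BV_{ces}
% Complementary (NPN + PNP) stage:
GND - BV_{ecs}^{PNP} \;\le\; V_{out} \;\le\; V_{DD} + BV_{ecs}^{NPN}
```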
I claim: 1. A bipolar breakdown enhancement circuit for use with a tri-state output stage, comprising:a supply voltage, a ground terminal which is at a ground potential, an NPN emitter follower-connected bipolar output transistor, the collector-emitter circuit of said output transistor connected between said supply voltage and an output terminal, said transistor arranged to produce an output voltage at said output terminal which is referenced to said ground potential, a comparison circuit which produces an output equal to the lesser of said output voltage and said ground potential, wherein said comparison circuit comprises first and second FETs, said first FET connected between said ground terminal and a common node and arranged to conduct a first current to said common node in response to said output voltage and said second FET connected between said output terminal and said common node and arranged to conduct a second current to said common node in response to said ground potential, said common node being the output of said comparison circuit, and a switch arranged to connect the base of said output transistor to the output of said comparison circuit in response to a tri-state control signal which is asserted when said output transistor is to be placed in a high-impedance state. 2. The breakdown enhancement circuit of claim 1, wherein said switch comprises a FET connected between the base of said output transistor and said common node.3. The breakdown enhancement circuit of claim 1, wherein said output transistor has shorted base breakdown voltage parameters BVces and BVecs, and the allowable output swing at said output terminal when said output transistor is in a high-impedance state is given by:(VDD-BVces) to (VDD+BVecs), where VDD is equal to said supply voltage. 4. A bipolar breakdown enhancement circuit for use with a tri-state output stage, comprising:a supply voltage, a ground terminal which is at a ground potential, a PNP emitter follower-connected bipolar output transistor, the collector-emitter circuit of said output transistor connected between said ground terminal and an output terminal, said transistor arranged to produce an output voltage at said output terminal which is referenced to said ground potential, a comparison circuit which produces an output equal to the greater of said output voltage and said supply voltage, wherein said comparison circuit comprises first and second FETs, said first FET connected between said supply voltage and a common node and arranged to conduct a first current to said common node in response to said output voltage and said second FET connected between said output terminal and said common node and arranged to conduct a second current to said common node in response to said supply voltage, said common node being the output of said comparison circuit, and a switch arranged to connect the base of said output transistor to the output of said comparison circuit in response to a tri-state control signal which is asserted when said output transistor is to be placed in a high-impedance state. 5. The breakdown enhancement circuit of claim 4, wherein said switch comprises a FET connected between the base of said output transistor and said common node.6. 
The breakdown enhancement circuit of claim 4, wherein said output transistor has shorted base breakdown voltage parameters BVces and BVecs, and the allowable output swing at said output terminal when said output transistor is in its high-impedance mode is given by:(GND-BVecs) to (GND+BVces), where GND is equal to said ground potential. 7. A bipolar breakdown enhancement circuit for use with a tri-state output stage, comprising:a first supply voltage VDD, a second supply voltage VEE, a ground terminal which is at a ground potential GND, an NPN emitter follower-connected bipolar output transistor, the collector-emitter circuit of said NPN output transistor connected between VDD and an output terminal, a PNP emitter follower-connected bipolar output transistor, the collector-emitter circuit of said PNP output transistor connected between VEE and said output terminal, said output transistors arranged to produce an output voltage at said output terminal which is referenced to said ground potential, a first comparison circuit connected to said output voltage and said ground potential at respective inputs and which produces an output equal to the lesser of said output voltage and said ground potential, a second comparison circuit connected to said output voltage and said supply voltage at respective inputs and which produces an output equal to the greater of said output voltage and said supply voltage, a first switch arranged to connect the base of said NPN output transistor to the output of said first comparison circuit in response to a first tri-state control signal which is asserted when said NPN output transistor is to be placed in a high-impedance state, and a second switch arranged to connect the base of said PNP output transistor to the output of said second comparison circuit in response to a second tri-state control signal which is asserted when said PNP output transistor is to be placed in a high-impedance state. 8. The breakdown enhancement circuit of claim 7, wherein said first comparison circuit comprises first and second FETs, said first FET connected between said ground terminal and a common node and arranged to conduct a first current to said common node in response to said output voltage and said second FET connected between said output terminal and said common node and arranged to conduct a second current to said common node in response to said ground potential, said common node being the output of said first comparison circuit.9. The breakdown enhancement circuit of claim 8, wherein said first switch comprises a FET connected between the base of said NPN output transistor and said common node.10. The breakdown enhancement circuit of claim 7, wherein said second comparison circuit comprises first and second FETs, said first FET connected between said supply voltage and a common node and arranged to conduct a first current to said common node in response to said output voltage and said second FET connected between said output terminal and said common node and arranged to conduct a second current to said common node in response to said supply voltage, said common node being the output of said second comparison circuit.11. The breakdown enhancement circuit of claim 10, wherein said second switch comprises a FET connected between the base of said PNP output transistor and said common node.12. 
The breakdown enhancement circuit of claim 7, wherein said NPN output transistor has a shorted base breakdown voltage parameter BVecsNPN and said PNP output transistor has a shorted base breakdown voltage parameter BVecsPNP, and the allowable output swing at said output terminal when said output transistors are in a high-impedance state is given by:(-BVecsPNP+GND) to (+BVecsNPN+VDD). 13. The breakdown enhancement circuit of claim 7, wherein said first tri-state control signal is provided by a first inverter circuit and said second tri-state control signal is provided by a second inverter circuit, said first inverter circuit powered by the output (VPOS) of said second comparison circuit and said second inverter circuit powered by the output (VNEG) of said first comparison circuit such that said output transistors remain in their tri-state mode when VDD=GND.14. The breakdown enhancement circuit of claim 13, wherein said first inverter circuit comprises a PMOS FET and an NMOS FET connected in series between VPOS and GND, the junction of said PMOS and NMOS FETs providing said first tri-state control signal.15. The breakdown enhancement circuit of claim 13, wherein said second inverter circuit comprises a PMOS FET and an NMOS FET connected in series between said VDD and VNEG, the junction of said PMOS and NMOS FETs providing said second tri-state control signal.16. A bipolar breakdown enhancement circuit for use with a tri-state output stage, comprising:a first supply voltage VDD, a second supply voltage VEE, a ground terminal which is at a ground potential GND, an NPN emitter follower-connected bipolar output transistor, the collector-emitter circuit of said NPN output transistor connected between VDD and an output terminal, a PNP emitter follower-connected bipolar output transistor, the collector-emitter circuit of said PNP output transistor connected between VEE and said output terminal, said output transistors arranged to produce an output voltage at said output terminal which is referenced to said ground potential, a first comparison circuit which produces an output equal to the lesser of said output voltage and said ground potential, said first comparison circuit comprising first and second FETs, said first FET connected between said ground terminal and a first common node and arranged to conduct a current to said first common node in response to said output voltage and said second FET connected between said output terminal and said first common node and arranged to conduct a current to said first common node in response to said ground potential, said first common node being the output of said first comparison circuit, a second comparison circuit which produces an output equal to the greater of said output voltage and said supply voltage, said second comparison circuit comprising third and fourth FETs, said third FET connected between said supply voltage and a second common node and arranged to conduct a current to said second common node in response to said output voltage and said fourth FET connected between said output terminal and said second common node and arranged to conduct a current to said second common node in response to said supply voltage, said second common node being the output of said second comparison circuit, a fifth FET arranged to connect the base of said NPN output transistor to the output of said first comparison circuit in response to a tri-state control signal which is asserted when said NPN output transistor is to be placed in a high-impedance state, and a 
sixth FET arranged to connect the base of said PNP output transistor to the output of said second comparison circuit in response to a tri-state control signal which is asserted when said PNP output transistor is to be placed in a high-impedance state, wherein said NPN output transistor has a shorted base breakdown voltage parameter BVecsNPN and said PNP output transistor has a shorted base breakdown voltage parameter BVecsPNP, and the allowable output swing at said output terminal when said output transistors are in their high-impedance states is given by: (-BVecsPNP+GND) to (+BVecsNPN+VDD). 17. The breakdown enhancement circuit of claim 16, wherein said fifth FET receives a first tri-state control signal and said sixth FET receives a second tri-state control signal, said first tri-state control signal provided by a first inverter circuit and said second tri-state control signal provided by a second inverter circuit, said first inverter circuit powered by the output of said second comparison circuit and said second inverter circuit powered by the output of said first comparison circuit such that said output transistors remain in their tri-state mode when VDD=GND.
BACKGROUND OF THE INVENTION1. Field of the InventionThis invention relates to the field of bipolar transistor output stages, and particularly to methods of enhancing the output transistors' breakdown characteristics when the output stage is in a high impedance mode.2. Description of the Related ArtMany interface circuits require output stages which are capable of operating in a "tri-state" or "high impedance" mode, in which the output terminal presents a high impedance to the external circuitry connected to it. This external circuitry is often powered with supply voltages which differ from those used by the interface circuit. As a result, the output stage's output transistors may need to withstand voltages which swing above and below the interface circuit's power forms without breaking down.In a conventional bipolar interface circuit, the output stage consists of one or two bipolar output transistors which are emitter follower-connected to the output terminal. To put the output stage into a high impedance mode, the bases of the output transistors are left floating. When so arranged, the high impedance output voltage swing range is limited to the transistors' open base breakdown voltages BVeco (emitter reverse-biased compared to the base) and BVceo (collector reverse-biased compared to the base). For example, if an interface circuit is powered with a supply voltage VDD of 5 volts, and its output stage includes an emitter follower-connected NPN output transistor having a BVeco of 5 volts and a BVceo of 20 volts, the allowable output voltage swing is given by:(VDD-BVceo) to (VDD+BVeco)=-15 volts to +10 volts. If the interface circuit's output terminal is subjected to voltage swings beyond this range, the tri-state requirement may be violated and the output transistor may be damaged or destroyed.SUMMARY OF THE INVENTIONA bipolar transistor breakdown voltage enhancement circuit is presented which overcomes the limitations noted above. The invention extends the high impedance output voltage swing which can be tolerated at an interface circuit's output terminal driven by an emitter follower-connected bipolar transistor.The present invention is a circuit which, when an emitter follower-connected bipolar output transistor is to be in a high impedance state, connects the base of the output transistor to a voltage which extends the allowable output voltage swing. When the output transistor is an NPN, the invention ties the transistor's base to a voltage that is the lower of the interface circuit's output voltage or ground (or another low impedance voltage path). For a PNP output transistor, the transistor's base is tied to a voltage which is the higher of the interface circuit's output voltage or VDD (or another low impedance voltage path). By shorting the base in this way, the output transistor's breakdown characteristic is enhanced, such that the allowable voltage swing is extended by 50% or more.Further features and advantages of the invention will be apparent to those skilled in the art from the following detailed description, taken together with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIG. 1a is a schematic diagram of a bipolar breakdown enhancement circuit as might be used with an emitter follower-connected NPN output transistor.FIG. 1b is a schematic diagram of a bipolar breakdown enhancement circuit as might be used with an emitter follower-connected PNP output transistor.FIG. 
2 is a schematic diagram of a bipolar breakdown enhancement circuit as might be used with an output stage which includes both NPN and PNP emitter follower-connected output transistors.DETAILED DESCRIPTION OF THE INVENTIONFIG. 1a illustrates an exemplary embodiment of a breakdown enhancement circuit in accordance with the present invention, as might be used with an interface circuit's emitter follower-connected NPN output transistor. The collector-emitter circuit of an NPN output transistor Q1 is connected between a supply voltage VDD and an output terminal 10. Q1 is emitter-follower connected; i.e., a signal applied to Q1's base (from an input stage, for example) appears at its emitter, and thus at output terminal 10. The voltage at output terminal 10 is the interface circuit's output OUT, which is referenced to a voltage potential GND; GND is typically ground, but may be a non-zero potential as well.The invention extends the allowable voltage swing at output terminal 10 when Q1 is operated in a high impedance state. This is accomplished by shorting Q1's base to a particular voltage. A comparison circuit 12 produces an output at a node 14 which is the lesser of output voltage OUT and GND. A switch 16 is arranged to connect the base of output transistor Q1 to the output 14 of comparison circuit 12, in response to a tri-state control signal 18 which is asserted when Q1 is to be placed in a high-impedance state. Thus, when Q1 is placed in high impedance mode via tri-state control signal 18, the base of Q1 is shorted to the lesser of OUT and GND.This serves to extend Q1's breakdown voltage, and thus the voltage swing which can be tolerated at output terminal 10 without damaging Q1 or violating the tri-state requirement. In a conventional output stage, when Q1 is operated in high impedance mode, its base is left floating. As noted above, this results in a high impedance output voltage swing range which is limited to Q1's open base breakdown voltages BVeco and BVceo, with the allowable output voltage swing given by:(VDD-BVceo) to (VDD+BVeco).However, when configured as shown in FIG. 1a, the high impedance output voltage swing range is extended out to Q1's shorted base breakdown voltages BVecs and BVces. The shorted base breakdown voltages tend to be higher (typically about double) than the open base breakdown voltages. When closed, switch 16 connects Q1's base to a low impedance source (node 14), which ensures the higher breakdown voltages. When employing the present breakdown enhancement circuit, the high impedance output voltage swing range is given by:(VDD-BVces) to (VDD+BVecs).As an example, assume the following parameters:VDD=5 volts, BVeco=5 volts, BVceo=20 volts, BVecs=10 volts, BVces=40 volts.The allowable output voltage swing with Q1's base left open is given by:(VDD-BVceo) to (VDD+BVeco)=-15 volts to +10 volts.When used with the invention, however, the allowable output voltage swing is given by:(VDD-BVces) to (VDD+BVecs)=-35 volts to +15 volts.This extended voltage swing enables the interface circuit to tolerate a wider range of voltages on output terminal 10 without damaging or destroying Q1, or violating the tri-state requirement (which happens when breakdown occurs and current flows into or out of the output terminal).The invention finds particular applicability when the interface circuit is connected to external circuitry which is powered with different (and possibly higher) supply voltages. 
This might be found, for example, if the interface circuit is a bus transmitter or a power converter.Comparison circuit 12 is preferably implemented with a pair of field-effect transistors (FETs) MN1 and MN2. The drain-source circuit of MN1 is connected between GND and node 14, with its gate connected to OUT. Similarly, the drain-source circuit of MN2 is connected between OUT and node 14, with its gate connected to GND. When so arranged, the lesser of OUT and GND appears at node 14. When this voltage is connected to the base of Q1 via switch 16, the extended output voltage swing defined above is achieved.Switch 16 is preferably implemented with a FET MN3, having its drain-source circuit connected between Q1's base and node 14, and receiving tri-state control signal 18 at its gate.As noted above, when Q1 is to be in a high impedance state, its base is connected to the lesser of OUT and GND. This is required to maintain Q1 in its tri-state mode (i.e., not forward-biased) when OUT<GND-Vbe,on. As such, if the interface circuit has other low impedance voltage paths available which might be even lower than OUT or GND, comparison circuit 12 can be adapted to include these voltages in the comparison with OUT and GND, or to compare OUT with one of these voltages instead of with GND. In all cases, comparison circuit 12 is arranged to provide the lowest available voltage at node 14.FIG. 1b illustrates an implementation of the invention with an interface circuit's emitter follower-connected PNP output transistor. The interface circuit is connected between a supply voltage VDD and a ground potential GND (which may be non-zero). The collector-emitter circuit of a PNP output transistor Q2 is connected between GND and an output terminal 20; a signal applied to Q2's base appears at its emitter, and thus at output terminal 20. The voltage at output terminal 20 is the interface circuit's output OUT, which is referenced to voltage potential GND.Here, a comparison circuit 22 produces an output at a node 24 which is the greater of output voltage OUT and VDD. A switch 26 connects the base of output transistor Q2 to the output 24 of comparison circuit 22 in response to a tri-state control signal 28 which is asserted when Q2 is to be placed in a high-impedance state. This serves to extend Q2's breakdown voltage, and thus the voltage swing which can be tolerated at output terminal 20 without damaging Q2 or violating the tri-state requirement.If the base of Q2 is left floating, the high impedance output voltage swing range is limited to Q2's open base breakdown voltages BVeco and BVceo, with the allowable output voltage swing given by:(GND-BVeco) to (GND+BVceo).However, when configured as shown in FIG. 1b, the high impedance output voltage swing range is extended out to Q2's shorted base breakdown voltages BVecs and BVces. 
Thus, when employing the present breakdown enhancement circuit, the high impedance output voltage swing range is given by:(GND-BVecs) to (GND+BVces).As an example, assume the following parameters:GND=0 volts, BVeco=10 volts, BVceo=40 volts, BVecs=20 volts, BVces=80 volts.The allowable output voltage swing with Q2's base left open is given by:(GND-BVeco) to (GND+BVceo)=-10 volts to +40 volts.When used with the invention, however, the allowable output voltage swing is given by:(GND-BVecs) to (GND+BVces)=-20 volts to +80 volts.This extended voltage swing enables the interface circuit to tolerate a wider range of voltages on output terminal 20 without damaging or destroying Q2 or violating the tri-state requirement.Comparison circuit 22 is preferably implemented with a pair of FETs MP1 and MP2. The drain-source circuit of MP1 is connected between VDD and node 24, with its gate connected to OUT. Similarly, the drain-source circuit of MP2 is connected between OUT and node 24, with its gate connected to VDD. When so arranged, the greater of OUT and VDD appears at node 24. When this voltage is connected to the base of Q2 via switch 26, the extended output voltage swing defined above is achieved.Switch 26 is preferably implemented with a FET MP3, having its drain-source circuit connected between Q2's base and node 24, and receiving tri-state control signal 28 at its gate.As noted above, when Q2 is to be in a high impedance state, its base is connected to the greater of OUT and VDD. However, if the interface circuit has other low impedance voltage paths available which might be even higher than OUT or VDD, comparison circuit 22 can be adapted to include these voltages in the comparison with OUT and VDD, or to compare OUT with one of these voltages instead of with VDD. In all cases, comparison circuit 22 is arranged to provide the highest available voltage at node 24.Note that while the implementations of comparison circuits 12 and 22 and switches 16 and 26 shown in FIGS. 1a and 1b are preferred, the invention is not limited to these implementations. Many other circuit designs could be employed to provide the comparison function required of comparison circuits 12 and 22, and the switching function provided by switches 16 and 26.A preferred embodiment of the invention is shown in FIG. 2. Here, the interface circuit's output stage includes both an NPN output transistor Q3 and a PNP output transistor Q4. Q3's collector-emitter circuit is connected between a supply voltage VDD and an output terminal 30, and Q4's collector-emitter circuit is connected between a supply voltage VEE and output terminal 30. The interface circuit also includes a ground terminal GND, which is typically at ground potential but may also be non-zero. Q3 and Q4 are driven with respective drive signals and produce a GND-referenced output voltage OUT at output terminal 30 in response.Q3 and Q4 each have breakdown enhancement circuits as described above. For Q3, a comparison circuit 32 provides the lesser of GND and OUT at a node 34, and the base of Q3 is connected to node 34 via a switch 36 which is closed when Q3 is to be put into a high impedance state. Similarly, for Q4, a comparison circuit 37 provides the greater of VDD and OUT at a node 38, and the base of Q4 is connected to node 38 via a switch 40 which is closed when Q4 is to be put into a high impedance state.This arrangement serves to extend the high impedance voltage swing in the same manner as was discussed above. 
Without the invention, with the Q3 and Q4 bases left floating, the high impedance output voltage swing range is given by:(GND-BVeco(Q4)) to (VDD+BVeco(Q3)).However, when configured as shown in FIG. 2, the high impedance output voltage swing range is extended from BVeco to BVecs. Thus, when employing the present breakdown enhancement circuit, the high impedance output voltage swing range is given by:(GND-BVecs(Q4)) to (VDD+BVecs(Q3)).As an example, assume the following parameters:VDD=5 volts, GND=0 volts, BVeco(Q3)=5 volts, BVeco(Q4)=10 volts, BVecs(Q3)=10 volts, BVecs(Q4)=20 volts.The allowable output voltage swing with the bases of Q3 and Q4 left open is given by:(GND-BVeco(Q4)) to (VDD+BVeco(Q3))=-10 volts to +10 volts.When used with the invention, however, the allowable output voltage swing is given by:(GND-BVecs(Q4)) to (VDD+BVecs(Q3))=-20 volts to +15 volts.This extended voltage swing enables the interface circuit to tolerate a wider range of voltages on output terminal 30 without damaging or destroying Q3 or Q4, or violating the tri-state requirement.Comparison circuit 32 is preferably implemented as discussed above: a FET MN4 is connected between GND and node 34 and controlled by OUT, and a FET MN5 is connected between OUT and node 34 and controlled by GND. Switch 36 is also preferably implemented as discussed above, using a FET MN6.Comparison circuit 37 is similarly implemented: a FET MP4 is connected between VDD and node 38 and controlled by OUT, and a FET MP5 is connected between OUT and node 38 and controlled by VDD. Switch 40 is also preferably implemented as discussed above, using a FET MP6.In some applications, the interface circuit may be powered down, while the external circuitry connected to output terminal 30 continues to operate. Under these conditions, it may be desirable for the breakdown enhancement circuit to continue to operate, even when VDD=GND. One way in which this can be accommodated is illustrated in FIG. 2. Tri-state switch 36 is driven with an inverter circuit 42, and tri-state switch 40 is driven with an inverter circuit 44. A tri-state control signal 46 is connected to inverter circuit 42, and an inverted version 48 of tri-state control signal 46 is connected to inverter circuit 44. A low-going tri-state control signal 46 causes switches 36 and 40 to be turned on and Q3 and Q4 to be put into a high impedance state.Each of inverter circuits 42 and 44 is preferably made from a PMOS FET and an NMOS FET: FETs MP7 and MN7 make up inverter 42, and FETs MP8 and MN8 make up inverter 44. Inverter circuit 42 is powered via a connection to node 38 (labeled VPOS), and inverter circuit 44 is powered via a connection to node 34 (labeled VNEG). VPOS provides a positive voltage relative to VNEG, which enables inverter circuit 42 to operate (and switch 36 to be controlled) such that Q3's base can be connected to node 34. Similarly, VNEG provides a negative voltage relative to VPOS, which enables inverter circuit 44 to operate (and switch 40 to be controlled) such that Q4's base can be connected to node 38.When the interface circuit is unpowered (VDD=GND=VPOS), inverter transistors MP7 and MN7 are unpowered. However, the output of MP7/MN7 will drift to VDD=GND=VPOS and is pinned to within a Vbe drop of VDD=GND=VPOS through parasitic diode currents. This looks like a logic "high" signal to switch transistor MN6 (source=body=VNEG), turning MN6 on and shorting the base of Q3 to VNEG. 
Similar mechanisms cause switch transistor MP6 to be turned on when the interface circuit is unpowered, such that the base of Q4 is shorted to VPOS.Note that for proper operation, the drain-source breakdown voltages (BVds) of FETs MN1-MN8 and MP1-MP8 need to be sufficiently high. If the BVds voltages are too low, the output stage's tri-state breakdown voltage may be limited by the FETs instead of by the bipolar output transistors Q1-Q4.While particular embodiments of the invention have been shown and described, numerous variations and alternate embodiments will occur to those skilled in the art. Accordingly, it is intended that the invention be limited only in terms of the appended claims.
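For readers who want to sanity-check the swing arithmetic used in the examples above, here is a minimal sketch (Python; the function and parameter names are chosen for illustration and do not appear in the patent):

```python
# Allowable high-impedance output swing for an emitter follower-connected
# output transistor, per the formulas in the description above.

def npn_swing(vdd, bv_ce, bv_ec):
    """NPN: (VDD - BVce) to (VDD + BVec); pass open-base (BVceo, BVeco)
    or shorted-base (BVces, BVecs) parameters."""
    return (vdd - bv_ce, vdd + bv_ec)

def pnp_swing(gnd, bv_ec, bv_ce):
    """PNP: (GND - BVec) to (GND + BVce)."""
    return (gnd - bv_ec, gnd + bv_ce)

# FIG. 1a example: VDD=5 V, BVeco=5 V, BVceo=20 V, BVecs=10 V, BVces=40 V
print(npn_swing(5, 20, 5))    # open base:    (-15, 10)
print(npn_swing(5, 40, 10))   # shorted base: (-35, 15)

# FIG. 1b example: GND=0 V, BVeco=10 V, BVceo=40 V, BVecs=20 V, BVces=80 V
print(pnp_swing(0, 10, 40))   # open base:    (-10, 40)
print(pnp_swing(0, 20, 80))   # shorted base: (-20, 80)
```

The roughly doubled swing in the shorted-base cases reflects the rule of thumb stated above: shorted-base breakdown voltages are typically about double the open-base values.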
A no-flow underfill material (20) and process suitable for underfilling a bumped circuit component (10). The underfill material (20) initially comprises a dielectric polymer material (22) in which is dispersed a precursor capable of reacting to form an inorganic filler (24). The underfill process generally entails dispensing the underfill material (20) over terminals (18) on a substrate (16), and then placing the component (10) on the substrate (16) so that the underfill material (20) is penetrated by the bumps (12) on the component (10) and the bumps (12) contact the terminals (18) on the substrate (16). The bumps (12) are then reflowed to form solid electrical interconnects (26) that are encapsulated by the resulting underfill layer (28). The precursor may be reacted to form the inorganic filler (24) either during or after reflow.
A method of underfilling a bumped circuit component (10) with a no-flow underfill material (20), the method comprising the steps of dispensing the underfill material (20) over terminals (18) on a substrate (16), penetrating the underfill material (20) with bumps (12) on the circuit component (10) so that the bumps (12) contact the terminals (18), heating the bumps (12) so that the bumps (12) melt, and then cooling the molten bumps (12) so that the molten bumps (12) form solid electrical interconnects that are metallurgically bonded to the terminals (18), the underfill material (20) forming an underfill layer (28) that encapsulates the interconnects (26) and contacts both the circuit component (10) and the substrate (16), characterized in that: the no-flow underfill material (20) is formed to comprise a dielectric polymer material (22) containing a precursor, the underfill material (20) is substantially free of an inorganic filler when penetrated by the bumps (12), and the precursor is subsequently reacted to form an inorganic filler (24) having a CTE lower than the CTE of the dielectric polymer material (22).The method according to claim 1, characterized in that the precursor is an organometallic silicon compound and the inorganic filler (24) comprises silica particles (24).The method according to claim 2, characterized in that at least some of the silica particles (24) are nano-particles.The method according to claim 1, characterized in that the precursor is reacted to form the inorganic filler (24) during the heating step.The method according to claim 1, characterized in that the precursor is reacted to form the inorganic filler (24) during a second heating step performed after the cooling step.The method according to claim 5, characterized in that the dielectric polymer material (22) is cured during the second heating step.The method according to claim 1, characterized in that the dielectric polymer material (22) is cured during the heating step.A no-flow underfill material (20) comprising a dielectric polymer material (22), characterized in that: the underfill material (20) further comprises a precursor and is substantially free of an inorganic filler, the precursor being capable of reacting to form an inorganic filler (24) having a CTE lower than the CTE of the dielectric polymer material (22).The no-flow underfill material (20) of claim 8, characterized in that the underfill material (20) covers terminals (18) on a substrate (16).The no-flow underfill material (20) of claim 8, characterized in that the underfill material (20) is penetrated by bumps (12) on a circuit component (10) so that the bumps (12) contact the terminals (18) and the underfill material (20) completely fills a space defined by and between the circuit component (10) and the substrate (16).The no-flow underfill material (20) of claim 10, characterized in that the substrate (16) is an organic laminate substrate (16).The no-flow underfill material (20) of claim 8, characterized in that the precursor is an organometallic silicon compound and, if the underfill material (20) is sufficiently heated, the precursor reacts to form silica nano-particles (24) dispersed within the underfill material (20).The no-flow underfill material (20) of claim 8, characterized in that the dielectric polymer material (22) is an epoxy adhesive material.A no-flow underfill layer (28) between a flip chip die (10) and an organic laminate substrate (16), the die (10) being attached to the substrate (16) with solder connections (26) bonded to 
terminals (18) on the substrate (16), the underfill layer (28) encapsulating the solder connections (26) and completely filling a space defined by and between the die (10) and the substrate (16), characterized in that: the underfill layer (28) comprises silica nano-particles (24) dispersed in an adhesive material (22).The no-flow underfill layer (28) of claim 14, further characterized in that, other than the silica nano-particles (24), the underfill layer (28) is free of an inorganic filler having a CTE lower than the CTE of the adhesive material (22).
TECHNICAL FIELDThe present invention generally relates to underfill materials for flip chip devices. More particularly, this invention relates to a no-flow material for underfilling a flip chip device and an underfill method using the no-flow material.BACKGROUND OF THE INVENTIONUnderfilling is well known for promoting the reliability of flip chip components, such as flip chips and ball grid array (BGA) packages that are physically and electrically connected to traces on organic or inorganic circuit boards with numerous solder bump connections. A basic function of an underfill material is to reduce the thermal expansion mismatch loading on the solder joints that electrically and physically attach a component, e.g., die, to an inorganic or organic substrate, such as a reinforced epoxy resin laminate circuit board. Underfill processes generally involve using a specially formulated dielectric material to completely fill the gap between the die and substrate and encapsulate the solder bump connections of the die. In conventional practice, underfilling takes place after the die is attached to the substrate. The underfill material is placed along the perimeter of the die, and capillary action is relied on to draw the material beneath the die.Underfill materials preferably have a coefficient of thermal expansion (CTE) that is relatively close to that of the solder connections, die and substrate to minimize CTE mismatches that would otherwise reduce the thermal fatigue life of the solder connections. Dielectric materials having suitable flow and processing characteristics for capillary underfill processes are typically thermosetting polymers such as epoxies. To achieve an acceptable CTE, a fine particulate filler material such as silica is added to the underfill material to lower the CTE from that of the polymer to something that is more compatible with the CTEs of the die, circuit board, and the solder composition of the solder connections.For optimum reliability, the composition of a filled underfill material and the underfill process parameters must be carefully controlled so that voids will not occur in the underfill material beneath the die, and to ensure that a uniform fillet is formed along the entire perimeter of the die. Both of these aspects are essential factors in terms of the thermal cycle fatigue resistance of the solder connections encapsulated by the underfill. While highly-filled capillary-flow underfill materials have been widely and successfully used in flip chip assembly processes, expensive process steps are typically required to repeatably produce void-free underfills. Capillary underfill materials require the use of expensive dispensing equipment, and the capillary underfill process is a batch-like process that disrupts an otherwise continuous flip chip assembly process. Also, the adhesive strength of a capillary underfill material critically depends on the cleanliness of the die after reflow, necessitating costly cleaning equipment and complex process monitoring protocols. As such, the benefits of flip chip assembly using capillary underfill materials must be weighed against the burden of the capillary underfill process itself. These considerations limit the versatility of the flip chip underfill process to the extent that capillary underfilling is not practical for many flip chip applications.In view of the above, alternative underfill techniques have been developed. 
One such technique is to laminate a film of underfill material to a bumped wafer prior to die singulation and attachment. With this technique, referred to as wafer-applied underfill (WAU), the solder bumps on the wafer must be re-exposed, such as by burnishing or a laser ablation process. WAU has not been widely used because of the required burnishing step, which can yield inconsistent results, such as uneven underfill thickness. Another underfill technique involves the use of what has been termed a "no-flow" underfill material. In this technique, depicted in Figure 1, an underfill material 120 is deposited on the surface of a substrate 116. A bumped die 110 is then placed on the substrate 116, and force is applied to the die 110 to cause solder bumps 112 on the die 110 to penetrate the underfill material 120 and register with terminals 118 (e.g., traces or bond pads) on the substrate 116. Finally, the solder bumps 112 are reflowed to secure the die 110 to the substrate 116, during which time the underfill material 120 cures.In contrast to capillary-flow underfill materials, filler materials are not typically added to no-flow underfill materials because of the tendency for the filler material to hinder the flip chip assembly process. With reference again to Figure 1, filler particles 124 present in the underfill material 120 can impede the penetration of the underfill material 120 by the solder bumps 112. Filler particles 124 can also become trapped between the solder bumps 112 and the terminals 118 to interfere with the formation of a metallurgical bond, resulting in reduced reliability of the electrical connection. Without a filler material to reduce their CTE, no-flow underfill materials have not been practical for use in harsh environments, such as automotive applications for flip chips on laminate circuit boards.In view of the above, it would be desirable if an underfill material and process were available that were capable of achieving the product reliability obtainable with capillary-flow underfill materials and processes, but without the cost and processing limitations of these materials.BRIEF SUMMARY OF THE INVENTIONThe present invention provides a no-flow underfill material and process suitable for underfilling flip chips and other bumped components employed in harsh environments. The underfill material and process are adapted to incorporate a filler material in a manner that does not compromise component placement, solder connection and reliability, and therefore are suitable for use in underfill applications that have previously required capillary-flow underfill materials.The no-flow underfill material of this invention is initially in the form of a dielectric polymer material in which a precursor is dispersed. According to a preferred aspect of the invention, the underfill material can initially be free of any particulate filler material, such as an inorganic filler typically used to reduce the CTE of a capillary-flow underfill material. 
However, the precursor added to the underfill material of this invention is chosen on the basis of being capable of reacting to form an inorganic filler that, as a result of having a CTE lower than the CTE of the polymer material, is able to reduce the CTE of the underfill material.The underfill process of this invention generally entails forming the underfill material to comprise the polymer material containing the precursor, and then dispensing the underfill material over terminals on a substrate to which a bumped circuit component is to be mounted. The component is then placed on the substrate so that the underfill material is penetrated by bumps on the component and the bumps contact the terminals on the substrate. The bumps are then heated until molten (reflowed), followed by cooling so that the molten bumps form solid electrical interconnects that are metallurgically bonded to the terminals with electrical integrity. The underfill material forms an underfill layer that encapsulates the interconnects and contacts both the circuit component and the substrate. Either during heating of the bumps or during a subsequent heat treatment, the precursor is reacted to form an inorganic filler having a CTE lower than the CTE of the polymer material.According to a preferred aspect of the invention, the underfill layer is continuous, void-free, and completely fills the space defined by and between the component and the substrate. Because the underfill layer formed by the no-flow underfill material incorporates a filler material to reduce its CTE to something closer to that of the electrical (e.g., solder) connections it protects, the underfill material and process of this invention are capable of achieving the product reliability previously possible only with the use of highly-filled capillary-flow underfill materials and processes, but without the processing costs and limitations associated with capillary-flow underfill materials.Other objects and advantages of this invention will be better appreciated from the following detailed description.BRIEF DESCRIPTION OF THE DRAWINGSFigure 1 represents a step in a no-flow underfill process in accordance with the prior art, by which a flip chip die is placed on a substrate so that solder bumps on the die penetrate a filled no-flow underfill material.Figures 2 and 3 represent a sequence of steps in which an unfilled no-flow underfill material deposited on a substrate is penetrated by solder bumps on a flip chip die, and then a filler material is formed in situ within the underfill material during or following die attachment in accordance with the invention.DETAILED DESCRIPTION OF THE INVENTIONA no-flow underfill process in accordance with the present invention is schematically represented in Figures 2 and 3, by which a no-flow underfill material 20 is initially deposited in an unfilled condition (Figure 2), but is formulated to form a particulate filler material 24 in situ (Figure 3) following placement of a bumped circuit component 10. In Figure 2, the underfill material 20 is shown as having been deposited on a substrate 16, which may be a circuit board formed of various materials, such as a thin organic laminate printed wiring board (PWB). As shown in Figure 2, the circuit component, more specifically a flip chip die 10, is to be attached to the substrate 16 with solder bumps 12 formed on pads 14, such as under-bump metallurgy (UBM), defined on the die surface. 
The solder bumps 12 are intended to register with metal traces 18 (or other suitable terminals) on the substrate 16. While the underfill material 20 is represented as being deposited as a single layer, additional layers could be incorporated into the initial underfill structure.The underfill material 20 is represented in Figure 2 as not containing any filler material, more particularly, any inorganic filler particles capable of reducing the CTE of the underfill material 20 to something closer to those of the die 10, substrate 16 and solder bumps 12. As such, the underfill material 20 does not contain any filler particles of sufficient size and in a sufficient amount to significantly alter its CTE. Instead, the underfill material 20 is formulated to comprise a dielectric polymer material 22 containing a precursor capable of forming in situ the desired particulate filler material 24, which is shown in Figure 3 as being dispersed in an underfill layer 28 formed as a result of curing or otherwise solidifying the underfill material 20.The polymer material 22 is chosen to be compositionally and physically compatible with the materials it contacts, as well as have processing (e.g., cure) temperatures that are compatible with the die 10, the substrate 16, and the various components and circuit structures that might already be present on the substrate 16. Particularly suitable materials for the polymer material 22 are thermosetting polymers, such as epoxy adhesives. An example of a suitable epoxy adhesive material is commercially available from Loctite under the name FF2200. This material has a cure temperature of about 230°C (compatible with the solder reflow profile) and a glass transition temperature of about 130°C. Other suitable polymer materials having different compositions and different cure and glass transition temperatures could be used, depending on the particular application. Furthermore, a flux compound can be added to the polymer material 22, such as in an amount of about 13 to about 25 weight percent, to crack, displace and/or reduce oxides on the solder bumps 12 and traces 18 that would otherwise interfere with the ability of these features to metallurgically bond to each other.The precursor for the polymer material 22 is chosen in part on the basis of being able to form filler particles 24 having a CTE that is lower than that of the polymer material 22, with the effect of reducing the overall CTE of the underfill material 20 to something closer to the CTEs of the die 10, substrate 16, and solder bumps 12, for example, about 18 to 32 ppm/°C. Suitable precursors for use with this invention include organometallic compounds that can be thermally decomposed or otherwise reacted to form a metal oxide, an example of which is organometallic silicon (organosilicon) compounds capable of forming silica (SiO2) when heated to temperatures and for durations that can be withstood by the die 10, solder bumps 12 and substrate 16. A particular organometallic silicon compound believed to be suitable for this purpose is tetraethylorthosilicate. When heated to a temperature of about 220°C for about five minutes, this precursor thermally decomposes to form Si-O chains, whose condensation leads to the formation of silica nano-particles, i.e., particles whose major dimension is generally one hundred nanometers or less. 
When used in combination with an epoxy as the polymer material 22, thermal decomposition of the precursor can coincide with curing (polymerization) of the epoxy, which is believed to result in a structure having purely organic (epoxy) regions, glass-like inorganic (silica) regions, and mixed inorganic/organic regions.The underfill material 20 must contain a sufficient amount of the precursor so that the resulting underfill layer 28 will contain enough filler particles 24 to appropriately adjust the CTE of the underfill layer 28. For example, the underfill layer 28 should contain about 60 weight percent, preferably about 55 to about 65 weight percent of the filler particles 24, depending on their composition. Adding the above-identified organometallic silicon compound in an amount of about 30 to about 40 weight percent of the underfill material 20 is believed to be sufficient to form silica nano-particles in an amount of about 55 to about 65 weight percent of the underfill layer 28.As is apparent from Figure 2, when assembling the die 10 with the substrate 16, the solder bumps 12 must penetrate the underfill material 20 to make contact with their respective traces 18. An important feature of this invention is that registration of the solder bumps 12 with their traces 18 is not hindered by the presence of filler particles in the underfill material 20, as evident from Figure 2. During die placement, the underfill material 20 preferably forms a fillet 30 along the peripheral wall of the die 10, as depicted in Figure 3. Once the underfill material 20 is penetrated and the solder bumps 12 contact their respective traces 18, the assembly can undergo a conventional reflow process to melt and coalesce the solder bumps 12, which upon cooling form solder connections 26 that are metallurgically bonded to their traces 18.During reflow, which is performed at a temperature of at least 183°C and typically about 210°C to about 225°C if the solder bumps 12 are formed of eutectic tin-lead solder, the polymer material 22 of the underfill material 20 may undergo curing if formed of the above-noted epoxy adhesive, but in any event the underfill material 20 surrounds the molten solder bumps 12 and contacts both the lower surface of the die 10 and the upper surface of the substrate 16. During reflow, the precursor may also undergo thermal decomposition to form the filler particles 24, creating a relatively uniform dispersion of the filler particles 24 throughout the underfill layer 28 that lowers the overall CTE of the layer 28 to something closer to the CTE of the solder connections 26. Upon cooling the assembly, the underfill layer 28 encapsulates the solder connections 26 and completely fills the space defined by and between the die 10 and substrate 16, thereby bonding the die 10 to the substrate 16. If curing of the polymer material 22 and/or thermal decomposition of the precursor was incomplete or did not occur during reflow, the assembly can undergo a thermal treatment to complete either or both of these reactions.In view of the above, one can appreciate that the filled underfill layer 28 formed by the no-flow underfill material 20 and process of this invention can have a CTE that is sufficiently close to that of the solder connections 26 to improve the reliability of the flip chip assembly, while having a simplified manufacturing process and a reduced number of process steps as compared to capillary-flow underfill materials. 
As a result, the no-flow underfill material 20 and process of this invention enable CTE matching in a wider variety of flip chip applications than capillary-flow underfill materials and processes.While the invention has been described in terms of a preferred embodiment, it is apparent that other forms could be adopted by one skilled in the art. Accordingly, the scope of the invention is to be limited only by the following claims.
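To give a feel for why the filler loading discussed above matters, the following first-order sketch (Python) applies a simple rule-of-mixtures estimate. This is a generic approximation for intuition, not a method taught by the patent, and the matrix and filler CTE values are typical literature figures rather than values from the text. Note also that the patent's loading targets are weight percentages, whereas the rule of mixtures uses volume fractions; because silica is roughly twice as dense as epoxy, the two differ.

```python
# First-order rule-of-mixtures estimate of a filled underfill's CTE.
# Generic approximation for intuition only; all numbers below are
# assumed, typical literature values, not taken from the patent.

CTE_EPOXY = 70.0   # ppm/degC, typical unfilled epoxy (assumed)
CTE_SILICA = 0.5   # ppm/degC, fused silica (assumed)

def composite_cte(vf, cte_matrix=CTE_EPOXY, cte_filler=CTE_SILICA):
    """Linear rule of mixtures: CTE = Vf*CTEf + (1 - Vf)*CTEm,
    where Vf is the filler volume fraction."""
    return vf * cte_filler + (1.0 - vf) * cte_matrix

for vf in (0.0, 0.3, 0.5, 0.6):
    print(f"Vf = {vf:.1f}: ~{composite_cte(vf):.0f} ppm/degC")
# Vf = 0.0: ~70 ppm/degC  (unfilled: far above the solder joints)
# Vf = 0.3: ~49 ppm/degC
# Vf = 0.5: ~35 ppm/degC
# Vf = 0.6: ~28 ppm/degC  (approaching the ~18-32 ppm/degC target
#                          range cited in the description)
```

Under these assumptions, only a heavily loaded layer approaches the target CTE range, which is consistent with the description's emphasis on forming a large in-situ filler fraction.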
A processor implementing techniques for supporting configurable security levels for memory address ranges is disclosed. In one embodiment, the processor includes a processing core, a memory controller operatively coupled to the processing core to access data in an off-chip memory, and a memory encryption engine (MEE) operatively coupled to the memory controller. Responsive to detecting a memory access operation with respect to a memory location identified by a memory address within a memory address range associated with the off-chip memory, the MEE is to identify a security level indicator associated with the memory location based on a value stored in a security range register. The MEE is further to access at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator.
1. A processor comprising:a processing core; a memory controller, operatively coupled to the processing core, to access data in an off-chip memory; and a memory encryption engine (MEE) operatively coupled to the memory controller, the MEE to:in response to detecting a memory access operation with respect to a memory location identified by a memory address within a range of memory addresses associated with the off-chip memory, identify a security level indicator associated with the memory location based on a value stored in a security range register; and access at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator. 2. The processor of claim 1, wherein the security level indicator identifies an encryption-only memory range and a fully-protected memory range of the off-chip memory. 3. The processor of claim 2, wherein the security level indicator comprises one or more memory address ranges identifying at least one of the encryption-only memory range and the fully-protected memory range of the off-chip memory. 4. The processor of claim 2, wherein the security level indicator comprises a memory address that divides the off-chip memory into the encryption-only memory range and the fully-protected memory range. 5. The processor of claim 2, wherein, in response to detecting that the data item is to be transferred to the encryption-only memory range of the off-chip memory, the MEE is further to encrypt data associated with the data item. 6. The processor of claim 2, wherein, in response to detecting that the data item is to be transferred from the encryption-only memory range of the off-chip memory, the MEE is further to decrypt data associated with the data item. 7. The processor of claim 2, wherein, in response to detecting that the data item is to be transferred to the fully-protected memory range of the off-chip memory, the MEE is further to store encryption metadata associated with the data item. 8. The processor of claim 2, wherein, in response to detecting that the data item is to be transferred from the fully-protected memory range of the off-chip memory, the MEE is further to retrieve encryption metadata associated with the data item. 9. The processor of claim 2, wherein the data item is protected by an instruction set architecture (ISA) instruction associated with the processing core, the ISA instruction protecting the data item from software attacks, and wherein a memory range associated with the ISA instruction is stored in the fully-protected memory range to protect the data item from active and replay attacks. 10. A method comprising:in response to detecting a memory access operation with respect to a memory location identified by a memory address within a range of memory addresses associated with an off-chip memory device, identifying, by a processing device, a security level indicator associated with the memory location based on a value stored in a security range register; and accessing, by the processing device, at least a portion of a data item associated with the memory address range of the off-chip memory device in view of the security level indicator. 11. The method of claim 10, wherein the security level indicator identifies an encryption-only memory range and a fully-protected memory range of the off-chip memory device. 12. The method of claim 11, wherein the security level indicator comprises one or more memory address ranges identifying at least one of the encryption-only memory range and the fully-protected memory range of the off-chip memory device. 13. The method of claim 11, wherein the security level indicator comprises a memory address that divides the off-chip memory into the encryption-only memory range and the fully-protected memory range. 14. The method of claim 11, further comprising, in response to detecting that the data item is to be transferred to the encryption-only memory range of the off-chip memory device, encrypting data associated with the data item. 15. The method of claim 11, further comprising, in response to detecting that the data item is to be transferred from the encryption-only memory range of the off-chip memory device, decrypting data associated with the data item. 16. The method of claim 11, further comprising, in response to detecting that the data item is to be transferred to the fully-protected memory range of the off-chip memory, storing encryption metadata associated with the data item. 17. The method of claim 11, further comprising, in response to detecting that the data item is to be transferred from the fully-protected memory range of the off-chip memory, retrieving encryption metadata associated with the data item. 18. The method of claim 11, wherein the data item is protected by an instruction set architecture (ISA) instruction associated with the processing device, the ISA instruction protecting the data item from software attacks, and wherein a memory range associated with the ISA instruction is stored in the fully-protected memory range to protect the data item from active and replay attacks. 19. A non-transitory computer-readable storage medium comprising executable instructions that, when executed by a processing system, cause the processing system to:in response to detecting a memory access operation with respect to a memory location identified by a memory address within a range of memory addresses associated with an off-chip memory, identify a security level indicator associated with the memory location based on a value stored in a security range register; and access at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator. 20. A non-transitory computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform the method of any of claims 10-18. 21. A system on chip (SoC) comprising a plurality of functional units and a memory controller unit (MCU) coupled to the plurality of functional units, wherein the MCU comprises a memory encryption engine (MEE) configured to perform the method of any of claims 10-18. 22. The SoC of claim 21, further comprising the subject matter of any of claims 1-9. 23. An apparatus comprising:a plurality of functional units of a processor; means for, in response to detecting a memory access operation with respect to a memory location identified by a memory address within a range of memory addresses associated with an off-chip memory, identifying a security level indicator associated with the memory location based on a value stored in a security range register; and means for accessing at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator. 24. The apparatus of claim 23, further comprising the subject matter of any of claims 1-9. 25. A system comprising:a memory device; and a processor comprising a memory encryption engine (MEE), wherein the processor is configured to perform the method of any of claims 10-18.
Supporting configurable security levels for memory address rangesTECHNICAL FIELDEmbodiments of the present disclosure generally relate to computer systems and, more specifically, but without limitation, to supporting configurable security levels for memory address ranges.BACKGROUNDIt is increasingly important to ensure the security of execution and the integrity of applications and data within computer systems. Various known security technologies cannot fully secure applications and data in a flexible yet reliable manner.BRIEF DESCRIPTION OF THE DRAWINGSThe present disclosure will be more fully understood from the detailed description given below and from the accompanying drawings of various embodiments of the present disclosure. The drawings, however, should not be construed as limiting the present disclosure to the specific embodiments; they are for illustration and understanding only.FIG. 1 shows a block diagram of a processing device according to one embodiment.FIG. 2 illustrates a system including a memory for supporting configurable security levels for memory address ranges according to one embodiment.FIG. 3 schematically illustrates an example data structure employed for storing cryptographic metadata for implementing integrity and replay protection in accordance with one or more aspects of the present disclosure.FIG. 4 shows a flowchart of a method for supporting configurable security levels for memory address ranges according to one embodiment.FIG. 5A is a block diagram illustrating a microarchitecture of a processor according to one embodiment.FIG. 5B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline according to one embodiment.FIG. 6 is a block diagram illustrating a computer system according to one implementation.FIG. 7 is a block diagram illustrating a system in which embodiments of the present disclosure may be used.FIG. 8 is a block diagram illustrating a system in which embodiments of the present disclosure may be used.FIG. 9 is a block diagram illustrating a system in which embodiments of the present disclosure may be used.FIG. 10 is a block diagram illustrating a system-on-chip (SoC) in which embodiments of the present disclosure may be used.FIG. 11 is a block diagram illustrating a SoC design in which embodiments of the present disclosure may be used.FIG. 12 shows a block diagram illustrating a computer system in which embodiments of the present disclosure may be used.DETAILED DESCRIPTIONTechniques are described for supporting configurable security levels for memory address ranges. In one embodiment, a processor is provided. The processor may include processing logic configured to implement a software protection extension technology to provide memory protection. "Memory protection" can generally include protecting the confidentiality of data through encryption, integrity protection, and/or replay protection. Integrity protection can resist attacks in which, for example, an attacker modifies encrypted data in memory before it is decrypted. Replay protection can prevent attacks in which, for example, an attacker causes stale data to be decrypted again in order to gain unauthorized access to the protected data. Such a technology uses certain processor instructions that provide a secure, hardware-encrypted computing and storage environment (e.g., secure enclaves). 
The "secure enclave" herein will refer to a protected area within the application's address space that enables applications to keep secret and protect the integrity of their code. Access to data associated with secure enclaves from applications that are not resident in the enclave is prevented even if such access is attempted by a privileged application such as a BIOS, operating system, or virtual machine monitor. In addition, the data content of safe enclaves cannot be decrypted by privileged code or even by applying hardware probes to the memory bus.Several technologies can be implemented to protect data in safe enclaves. In one embodiment,may use memory cryptographic engine (MEE) hardware for data encryption and integrity and replay protection of the data. The MEE can encrypt data that is moved to untrusted system memory that may be outside the processor. When memory pages are stored in system memory, the MEE uses encryption mechanisms to encrypt the data and use other techniques to provide integrity and confidentiality. When reading data from system memory, the MEE decrypts the data and checks the integrity of the data, and then places the data in the processor's internal cache.In order to provide integrity protection of the protected data, the MEE may store a message authentication code (MAC) value in which each data line cached by the processor is moved to system memory. When reading a row of data from system memory, its integrity can be verified by calculating the MAC value of the data row and comparing the calculated MAC value with the stored MAC value. In some implementations, the MAC is stored in system memory and therefore also needs to be protected from being accessed or tampered with. Replay protection can be further provided by the processor by storing a version (VER) of data lines that increment each time a row of data is written back to system memory.In order to protect the MAC and VER values ​​themselves, a replay protection tree that includes multiple nodes can be used. Each node of the tree is verified by Embedded MAC (eMAC) based on the node content and the value of the counter stored on the next level of the tree. The values ​​of MAC, VER, and counter may be collectively referred to herein as "encrypted metadata." The replay protection tree can have a variable number of levels, including one type of counter at each level. In order to ensure replay protection, each data line read from the external memory is verified by traversing the tree from the end node of the VER value storing the data line. The number of levels in the replay protection tree linearly increases with the size of the protected memory area. For example, in order to protect a 64 GB memory area, a seven-level tree may be utilized, thus requiring a top-level counter of 8 KB. Thus, for each read of a row of data, seven additional rows of memory would need to be loaded from the system memory, thereby creating the required data memory bandwidth compared to the amount of data memory bandwidth that an unprotected memory read would require. The amount of seven times the overhead.In some cases, various systems may require a large amount of memory bandwidth. For example, some system applications may call high rates of data moving back and forth between them and memory. In addition, some systems may have usage models that require the entire application to operate in a safe enclave. 
In these situations, system performance is highly dependent on the amount of memory bandwidth available to the application, and the amount of memory bandwidth available to the application may be reduced by the bandwidth requirements associated with the replay protection tree.

Embodiments of the present disclosure provide a mechanism that allows a user, such as a system administrator, to configure the balance between security and performance. In some embodiments, a security range register may be configured to minimize the bandwidth and performance impact of using the replay protection tree to protect the data. In one embodiment, the security range register may include a security level indicator that may be used to divide the total range of protected memory into at least two subcategories: 1) an encryption-only range, and 2) a full protection range. In the full protection range, the processor may use cryptographic hardware such as the MEE to provide encryption, integrity, and replay protection. In the encryption-only range, the cryptographic hardware encrypts only the data lines before sending them to memory. Because the cryptographic hardware then does not require access to cryptographic metadata (e.g., MAC and VER data), it does not significantly affect the processor's bandwidth utilization, thereby improving system performance.

FIG. 1 shows a block diagram of a processing device 100 according to one embodiment that may support configurable security range functionality. Processing device 100 may generally be referred to as a "processor" or "CPU." A "processor" or "CPU" herein will refer to a device that is capable of executing instructions that encode arithmetic, logical, or I/O operations. In one illustrative example, the processor may include an arithmetic logic unit (ALU), a control unit, and a plurality of registers. In another aspect, the processor may include one or more processing cores; thus, the processor may be a single-core processor that is generally capable of processing a single instruction pipeline, or may be a multi-core processor that can process multiple instruction pipelines at the same time. In another aspect, the processor may be implemented as a single integrated circuit, as two or more integrated circuits, or as a multi-chip module (e.g., where individual microprocessor dies are included in a single integrated circuit package and therefore share a single socket).

As shown in FIG. 1, the processing device 100 may include various components. In one embodiment, processing device 100 may include one or more processor cores 110 and a memory controller unit 120, among other components, shown coupled to each other. Processing device 100 may also include communication components (not shown) that may be used to handle point-to-point communications between the various components of device 100. Processing device 100 may be used in a computing system (not shown) that includes, but is not limited to, a desktop computer, a tablet computer, a laptop computer, a netbook, a notebook computer, a personal digital assistant (PDA), a server, a workstation, a cell phone, a mobile computing device, a smart phone, an Internet device, or any other type of computing device. In another embodiment, the processing device 100 may be used in a system-on-a-chip (SoC) system. In one embodiment, the SoC may include the processing device 100 and memory. The memory of one such system is DRAM memory.
The DRAM memory can be located on the same chip as the processor and other system components. In addition, other logic blocks, such as a memory controller or a graphics controller, may also be located on the chip.

Processor core 110 may execute instructions for processing device 100. The instructions may include, but are not limited to, prefetch logic for fetching instructions, decode logic for decoding instructions, execution logic for executing instructions, and the like. The computing system may be representative of a processing system based on a family of processors and/or microprocessors available from Intel Corporation of Santa Clara, California, USA, although other systems (including computing devices having other microprocessors, engineering workstations, set-top boxes, and the like) may be used as well. In one embodiment, a sample computing system may execute a version of an operating system, embedded software, and/or a graphical user interface. Thus, embodiments of the present disclosure are not limited to any specific combination of hardware circuitry and software.

In an illustrative example, processing core 110 may have a microarchitecture that includes processor logic and circuitry for implementing an instruction set architecture (ISA). Multiple processor cores with different microarchitectures can share at least a portion of a common instruction set. For example, the same register architecture of the ISA can be implemented in different ways in different microarchitectures using various techniques, including dedicated physical registers and register renaming mechanisms (such as register alias tables (RATs), reorder buffers (ROBs), and retirement register files) with one or more dynamically allocated physical registers.

The memory controller 120 may perform functions that enable the processing device 100 to access and communicate with memory (not shown), including volatile memory and/or non-volatile memory. In one embodiment, memory controller 120 may be coupled to a memory encryption engine (MEE) 130. The MEE 130 herein will refer to hardware-implemented processing logic that encrypts data traffic between the processing device 100 and memory outside the processor chip, such as a static random access memory (SRAM) or a dynamic random access memory (DRAM). In some embodiments, the MEE 130 may be located on a processor chip associated with the processing device 100 while the memory is located off the processor chip.

Processing device 100 includes a cache unit 140 for caching instructions and/or data. Cache unit 140 includes, but is not limited to, a level one (L1) cache 142, a level two (L2) cache 144, and a last level cache (LLC) 146, or any other configuration of cache memory within processing device 100. As shown, cache unit 140 may be integrated into processing core 110. Cache unit 140 may store data (e.g., including instructions) utilized by one or more components of processing device 100. For example, cache unit 140 may cache data locally for faster access by the components of processing device 100. In some embodiments, L1 cache 142 and L2 cache 144 may transfer data between themselves and LLC 146. In one embodiment, the memory controller 120 may be connected to the LLC 146 and may be connected to the MEE 130. Memory controller 120 may access protected data that resides in memory that may be external to processing device 100.

In some embodiments, processing device 100 may utilize secure enclave technology to protect at least a portion of the memory in a protected environment.
In one embodiment, the processing device 100 may support a secure enclave (not shown), and the secure enclave may represent any logic, circuitry, hardware (such as the MEE 130), or other structure executed by the processing core 110 for creating and maintaining a protected area as part of the memory. Each instance of such an environment may be referred to as a secure enclave, although embodiments of the present invention are not limited to environments that use secure enclaves as the protected environment. In one embodiment, secure enclaves may be created and maintained using dedicated instructions from one processor family or from processors in another processor family.

Processing device 100 may implement several techniques for protecting memory data associated with secure enclaves. In one example, processing device 100 may use MEE 130 to implement a protection mechanism. For example, if a cache data line belongs to a secure enclave, the MEE 130 can protect the cache data line when it is evicted from the processing device 100 and moved to memory. In one embodiment, MEE 130 may use encryption of cache lines to resist passive attacks, for example where an attacker attempts to silently observe data lines as they move into and out of processing device 100. For example, to encrypt a cache line, MEE 130 may implement an algorithm that performs a series of transformations using a secret key (e.g., a cryptographic key) to transform understandable data, called "plaintext," into an unintelligible form called "ciphertext." In this example, decryption (inverse encryption) may be performed by the MEE using the cryptographic key in a series of transformations to transform the ciphertext back into plaintext. This is only an example, as other types of encryption logic may also be implemented by the MEE 130.

In another embodiment, in order to provide integrity/replay protection against active attacks (where an attacker may alter data stored in memory to cause activity that would not otherwise occur in processing device 100), MEE 130 may perform a counter-mode encryption technique, which requires that the encryption seed be unique in time and space for each data line. Spatial uniqueness can be achieved by using the address of the data line to be accessed, while temporal uniqueness can be achieved by using a counter that serves as the version of the data line. In one embodiment, MEE 130 also protects data lines by using a counter tree structure in which only the root of the tree is stored on the die and forms the root of trust (i.e., a trust boundary). The version of the data line is part of the counter tree structure. Alternatively, other protection mechanisms may be used for replay protection. For example, the MAC associated with a secure cache line may be stored on the die, because a successful replay attack would require replaying both the data line and its associated MAC.

In some embodiments, the processing device 100 may include a mechanism that divides the protected memory into two categories, including a first security level and a second security level. For example, the first security level may indicate a range of memory protected using encryption only, while the second security level may indicate a range of memory protected using the cryptographic metadata generated by the MEE 130. In one embodiment, the configurable security range register 150 may be used to divide the protected memory into at least two such subcategories.
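Before turning to the details of the range register, the counter-mode technique just described can be sketched in software. The sketch below is illustrative only and under stated assumptions: the keystream mixer is a hypothetical placeholder for the MEE's actual block cipher, and all function and variable names are invented for this example. What it demonstrates is the structure of the seed, which combines the line address (spatial uniqueness) with the per-line version counter (temporal uniqueness), so that re-encrypting the same plaintext at the same address with an incremented version yields different ciphertext.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the MEE block cipher: derives a keystream
 * block from (key, line address, version). A real engine would use a
 * standardized cipher in counter mode; this mixer is illustration only. */
static void keystream_block(const uint8_t key[16], uint64_t line_addr,
                            uint64_t version, uint8_t out[16])
{
    uint64_t s0 = line_addr;   /* spatial component of the seed  */
    uint64_t s1 = version;     /* temporal component of the seed */
    for (int i = 0; i < 16; i++) {
        s0 = (s0 ^ key[i]) * 0x9E3779B97F4A7C15ULL;
        s1 = (s1 + s0) ^ (s1 >> 29);
        out[i] = (uint8_t)(s1 >> 56);
    }
}

/* XOR with the keystream encrypts; applying it again decrypts. */
static void ctr_crypt(const uint8_t key[16], uint64_t line_addr,
                      uint64_t version, uint8_t data[16])
{
    uint8_t pad[16];
    keystream_block(key, line_addr, version, pad);
    for (int i = 0; i < 16; i++)
        data[i] ^= pad[i];
}

int main(void)
{
    uint8_t key[16], line[16];
    memcpy(key, "0123456789abcdef", 16);
    memcpy(line, "secret cacheline", 16);

    /* Write-back: encrypt with the line's current version. */
    ctr_crypt(key, 0x1000, /*version=*/7, line);
    printf("ciphertext byte 0: 0x%02x\n", line[0]);

    /* Read: decrypt with the same (address, version) pair. */
    ctr_crypt(key, 0x1000, /*version=*/7, line);
    printf("roundtrip: %.16s\n", line);
    return 0;
}
```

Incrementing the version on every write-back, as the text describes, is what prevents keystream reuse and hence replay of stale ciphertext.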
The configurable security range register 150 may be storage hardware included as part of the processing device 100. In one embodiment, the configurable security range register 150 may include a set of security level indicators (e.g., arrays, trees, other registers, or various other types of data structures) that may be configured or otherwise programmed with one or more values to specify memory ranges that divide system memory into subcategories. The configurable security range register 150 may work in conjunction with the ISA of the processing device 100. In one embodiment, when an enclave is launched, the secure enclave may set a value in the configurable security range register 150 and set a lock bit in the register 150, which blocks subsequent changes to the value unless a restart occurs.

FIG. 2 illustrates a system 200 including a memory 201 for supporting a configurable level of security for a range of memory addresses according to one embodiment. In this example, the memory 201 includes a protected address range that is divided into multiple security level subcategories, such as a full protection range (e.g., data encrypted using the cryptographic metadata generated by the MEE 130) and an encryption-only range (e.g., data encrypted without generating cryptographic metadata). As described above, the configurable security range register 150 of the processing device 100 may include a security level indicator 202 to identify which memory ranges of the memory 201 are protected as full protection ranges and which as encryption-only ranges. In some embodiments, the security level indicator 202 may include, but is not limited to, an array of data, a tree, other registers that may be set to a value such as a memory address, or different types of data structures.

In some embodiments, the security level indicator 202 of the configurable security range register 150 may be programmed in several ways to identify the security level associated with a memory range in the memory 201. In one embodiment, the security range register 150 may be configured to explicitly identify the protected encryption-only range. In that case, any data item stored in a location outside this range receives full protection. For example, the security level indicator 202 of the security range register 150 may include a data structure that can be set to memory location addresses corresponding to the start memory address 203 and the end memory address 205 of an encryption-only memory range, such as the memory range 210. This configuration identifies this encryption-only range to the processing device 100 for any data stored in the encryption-only range of the memory. In this regard, encryption logic (e.g., producing ciphertext) may be used to encrypt/decrypt data stored in this range. When data is stored in memory locations outside of this range, such as memory ranges 220 and 230, the data can be fully protected using the cryptographic metadata generated by the MEE 130.

In another embodiment, the security level indicator 202 may include a data structure that is set to memory addresses that identify the boundaries between sub-ranges. For example, the security level indicator 202 may include a data structure that may be set to memory addresses 203, 205, and 207, where each memory address indicates a start or end boundary of a sub-range.
In this example, the security level indicator 202 may also include values indicating, for example, that the sub-range between memory addresses 203 and 205 is an encryption-only range of the protected memory and/or that the sub-range between memory addresses 205 and 207 is a full protection range of the protected memory.

In yet another embodiment, the security level indicator 202 may include a data structure that is configured to identify a memory address in the memory 201 that serves as a boundary point dividing the memory 201 into a plurality of security level sub-ranges. For example, the data structure may include a value corresponding to a memory address 203 that divides the protected range of the memory 201 into an encryption-only range and a full protection range. In this example, the security level indicator 202 may include a direction bit, which may be configured to identify which side of the boundary is the encryption-only range and which side is the full protection range. Data moved to and from these identified ranges may be encrypted according to the values set in the security level indicator 202. In addition, other techniques may be employed to utilize the security level indicator 202 of the configurable security range register 150 to identify different security level sub-ranges of the protected range of the memory 201.

As shown in FIG. 2, the encryption logic of the processing device 100 may be used to protect application code and data (such as applications/data associated with secure enclaves). For example, any data line belonging to an encryption-only memory location that is evicted from the processor chip will be encrypted (e.g., into ciphertext) by the processor encryption logic before being stored in the memory 201, and decrypted before being returned to the processing core 110. Thus, there is no need to additionally store and retrieve the cryptographic metadata used by the full protection range. Although the encrypted data/ciphertext maintains the confidentiality of the data, it does not protect the memory from active or replay attacks; attackers, however, cannot easily read the data. In addition, data can be protected by certain processor instructions that guard the data against many software attacks, and memory used by the ISA for such protection (e.g., memory range 230) is kept in the full protection range to avoid any active or replay attacks on that range of the memory.

In some embodiments, the full protection range may be configured to occupy a certain percentage (e.g., less than 10 percent) of the total range of protected memory 201. This percentage can be configured to save memory bandwidth. For example, if the total protected range includes 64 GB of memory, the full protection range may include 1 GB of that memory, and the encryption-only range may include the remaining 63 GB. In some embodiments, all the memory that controls the operation of the ISA is located in this full protection range (e.g., memory range 230). This can ensure temporal and spatial locality when multiple threads are running in that memory. Temporal locality refers to multiple accesses to a particular memory location within a relatively small period of time. Spatial locality refers to accesses to relatively close memory locations within a relatively small period of time.

In some embodiments, memory located in the full protection range is fully protected by using the cryptographic metadata.
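To make the boundary-plus-direction-bit encoding described above concrete, the following is a minimal software model, a sketch only: the register layout, field names, and lock semantics here are assumptions invented for illustration, not the actual format of the security range register 150. It classifies an address as encryption-only or fully protected, and models a lock bit that blocks reprogramming once set.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model of a security range register: a single boundary
 * address plus a direction bit selecting which side is encryption-only. */
struct security_range_reg {
    uint64_t boundary;        /* dividing address within the protected range */
    bool below_is_enc_only;   /* direction bit */
    bool locked;              /* set at enclave launch; blocks reprogramming */
};

enum sec_level { ENCRYPT_ONLY, FULL_PROTECTION };

static enum sec_level classify(const struct security_range_reg *reg,
                               uint64_t addr)
{
    bool below = addr < reg->boundary;
    return (below == reg->below_is_enc_only) ? ENCRYPT_ONLY
                                             : FULL_PROTECTION;
}

/* Writes are ignored once the lock bit is set (until a restart). */
static bool program_reg(struct security_range_reg *reg,
                        uint64_t boundary, bool below_is_enc_only)
{
    if (reg->locked)
        return false;
    reg->boundary = boundary;
    reg->below_is_enc_only = below_is_enc_only;
    reg->locked = true;
    return true;
}

int main(void)
{
    struct security_range_reg reg = {0};

    /* Low 1 GiB fully protected; everything above it encryption-only,
     * mirroring the 1 GB / 63 GB split mentioned in the text. */
    program_reg(&reg, 0x40000000ULL, false);

    printf("0x1000     -> %s\n",
           classify(&reg, 0x1000) == ENCRYPT_ONLY ? "encrypt-only" : "full");
    printf("0x80000000 -> %s\n",
           classify(&reg, 0x80000000ULL) == ENCRYPT_ONLY ? "encrypt-only" : "full");
    printf("reprogram accepted? %s\n",
           program_reg(&reg, 0, true) ? "yes" : "no (locked)");
    return 0;
}
```

The same classify-style check, performed in hardware on every line moved to or from memory, is what steers a request toward the full MEE path or the cheaper encryption-only path.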
In one embodiment, in addition to protecting the confidentiality of data, MEE 130 may generate cryptographic metadata for data integrity and replay protection. The MEE 130 may then store the cryptographic metadata in a location in the full protection range, such as in the memory range 220. An example of a data structure for storing cryptographic metadata (e.g., a replay protection tree) is discussed below with reference to FIG. 3.

FIG. 3 schematically illustrates an example data structure 300 employed for storing cryptographic metadata for implementing integrity and replay protection in accordance with one or more aspects of the present disclosure. In this example, data structure 300 may include, for example, a replay protection tree generated by MEE 130 of processing device 100. The replay protection tree structure includes a hierarchy of tree node levels. The top (root) level includes a series of on-die counters (i.e., L3 counters 310) that are stored in the internal memory of the processor die associated with the processing device 100. Internal storage includes, but is not limited to, on-die static random access memory (SRAM), register files, and any other suitable memory in the processor die. Since the L3 counters 310 are on the processor die, their contents are trusted and protected from passive and active attacks.

In one embodiment, each L3 counter 310 is linked to a block of L2 intermediate metadata 315 that includes a series of L2 counters 320. Each L2 counter 320 is linked to a block (not shown) of L1 intermediate metadata, which includes a series of L1 counters (not shown). In this example, the blocks representing the L1 intermediate metadata and the L1 counters are omitted from FIG. 3 for simplicity of explanation. Each L1 counter is linked to a block of L0 intermediate metadata 325 that includes a series of L0 counters 330. Each L0 counter 330 is linked to a version block 340, which includes a series of version nodes 345. Each version node 345 is associated with an encrypted data line 360 in the protected area of memory 201. The content of a version node 345 is the version of the associated data line, which provides the temporal component of the encryption seed in counter-mode encryption. The lower-level counters (including the L2, L1, and L0 counters, and the version nodes 345) reside outside the processor die and are therefore vulnerable to attack, so an embedded message authentication code (MAC) (shown as hatched blocks) is used to encode each counter and each version node to ensure their integrity.

Before a data line is moved to the memory 201, it can be encrypted by the MEE 130. For reads from memory, an encrypted data line 360 may be decrypted by MEE 130 before being passed to processing core 110. Each encrypted data line 360 is encoded with a MAC node 350 that includes a MAC calculated from the content of the data line 360. Whenever a data line is written back to memory, MEE 130 updates the MAC to reflect the most recent data value stored in memory 201. When reading a data line from the memory 201, the MEE 130 verifies its integrity by calculating the MAC value of the line and comparing the calculated MAC value with the stored MAC value.
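The per-line MAC check just described can be sketched as follows. This is a minimal sketch, not the MEE's real algorithm: the keyed FNV-style mixer stands in for whatever MAC function the hardware actually uses, and the names are invented for this example. It shows the write path (compute and store a MAC alongside the line) and the read path (recompute and compare), and how a single flipped bit in memory is detected.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define LINE_BYTES 64

/* Hypothetical keyed MAC over one data line: a toy FNV-style mixer
 * standing in for the MEE's actual MAC algorithm, for illustration only. */
static uint64_t line_mac(uint64_t key, const uint8_t line[LINE_BYTES])
{
    uint64_t h = key ^ 0xCBF29CE484222325ULL;
    for (int i = 0; i < LINE_BYTES; i++)
        h = (h ^ line[i]) * 0x100000001B3ULL;
    return h;
}

/* Write path: compute the MAC to store alongside the (encrypted) line. */
static uint64_t store_line(uint64_t key, const uint8_t line[LINE_BYTES])
{
    return line_mac(key, line);
}

/* Read path: recompute the MAC and compare with the stored value;
 * a mismatch indicates the line was tampered with in memory. */
static int verify_line(uint64_t key, const uint8_t line[LINE_BYTES],
                       uint64_t stored_mac)
{
    return line_mac(key, line) == stored_mac;
}

int main(void)
{
    uint8_t line[LINE_BYTES] = {0};
    memcpy(line, "protected data", 14);

    uint64_t mac = store_line(0xFEEDFACEULL, line);
    printf("intact line verifies:   %d\n", verify_line(0xFEEDFACEULL, line, mac));

    line[0] ^= 1;  /* simulate an active attack flipping one bit */
    printf("tampered line verifies: %d\n", verify_line(0xFEEDFACEULL, line, mac));
    return 0;
}
```

Note that a MAC alone detects modification but not replay of an old (line, MAC) pair; that is exactly the gap the version counters and the replay protection tree close.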
Replay protection may be further provided by storing, for each data line, a version (VER) value 345 that is incremented whenever the data line is written back to the memory 201.

When the processing device 100 performs a write operation to write one of the encrypted data lines 360 to the protected memory region residing in the memory 201 (e.g., when a data line is evicted from the on-die LLC 146 to the protected area in memory 201), the MEE 130 updates the MAC 350 associated with the data line and increments the version 345 of the data line as well as the L0, L1, L2, and L3 counters (310, 320, 330) associated with that data line. This update process continues from the bottom level of the counter tree up to the root level of the L3 counters, which are securely stored on the processor die and are therefore guaranteed to be protected against attacks. The counter at each level of the counter tree acts as the version for the next lower level, ending with the version node 345 storing the version of the data line. Therefore, when writing a data line, the branch of all counters (including the version) and their associated embedded MACs, identified by the address of the data line, is updated to reflect the version update.

In order to ensure replay protection, whenever a data line is loaded from the protected area, the tree nodes up to the root of the counter tree are used to verify the authenticity of that data line. A mismatch at any level indicates a potential attack and raises a security exception, thereby defeating the attack. Specifically, when the processing device 100 performs a read operation on one of the encrypted data lines 360, the MEE 130 identifies the version of the data line and the L0, L1, L2, and L3 counters (310, 320, 330). The read operation does not change the version or the values of the L0, L1, L2, and L3 counters. After the read operation, the MEE 130 verifies the MAC 350 associated with the data line. In addition, the MEE 130 verifies the embedded MAC associated with each of the version and the L0, L1, L2, and L3 counters. This verification process continues from the bottom level of the counter tree up to the secure root counters L3.

FIG. 4 shows a flowchart of a method for supporting a configurable level of security for a range of memory addresses according to one embodiment. Method 400 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, programmable logic, microcode, etc.), software (such as instructions running on a processing device), firmware, or a combination thereof. In one embodiment, processor core 110 of processing device 100 may perform method 400. Although shown in a particular order or sequence, the order of the processes may be modified unless otherwise indicated. Thus, the illustrated implementations should be understood only as examples; the illustrated processes may be performed in a different order, and some processes may be performed in parallel. In addition, one or more processes may be omitted in various embodiments. Thus, not all processes are required in every implementation. Other process flows are possible.

Method 400 begins at block 410, where a register can be identified that indicates whether a protected memory range is encryption-only or fully protected. For example, processing device 100 may identify the security level indicator 202 in the configurable security range register 150.
As described above, the set of indicators 202 may include one or more data structures set to values that indicate whether the memory data is protected as encryption-only or fully protected. At block 415, the method 400 branches depending on whether the data item is being moved to off-chip memory or read from off-chip memory. If the data is being moved to memory, method 400 continues to block 420; otherwise, method 400 continues to block 460.

At block 420, the method 400 determines the security level of the data item being moved into memory based on the register identified at block 410. If it is determined that the data item is being moved to the full protection range of the memory, the method 400 continues to block 430; otherwise, the method 400 continues to block 450. At block 430, the cryptographic hardware of processing device 100, such as MEE 130, may retrieve the cryptographic metadata for the data item from a local metadata cache. If a cache miss occurs, the cryptographic hardware can fetch the metadata from off-chip memory. At block 440, the cryptographic metadata may be stored in the local metadata cache, and the data item may be encrypted and sent to memory. If it is determined that the data item is being moved to the encryption-only range of the memory, then at block 450 the cryptographic hardware of the processing device 100 is used to encrypt the data item without generating cryptographic metadata.

At block 460, method 400 determines the security level of the data item read from memory based on the register identified at block 410. If it is determined that the data item is being read from the full protection range of the memory, the method 400 continues to block 470; otherwise, method 400 continues to block 490. At block 470, the MEE 130 retrieves the cryptographic metadata for the data item from memory, and at block 480 uses the cryptographic metadata to verify the data item. If it is determined that the data item is being read from the encryption-only range of the memory, then at block 490 the processing device 100 uses the cryptographic hardware to decrypt the data item.

FIG. 5A is a block diagram illustrating a microarchitecture of a processor 500 that implements techniques for supporting a configurable level of security for a range of memory addresses in accordance with one embodiment of the present disclosure. In particular, processor 500 depicts an in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor in accordance with at least one embodiment of the present disclosure.

The processor 500 includes a front end unit 530 coupled to an execution engine unit 550, both of which are coupled to a memory unit 570. The processor 500 may include a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As another option, the processor 500 may include a dedicated core such as, for example, a network or communications core, a compression engine, a graphics core, and the like. In one embodiment, processor 500 may be a multi-core processor or may be part of a multi-processor system.

The front end unit 530 includes a branch prediction unit 532 coupled to an instruction cache unit 534, which is coupled to an instruction translation lookaside buffer (TLB) 536, which is coupled to an instruction fetch unit 538, which is coupled to a decoding unit 540.
The decoding unit 540 (also referred to as a decoder) may decode an instruction and generate as output one or more micro-operations, microcode entry points, microinstructions, other instructions, or other control signals that are decoded from, or otherwise reflect, or are derived from, the original instruction. The decoder 540 may be implemented using a variety of different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read-only memories (ROMs), and the like. The instruction cache unit 534 is further coupled to the memory unit 570. The decoding unit 540 is coupled to a rename/allocator unit 552 in the execution engine unit 550.

The execution engine unit 550 includes the rename/allocator unit 552, coupled to a retirement unit 554 and a set of one or more scheduler units 556. The scheduler units 556 represent any number of different schedulers, including reservation stations (RSs), a central instruction window, and the like. The scheduler units 556 are coupled to physical register file units 558. Each of the physical register file units 558 represents one or more physical register files, where different physical register files store one or more different data types (such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc.) and states (such as an instruction pointer, which is the address of the next instruction to be executed). The physical register file units 558 are overlapped by the retirement unit 554 to illustrate the various ways in which register renaming and out-of-order execution may be implemented (e.g., using reorder buffers and retirement register files; using future files, history buffers, and retirement register files; using register maps and pools of registers; etc.). The execution engine unit 550 may include, for example, a power management unit (PMU) 590 that manages the power functions of the functional units.

In general, the architectural registers are visible from outside the processor or from the programmer's perspective. The registers are not limited to any known specific type of circuit. Many different types of registers are applicable as long as they can store and provide the data described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated physical registers and dynamically allocated physical registers, and the like. The retirement unit 554 and the physical register file units 558 are coupled to the execution cluster 560. The execution cluster 560 includes a set of one or more execution units 562 and a set of one or more memory access units 564. The execution units 562 may perform a variety of operations (e.g., shifts, addition, subtraction, multiplication) on a variety of data types (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point).

While some embodiments may include a number of execution units dedicated to a specific function or set of functions, other embodiments may include only one execution unit, or multiple execution units that all perform all functions.
The scheduler units 556, physical register file units 558, and execution cluster 560 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating-point/packed integer/packed floating-point/vector integer/vector floating-point pipeline, and/or a memory access pipeline, each having its own scheduler unit, physical register file unit, and/or execution cluster; and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access units 564). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 564 is coupled to the memory unit 570, which may include a data prefetcher 580, a data TLB unit 572, a data cache unit (DCU) 574, and a level 2 (L2) cache unit 576, to name a few examples. In some embodiments, the DCU 574 is also referred to as a first level data cache (L1 cache). The DCU 574 can handle multiple outstanding cache misses and continue to serve incoming stores and loads. It also supports maintaining cache coherency. The data TLB unit 572 is a cache used to improve virtual address translation speed by mapping virtual and physical address spaces. In one exemplary embodiment, the memory access units 564 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 572 in the memory unit 570. The L2 cache unit 576 may be coupled to one or more other levels of cache and eventually to main memory.

In one embodiment, the data prefetcher 580 speculatively loads/prefetches data to the DCU 574 by automatically predicting which data the program is about to consume. Prefetching may refer to transferring data stored in one memory location of a memory hierarchy (e.g., a lower level cache or memory) to a higher-level memory location that is closer to the processor (e.g., yielding lower access latency) before the data is actually demanded by the processor. More specifically, prefetching may refer to the early retrieval of data from one of the lower level caches/memory to a data cache and/or prefetch buffer before the processor issues a demand for the specific data being returned.

In one implementation, the processor 500 may be the same as the processing device 100 described with reference to FIG. 1. In particular, the data TLB unit 572 may be the same as the TLB 155 described with reference to FIG. 1, to implement in the processing device the techniques for supporting a configurable level of security for memory address ranges described with reference to implementations of the present disclosure.

The processor 500 may support one or more instruction sets (such as the x86 instruction set (with some extensions added in newer versions), the MIPS instruction set of MIPS Technologies, Inc. of Sunnyvale, California, and the ARM instruction set of ARM Holdings (with optional additional extensions such as NEON)).

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads) and may do so in a variety of ways, including time-sliced multithreading, simultaneous
multithreading (where a single physical core provides a logical core for each of the threads that the physical core is simultaneously multithreading), or a combination thereof (e.g., time-sliced fetch and decode followed by simultaneous multithreading, as in hyper-threading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may also be used in an in-order architecture. While the illustrated embodiment of the processor includes separate instruction and data cache units and a shared L2 cache unit, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

FIG. 5B is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline implemented by the processor 500 of FIG. 5A in accordance with some embodiments of the present disclosure. The solid-line boxes in FIG. 5B illustrate the in-order pipeline, while the dashed boxes illustrate the register renaming, out-of-order issue/execution pipeline. In FIG. 5B, the processor pipeline 501 includes a fetch stage 502, a length decode stage 504, a decode stage 506, an allocation stage 508, a renaming stage 510, a scheduling (also known as dispatch or issue) stage 512, a register read/memory read stage 514, an execute stage 516, a write back/memory write stage 518, an exception handling stage 522, and a commit stage 524. In some embodiments, the ordering of stages 502-524 may be different from that illustrated and is not limited to the specific ordering shown in FIG. 5B.

FIG. 6 illustrates a block diagram of the microarchitecture of a processor 600 that includes logic circuits to implement techniques for supporting a configurable level of security for a range of memory addresses in accordance with one embodiment of the present disclosure. In some embodiments, an instruction according to one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as many data types, such as single and double precision integer and floating point data types. In one embodiment, the in-order front end 601 is the part of the processor 600 that fetches instructions to be executed and prepares them for later use in the processor pipeline.

The front end 601 may include several units. In one embodiment, the instruction prefetcher 626 fetches instructions from memory and feeds them to an instruction decoder 628, which in turn decodes or interprets them. For example, in one embodiment, the decoder decodes a received instruction into one or more machine-executable operations called "microinstructions" or "micro-operations" (also known as micro-ops or uops). In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the microarchitecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 630 takes decoded micro-ops and assembles them into program-ordered sequences or traces in the micro-operation queue 634 for execution.
When the trace cache 630 encounters a complex instruction, the microcode ROM 632 provides the uops needed to complete the operation.

Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 628 accesses the microcode ROM 632 to execute the instruction. For one embodiment, an instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 628. In another embodiment, an instruction can be stored within the microcode ROM 632 should a number of micro-ops be needed to accomplish the operation. The trace cache 630 refers to an entry point programmable logic array (PLA) to determine the correct microinstruction pointer for reading the microcode sequences from the microcode ROM 632 to complete one or more instructions in accordance with one embodiment. After the microcode ROM 632 finishes sequencing micro-ops for an instruction, the front end 601 of the machine resumes fetching micro-ops from the trace cache 630.

The out-of-order execution engine 603 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and reorder the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each micro-op needs in order to execute. The register renaming logic renames logical registers onto entries in a register file. The allocator also allocates an entry for each micro-op in one of two micro-op queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: the memory scheduler, the fast scheduler 602, the slow/general floating point scheduler 604, and the simple floating point scheduler 606. The micro-op schedulers 602, 604, 606 determine when a micro-op is ready to execute based on the readiness of its dependent input register operand sources and the availability of the execution resources the micro-op needs to complete its operation. The fast scheduler 602 of one embodiment can schedule on each half of the main clock cycle, while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule micro-ops for execution.

In execution block 611, register files 608 and 610 sit between the schedulers 602, 604, 606 and the execution units 612, 614, 616, 618, 620, 622, and 624. There are separate register files 608, 610 for integer and floating point operations, respectively. Each register file 608, 610 of one embodiment also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent micro-ops. The integer register file 608 and the floating point register file 610 are also capable of communicating data with each other. For one embodiment, the integer register file 608 is split into two separate register files: one register file for the low-order 32 bits of data and a second register file for the high-order 32 bits of data.
The floating point register file 610 of one embodiment has 128-bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

The execution block 611 contains the execution units 612, 614, 616, 618, 620, 622, 624, where the instructions are actually executed. This block includes the register files 608, 610 that store the integer and floating point data operand values that the microinstructions need to execute. The processor 600 of one embodiment comprises a number of execution units: an address generation unit (AGU) 612, an AGU 614, a fast ALU 616, a fast ALU 618, a slow ALU 620, a floating point ALU 622, and a floating point move unit 624. For one embodiment, the floating point execution blocks 622, 624 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 622 of one embodiment includes a 64-bit by 64-bit floating point divider to execute divide, square root, and remainder micro-ops. For various embodiments of the present disclosure, instructions involving a floating point value may be handled with the floating point hardware.

In one embodiment, the ALU operations go to the high-speed ALU execution units 616, 618. The fast ALUs 616, 618 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 620, as the slow ALU 620 includes integer execution hardware for long-latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 612, 614. For one embodiment, the integer ALUs 616, 618, 620 are described in the context of performing integer operations on 64-bit data operands. In alternative embodiments, the ALUs 616, 618, 620 can be implemented to support a variety of data bit widths, including 16, 32, 128, 256, and so on. Similarly, the floating point units 622, 624 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 622, 624 can operate on 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.

In one embodiment, the micro-op schedulers 602, 604, 606 dispatch dependent operations before the parent load has finished executing. Because micro-ops are speculatively scheduled and executed in processor 600, the processor 600 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes the instructions that used the incorrect data. Only the dependent operations need to be replayed; the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instruction sequences for text string comparison operations.

According to various embodiments of the present disclosure, the processor 600 may further include logic to implement memory address prediction for memory disambiguation. In one embodiment, the execution block 611 of processor 600 may include a memory address predictor (not shown) for implementing techniques for supporting a configurable level of security for a range of memory addresses.

The term "register" may refer to an on-board processor storage location that is used as part of an instruction to identify an operand.
In other words, the registers are processor storage locations that are usable from outside the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data and of performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, and so on. In one embodiment, integer registers store 32-bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data.

For the discussion below, the registers should be understood to be data registers designed to hold packed data, such as the 64-bit wide MMX™ registers (also referred to as "mm" registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or newer technology (collectively referred to as "SSEx") can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data may be contained in the same register file or in different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or in the same registers.

Embodiments may be implemented in many different system types. Referring now to FIG. 7, shown is a block diagram illustrating a system 700 in which embodiments of the present disclosure may be used. As shown in FIG. 7, the multiprocessor system 700 is a point-to-point interconnect system and includes a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. While shown with only two processors 770, 780, it is to be understood that the scope of embodiments of the present disclosure is not so limited. In other embodiments, one or more additional processors may be present in a given processor. In one embodiment, the multiprocessor system 700 may implement the techniques described herein for supporting a configurable level of security for a range of memory addresses.

Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 also includes, as part of its bus controller units, point-to-point (P-P) interfaces 776 and 778; similarly, the second processor 780 includes P-P interfaces 786 and 788. Processors 770, 780 may exchange information via point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in FIG. 7, IMCs 772 and 782 couple the processors to respective memories, namely a memory 732 and a memory 734, which may be portions of main memory locally attached to the respective processors.

Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798.
Chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739.

A shared cache (not shown) may be included in either processor, or outside of both processors yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 790 may be coupled to a first bus 716 via an interface 796. In one embodiment, the first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third-generation I/O interconnect bus, although the scope of the present disclosure is not so limited.

As shown in FIG. 7, various I/O devices 714 may be coupled to the first bus 716, along with a bus bridge 718 that couples the first bus 716 to a second bus 720. In one embodiment, the second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727, and a storage unit 728, such as a disk drive or other mass storage device that may include instructions/code and data 730. Further, an audio I/O 724 may be coupled to the second bus 720. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG. 7, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG. 8, shown is a block diagram of a system 800 in which one embodiment of the present disclosure may operate. The system 800 may include one or more processors 810, 815 coupled to a graphics memory controller hub (GMCH) 820. The optional nature of the additional processor 815 is denoted in FIG. 8 with broken lines. In one embodiment, the processors 810, 815 implement techniques for supporting a configurable level of security for a range of memory addresses in accordance with embodiments of the present disclosure.

Each processor 810, 815 may be some version of the circuit, integrated circuit, processor, and/or silicon integrated circuit as described above. However, it should be noted that integrated graphics logic and integrated memory control units are unlikely to be present in the processors 810, 815. FIG. 8 illustrates that the GMCH 820 may be coupled to a memory 840 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache.

The GMCH 820 may be a chipset, or a portion of a chipset. The GMCH 820 may communicate with the processors 810, 815 and control interaction between the processors 810, 815 and the memory 840. The GMCH 820 may also act as an accelerated bus interface between the processors 810, 815 and other elements of the system 800. For at least one embodiment, the GMCH 820 communicates with the processors 810, 815 via a multi-drop bus, such as a frontside bus (FSB) 895.

Furthermore, the GMCH 820 is coupled to a display 845 (such as a flat panel or touchscreen display). The GMCH 820 may include an integrated graphics accelerator. The GMCH 820 is further coupled to an input/output (I/O) controller hub (ICH) 850, which may be used to couple various peripheral devices to the system 800. Shown for example in the embodiment of FIG. 8 are an external graphics device 860 and another peripheral device 870.
The external graphics device 860 may be a discrete graphics device coupled to the ICH 850.

Alternatively, additional or different processors may also be present in the system 800. For example, an additional processor 815 may include an additional processor that is the same as processor 810, an additional processor that is heterogeneous or asymmetric to processor 810, an accelerator (such as, for example, a graphics accelerator or a digital signal processing (DSP) unit), a field programmable gate array, or any other processor. There can be a variety of differences between the processors 810, 815 in terms of a spectrum of metrics of merit, including architectural, micro-architectural, thermal, and power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processors 810, 815. For at least one embodiment, the various processors 810, 815 may reside in the same die package.

Referring now to FIG. 9, shown is a block diagram of a system 900 in which an embodiment of the present disclosure may operate. FIG. 9 illustrates processors 970 and 980. In one embodiment, the processors 970, 980 may implement the techniques described above for supporting a configurable level of security for a range of memory addresses. Processors 970, 980 may include integrated memory and I/O control logic ("CL") 972 and 982, respectively, and may intercommunicate with each other via a point-to-point interconnect 950 between point-to-point (P-P) interfaces 978 and 988, respectively. Processors 970, 980 each communicate with chipset 990 via point-to-point interconnects 952 and 954 through the respective P-P interfaces 976 to 994 and 986 to 998, as shown. For at least one embodiment, the CL 972, 982 may include integrated memory controller units. The CL 972, 982 may include I/O control logic. As depicted, memories 932, 934 are coupled to the CL 972, 982, and I/O devices 914 are also coupled to the control logic 972, 982. Legacy I/O devices 915 are coupled to the chipset 990 via interface 996.

Embodiments may be implemented in many different system types. FIG. 10 is a block diagram of a SoC 1000 in accordance with an embodiment of the present disclosure. Dashed-line boxes are optional features on more advanced SoCs. In FIG. 10, an interconnect unit 1012 is coupled to: an application processor 1020 that includes a set of one or more cores 1002A-N and shared cache units 1006; a system agent unit 1010; a bus controller unit 1016; an integrated memory controller unit 1014; a set of one or more media processors 1018, which may include integrated graphics logic 1008, an image processor 1024 for providing still and/or video camera functionality, an audio processor 1026 for providing hardware audio acceleration, and a video processor 1028 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 1030; a direct memory access (DMA) unit 1032; and a display unit 1040 for coupling to one or more external displays. In one embodiment, a memory module may be included in the integrated memory controller unit 1014. In another embodiment, the memory module may be included in one or more other components of the SoC 1000 that may be used to access and/or control memory.
The application processor 1020 may include a PMU and may implement the techniques for supporting a configurable level of security for memory address ranges as described in the embodiments herein.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1006, and external memory (not shown) coupled to the set of integrated memory controller units 1014. The set of shared cache units 1006 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.

In some embodiments, one or more of the cores 1002A-N are capable of multithreading. The system agent 1010 includes those components coordinating and operating the cores 1002A-N. The system agent unit 1010 may include, for example, a power control unit (PCU) and a display unit. The PCU may be, or may include, the logic and components needed for regulating the power state of the cores 1002A-N and the integrated graphics logic 1008. The display unit is for driving one or more externally connected displays.

The cores 1002A-N may be homogeneous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 1002A-N may be in-order while others are out-of-order. As another example, two or more of the cores 1002A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

The application processor 1020 may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, Atom™, or Quark™ processor, available from Intel Corporation of Santa Clara, California. Alternatively, the application processor 1020 may be from another company, such as ARM Holdings™, MIPS™, etc. The application processor 1020 may be a special-purpose processor, such as, for example, a network or communication processor, a compression engine, a graphics processor, a co-processor, an embedded processor, or the like. The application processor 1020 may be implemented on one or more chips. The application processor 1020 may be a part of, and/or may be implemented on, one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

FIG. 11 is a block diagram of an embodiment of a system-on-chip (SoC) design in accordance with the present disclosure. As a specific illustrative example, SoC 1100 is included in user equipment (UE). In one embodiment, UE refers to any device to be used by an end-user to communicate, such as a hand-held phone, a smartphone, a tablet, an ultra-thin notebook, a notebook with a broadband adapter, or any other similar communication device. A UE often connects to a base station or node, which in essence corresponds to a mobile station (MS) in a GSM network.

Here, the SoC 1100 includes two cores, 1106 and 1107. The cores 1106 and 1107 may conform to an instruction set architecture, such as a Core™-based processor, an AMD processor, a MIPS-based processor, an ARM-based processor design, or a customer thereof, as well as their licensees or adopters. The cores 1106 and 1107 are coupled to cache control 1108, which is associated with bus interface unit 1109 and L2 cache 1110, to communicate with other parts of system 1100.
Interconnect 1110 includes on-chip interconnects, such as an IOSF or AMBA interconnect or other interconnects discussed above, which may implement one or more aspects of the disclosure. In one embodiment, the cores 1106, 1107 may implement techniques for supporting a configurable level of security for a range of memory addresses as described herein.

Interconnect 1110 provides communication channels to the other components, such as a Subscriber Identity Module (SIM) 1130 to interface with a SIM card, a boot ROM to hold boot code for execution by cores 1106 and 1107 to initialize and boot the SoC 1100, an SDRAM controller 1140 to interface with external memory (e.g., DRAM 1160), a flash controller 1145 to interface with non-volatile memory (e.g., flash memory 1165), a peripheral control 1150 (e.g., a serial peripheral interface) to interface with peripheral devices, video codec 1120 and video interface 1125 to display and receive input (e.g., touch-enabled input), GPU 1115 to perform graphics-related computations, and the like. Any of these interfaces may incorporate aspects of the disclosure described herein. In addition, the system 1100 illustrates peripherals for communication, such as a Bluetooth module 1170, 3G modem 1175, GPS 1180, and Wi-Fi 1185.

FIG. 12 shows a diagrammatic representation of a machine in the example form of a computer system 1200 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed. In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a LAN, an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client device in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1206 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1218, which communicate with each other via a bus 1230.

Processing device 1202 represents one or more general-purpose processing devices, such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. Processing device 1202 may also be one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like.
In one embodiment, processing device 1202 may include one or more processing cores. The processing device 1202 is configured to execute processing logic 1226 for performing the operations and steps discussed herein. In one embodiment, processing device 1202 is the same as processor architecture 100 described with reference to FIG. 1, which implements techniques for supporting a configurable level of security for a range of memory addresses as described in embodiments of the present disclosure.

The computer system 1200 may further include a network interface device 1208 communicatively coupled to a network 1220. The computer system 1200 also may include a video display unit 1210 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 1212 (e.g., a keyboard), a cursor control device 1214 (e.g., a mouse), and a signal generation device 1216 (e.g., a speaker). Furthermore, computer system 1200 may include a graphics processing unit 1222, a video processing unit 1228, and an audio processing unit 1232.

The data storage device 1218 may include a machine-accessible storage medium 1224 on which is stored software 1226 implementing any one or more of the methodologies of functions described herein, such as implementing silent memory instructions and miss-rate tracking to optimize switching strategies for threads in a processing device, as described above. The software 1226 may also reside, completely or at least partially, within the main memory 1204 as instructions 1226 and/or within the processing device 1202 as processing logic 1226 during execution thereof by the computer system 1200; the main memory 1204 and the processing device 1202 also constituting machine-accessible storage media.

The machine-readable storage medium 1224 may also be used to store instructions 1226 implementing silent memory instructions and miss-rate tracking to optimize switching strategies for threads in a processing device, such as described with reference to processing device 100 in FIG. 1, and/or a software library containing methods that call the above applications. While the machine-accessible storage medium 1224 is shown in an example embodiment to be a single medium, the term "machine-accessible storage medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "machine-accessible storage medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
The term "machine accessible storage medium" should be considered accordingly to include, but is not limited to, solid state memory as well as optical and magnetic media.The following examples relate to further embodiments.Example 1 is a processor including: a) a processing core; b) a memory controller operatively coupled to a processing core for accessing data in off-chip memory; and c) a memory encryption engine (MEE), Operatively coupled to a memory controller for:memory access operations in response to detection of a memory location identified by a memory address within a range of memory addresses associated with off-chip memory, 1) based on a value stored on a secure range register The value identifies the security level indicator associated with the memory location; and 2) accesses at least a portion of the data item associated with the memory address range of off-chip memory considering the security level indicator.In Example 2, the subject matter of Example 1, wherein the security level indicator identifies an encryption only memory range and a full protection memory range of off-chip memory.In Example 3, the subject matter of any of Examples 1-2, wherein the security level indicator includes one or more memory addresses for identifying at least one of an encryption only memory range and a full protection memory range of off-chip memory. range.In Example 4, the subject matter of any one of Examples 1-3, wherein the security level indicator comprises a memory address that separates off-chip memory into an encryption-only memory range and a fully-protected memory range.In Example 5, the subject matter of any of Examples 1-4, wherein the MEE is further for encrypting data associated with the data item in response to detecting an encrypted storage-only range of data items to be transferred to off-chip memory.In Example 6, the subject matter of any of Examples 1-5, wherein the MEE is further for decrypting data associated with the data item in response to detecting that the data item is to be transmitted from the only encrypted memory range of the off-chip memory.In Example 7, the subject matter of any of Examples 1-6, wherein in response to detecting that the data item is to be transferred to a fully protected memory range of off-chip memory, the MEE is further for storing cryptographic elements associated with the data item. data.In Example 8, the subject matter of any of Examples 1-7, wherein in response to detecting that the data item is to be transferred from a fully protected memory range of off-chip memory, the MEE is further for retrieving the cryptographic metadata associated with the data item. .In Example 9, the subject matter of any of Examples 1-8, wherein data is protected by instruction set architecture (ISA) instructions associated with the processor core, the ISA instructions protect data from software attacks, and is related to ISA instructions The scope of the connected memory is stored in a fully protected memory range to protect the data from active and replay attacks.Various embodiments may have different combinations of the structural features described above. 
For example, all of the optional features of the processors described above may also be implemented with reference to the methods or processes described herein, and specifics in the examples may be used anywhere in one or more embodiments.

Example 10 is a method comprising: 1) in response to detecting a memory access operation to a memory location identified by a memory address within a range of memory addresses associated with an off-chip memory device, identifying, using a processing device, a security level indicator associated with the memory location based on a value stored in a secure range register; and 2) accessing, using the processing device, at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator.

In Example 11, the subject matter of Example 10, wherein the security level indicator identifies an encryption-only memory range and a full-protection memory range of the off-chip memory device.

In Example 12, the subject matter of any of Examples 10-11, wherein the security level indicator comprises one or more memory address ranges identifying at least one of an encryption-only memory range and a full-protection memory range of the off-chip memory device.

In Example 13, the subject matter of any of Examples 10-12, wherein the security level indicator comprises a memory address that separates the off-chip memory into an encryption-only memory range and a full-protection memory range.

In Example 14, the subject matter of any of Examples 10-13, further comprising encrypting data associated with the data item in response to detecting that the data item is to be transferred to the encryption-only memory range of the off-chip memory device.

In Example 15, the subject matter of any of Examples 10-14, further comprising decrypting data associated with the data item in response to detecting that the data item is to be transferred from the encryption-only memory range of the off-chip memory device.

In Example 16, the subject matter of any of Examples 10-15, further comprising generating cryptographic metadata associated with the data item in response to detecting that the data item is to be transferred to the full-protection memory range of the off-chip memory.

In Example 17, the subject matter of any of Examples 10-16, further comprising retrieving the cryptographic metadata associated with the data item in response to detecting that the data item is to be transferred from the full-protection memory range of the off-chip memory.

In Example 18, the subject matter of any of Examples 10-17, wherein the data is protected by instruction set architecture (ISA) instructions associated with a processing core, the ISA instructions to protect the data from software attacks, and wherein a range of memory associated with the ISA instructions is stored in the full-protection memory range to protect the data from active and replay attacks.

Various embodiments may have different combinations of the operating features described above. For example, all of the optional features of the methods described above may also be implemented with respect to a non-transitory computer-readable storage medium.
Specifics in these examples may be used anywhere in one or more embodiments.

Example 19 is a non-transitory computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to: 1) in response to detecting a memory access operation to a memory location identified by a memory address within a range of memory addresses associated with off-chip memory, identify a security level indicator associated with the memory location based on a value stored in a secure range register; and 2) access at least a portion of a data item associated with the memory address range of the off-chip memory in view of the security level indicator.

In Example 20, the subject matter of Example 19, wherein the security level indicator identifies an encryption-only memory range and a full-protection memory range of the off-chip memory.

In Example 21, the subject matter of any of Examples 19-20, wherein the security level indicator comprises one or more memory address ranges identifying at least one of an encryption-only memory range and a full-protection memory range of the off-chip memory.

In Example 22, the subject matter of any of Examples 19-21, wherein the security level indicator comprises a memory address that separates the off-chip memory into an encryption-only memory range and a full-protection memory range.

In Example 23, the subject matter of any of Examples 19-22, wherein the executable instructions further cause the processing system to encrypt data associated with the data item in response to detecting that the data item is to be transferred to the encryption-only memory range of the off-chip memory.

In Example 24, the subject matter of any of Examples 19-23, wherein the executable instructions further cause the processing system to decrypt data associated with the data item in response to detecting that the data item is to be transferred from the encryption-only memory range of the off-chip memory.

In Example 25, the subject matter of any of Examples 19-24, wherein the executable instructions further cause the processing system to generate cryptographic metadata associated with the data item in response to detecting that the data item is to be transferred to the full-protection memory range of the off-chip memory.

In Example 26, the subject matter of any of Examples 19-25, wherein the executable instructions further cause the processing system to retrieve the cryptographic metadata associated with the data item in response to detecting that the data item is to be transferred from the full-protection memory range of the off-chip memory.

In Example 27, the subject matter of any of Examples 19-26, wherein the data is protected by instruction set architecture (ISA) instructions associated with a processing core, the ISA instructions to protect the data from software attacks, and wherein a range of memory associated with the ISA instructions is stored in the full-protection memory range to protect the data from active and replay attacks.

Various embodiments may have different combinations of the structural features described above.
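Continuing the illustrative sketch above, a write path consistent with Examples 5, 7, 14, 16, 23, and 25 might be organized as follows. The helpers encrypt_block(), store_crypto_metadata(), and dram_write() are hypothetical stand-ins for the engine's cryptographic and memory logic, given here as empty stubs only so that the sketch compiles; they are not functions described by the embodiments above.

/* Hypothetical stand-ins for the MEE's cryptographic and DRAM logic. */
static void encrypt_block(uint64_t addr, uint8_t block[64])
{
    (void)addr; (void)block;  /* e.g., counter-mode encryption      */
}
static void store_crypto_metadata(uint64_t addr, const uint8_t *block)
{
    (void)addr; (void)block;  /* e.g., version counters and MACs    */
}
static void dram_write(uint64_t addr, const uint8_t *block)
{
    (void)addr; (void)block;  /* issue the write to off-chip DRAM   */
}

/* Sketch of the MEE write path, reusing classify() and the types
 * from the earlier sketch. */
static void mee_write(const secure_range_reg_t *reg, uint64_t addr,
                      uint8_t block[64])
{
    switch (classify(reg, addr)) {
    case SEC_ENCRYPT_ONLY:
        /* Encryption-only range: ciphertext is written, no metadata. */
        encrypt_block(addr, block);
        break;
    case SEC_FULL_PROTECTION:
        /* Full-protection range: ciphertext plus integrity and
         * anti-replay metadata are stored alongside the data. */
        encrypt_block(addr, block);
        store_crypto_metadata(addr, block);
        break;
    case SEC_NONE:
        /* Outside the protected range: data passes through unchanged. */
        break;
    }
    dram_write(addr, block);
}

A read path would mirror this structure, decrypting data fetched from the encryption-only range and additionally retrieving and verifying the stored metadata for the full-protection range, as in Examples 6, 8, 15, 17, 24, and 26.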
For example, all of the optional features of the processors and methods described above may also be implemented with reference to the systems described herein, and specifics in the examples may be used anywhere in one or more embodiments.

Example 28 is a non-transitory computer-readable storage medium comprising instructions that, when executed by a processor, cause the processor to perform the subject matter of any of Examples 10-18.

Example 29 is a system on chip (SoC) comprising a plurality of functional units and a memory controller unit (MCU) coupled to the plurality of functional units, wherein the MCU comprises a memory encryption engine (MEE), and wherein the MEE is configured to perform the subject matter of any of Examples 10-18.

In Example 30, the subject matter of Example 29, wherein the SoC further comprises the subject matter of any of Examples 1-9 and 19-27.

Example 31 is an apparatus comprising: a) a plurality of functional units of a processor; b) means for, in response to detecting a memory access operation to a memory location identified by a memory address within a range of memory addresses associated with off-chip memory, identifying a security level indicator associated with the memory location based on a value stored in a secure range register; and c) means for accessing, in view of the security level indicator, at least a portion of a data item associated with the memory address range of the off-chip memory.

In Example 32, the subject matter of Example 31 further comprises the subject matter of any of Examples 1-9 and 19-27.

Example 33 is a system comprising a memory device and a processor comprising a memory encryption engine (MEE), wherein the processor is configured to perform the subject matter of any of Examples 10-18.

In Example 34, the subject matter of Example 33 further comprises the subject matter of any of Examples 1-9 and 19-27.

While the present disclosure has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of the present disclosure.

A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit-level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage medium, such as a disc, may be the machine-readable medium that stores information transmitted via optical or electrical waves modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made.
Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present disclosure.

A module as used herein refers to any combination of hardware, software, and/or firmware. As an example, a module includes hardware, such as a microcontroller, associated with a non-transitory medium to store code adapted to be executed by the microcontroller. Therefore, reference to a module, in one embodiment, refers to the hardware, which is specifically configured to recognize and/or execute the code to be held on a non-transitory medium. Furthermore, in another embodiment, use of a module refers to the non-transitory medium including the code, which is specifically adapted to be executed by the microcontroller to perform predetermined operations. And as can be inferred, in yet another embodiment, the term module (in this example) may refer to the combination of the microcontroller and the non-transitory medium. Often module boundaries that are illustrated as separate commonly vary and potentially overlap. For example, a first and a second module may share hardware, software, firmware, or a combination thereof, while potentially retaining some independent hardware, software, or firmware. In one embodiment, use of the term logic includes hardware, such as transistors, registers, or other hardware, such as programmable logic devices.

Use of the phrase "configured to," in one embodiment, refers to arranging, putting together, manufacturing, offering to sell, importing, and/or designing an apparatus, hardware, logic, or element to perform a designated or determined task. In this example, an apparatus or element thereof that is not operating is still "configured to" perform a designated task if it is designed, coupled, and/or interconnected to perform said designated task. As a purely illustrative example, a logic gate may provide a 0 or a 1 during operation. But a logic gate "configured to" provide an enable signal to a clock does not include every potential logic gate that may provide a 1 or 0. Instead, the logic gate is one coupled in some manner such that during operation its 1 or 0 output is to enable the clock. Note once again that use of the term "configured to" does not require operation, but instead focuses on the latent state of an apparatus, hardware, and/or element, where in the latent state the apparatus, hardware, and/or element is designed to perform a particular task when the apparatus, hardware, and/or element is operating.

Furthermore, use of the phrases "to," "capable of/to," and/or "operable to," in one embodiment, refers to some apparatus, logic, hardware, and/or element designed in such a way to enable use of the apparatus, logic, hardware, and/or element in a specified manner. Note as above that such use, in one embodiment, refers to the latent state of an apparatus, logic, hardware, and/or element, where the apparatus, logic, hardware, and/or element is not operating but is designed in such a manner to enable use of the apparatus in a specified manner.

A value, as used herein, includes any known representation of a number, a state, a logical state, or a binary logical state. Often, the use of logic levels, logic values, or logical values is also referred to as 1's and 0's, which simply represents binary logic states.
For example, a 1 refers to a high logic level and a 0 refers to a low logic level. In one embodiment, a storage cell, such as a transistor or flash cell, may be capable of holding a single logical value or multiple logical values. However, other representations of values in computer systems are also used. For example, the decimal number ten may also be represented as the binary value 1010 and as the hexadecimal letter A. Therefore, a value includes any representation of information capable of being held in a computer system.

Moreover, states may be represented by values or portions of values. As an example, a first value, such as a logical one, may represent a default or initial state, while a second value, such as a logical zero, may represent a non-default state. In addition, the terms reset and set, in one embodiment, refer to a default and an updated value or state, respectively. For example, a default value potentially includes a high logical value, i.e., reset, while an updated value potentially includes a low logical value, i.e., set. Note that any combination of values may be utilized to represent any number of states.

The embodiments of methods, hardware, software, firmware, or code set forth above may be implemented via instructions or code stored on a machine-accessible, machine-readable, computer-accessible, or computer-readable medium which are executable by a processing element. A non-transitory machine-accessible/readable medium includes any mechanism that provides (i.e., stores and/or transmits) information in a form readable by a machine, such as a computer or electronic system. For example, a non-transitory machine-accessible medium includes random-access memory (RAM), such as static RAM (SRAM) or dynamic RAM (DRAM); ROM; magnetic or optical storage media; flash memory devices; electrical storage devices; acoustical storage devices; other forms of storage devices for holding information received from transitory (propagated) signals (e.g., carrier waves, infrared signals, digital signals); etc., which are to be distinguished from the non-transitory media from which information may be received.

Instructions used to program logic to perform embodiments of the disclosure may be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (such as a computer), including, but not limited to: floppy disks, optical disks, compact disc read-only memory (CD-ROM), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, a computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure.
Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

In the foregoing specification, a detailed description has been given with reference to specific exemplary embodiments. It will, however, be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the disclosure as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense. Furthermore, the foregoing use of embodiment and other exemplary language does not necessarily refer to the same embodiment or the same example, but may refer to different and distinct embodiments, as well as potentially the same embodiment.
A method for manufacturing a semiconductor device includes forming a buried layer of a semiconductor substrate. An active region is formed adjacent at least a portion of the buried layer. A first isolation structure is formed adjacent at least a portion of the buried layer. A second isolation structure is formed adjacent at least a portion of the active region. A base layer is formed adjacent at least a portion of the active region. A dielectric layer is formed adjacent at least a portion of the base layer, and then at least part of the dielectric layer is removed at an emitter contact location and at a sinker contact location. An emitter structure is formed at the emitter contact location. Forming the emitter structure includes etching the semiconductor device at the sinker contact location to form a sinker contact region. The sinker contact region has a first depth. The method may also include forming a gate structure. Forming the gate structure includes etching the sinker contact region thereby increasing the first depth of the sinker contact region to a second depth.
1. A method for manufacturing a semiconductor device, comprising: forming a buried layer of a semiconductor substrate; forming an active region adjacent at least a portion of the buried layer; forming a base layer adjacent at least a portion of the active region; forming a dielectric layer adjacent at least a portion of the base layer; removing at least part of the dielectric layer at an emitter contact location and at a sinker contact location; and forming an emitter structure at the emitter contact location; wherein forming the emitter structure comprises etching the semiconductor device at the sinker contact location to form a sinker contact region, the sinker contact region having a first depth.

2. The method of Claim 1, further comprising forming a gate structure; wherein forming the gate structure comprises etching the sinker contact region, thereby increasing the first depth of the sinker contact region to a second depth.

3. The method of Claim 1, further comprising forming a collector contact at the sinker contact region, the collector contact operable to electrically contact the buried layer.

4. The method of Claim 1, wherein the first depth is approximately 0.1 to 0.2 microns.

5. The method of Claim 1, further comprising forming an oxide layer adjacent at least a portion of the buried layer.

6. The method of Claim 2, wherein the second depth is approximately 0.3 to 0.6 microns.

7. The method of Claim 2, further comprising forming a collector contact at the sinker contact region, the collector contact operable to electrically contact the buried layer.

8. The method of Claim 1, further comprising forming a first isolation structure adjacent at least a portion of the buried layer.

9. The method of Claim 8, further comprising forming a second isolation structure adjacent at least a portion of the active region.

10. The method of Claim 8, wherein the first isolation structure comprises a deep trench.

11. The method of Claim 8, further comprising forming a liner oxide adjacent at least a portion of the first isolation structure.

12. The method of Claim 9, wherein the second isolation structure comprises a shallow trench.

13. A semiconductor device, comprising: a buried layer of a semiconductor substrate; an active region adjacent at least a portion of the buried layer; a base layer adjacent at least a portion of the active region; a dielectric portion adjacent at least a portion of the base layer; an emitter structure adjacent at least a portion of the base layer; and a sinker contact region of the semiconductor substrate, the sinker contact region formed adjacent at least a portion of the active region when the emitter structure is formed; wherein the sinker contact region has a depth.

14. The semiconductor device of Claim 13, wherein the depth is approximately 0.1 to 0.2 microns.

15. The semiconductor device of Claim 13, wherein the depth is approximately 0.3 to 0.6 microns.

16. The semiconductor device of Claim 13, further comprising a collector contact formed at the sinker contact region, the collector contact operable to electrically contact the buried layer.

17. The semiconductor device of Claim 13, further comprising an oxide layer adjacent at least a portion of the buried layer.

18. The semiconductor device of Claim 13, further comprising a first isolation structure adjacent at least a portion of the buried layer.

19. The semiconductor device of Claim 18, further comprising a second isolation structure adjacent at least a portion of the active region.

20. The semiconductor device of Claim 18, wherein the first isolation structure comprises a deep trench.

21. The semiconductor device of Claim 18, further comprising a liner oxide adjacent at least a portion of the first isolation structure.

22. The semiconductor device of Claim 19, wherein the second isolation structure comprises a shallow trench.
RELATED APPLICATIONS

This application is related to Application Serial Number            , entitled "Method for Manufacturing and Structure of Semiconductor Device with Shallow Trench Collector Contact Region," filed on October 1, 2001.

TECHNICAL FIELD OF THE INVENTION

This invention relates generally to semiconductor devices and, more specifically, to a semiconductor device with a sinker contact region and a method of manufacturing the same.

BACKGROUND OF THE INVENTION

In complementary bipolar technologies for high-precision, high-speed analog and mixed-signal applications, a sinker contact is generally used to reduce the collector resistance. In a standard process integration sequence, collector sinkers are realized by using high-energy ion implantation of p-type or n-type dopants into the collector epitaxy. Dopant activation and diffusion are then realized by a thermal step (e.g., furnace or rapid thermal anneal). The diffusion penetrates into the collector epitaxial layer to electrically contact the underlying buried layer. One or two lithographic steps are necessary to selectively introduce dopants into the collector epitaxy. Furthermore, high-energy, high-dose ion implant capability is used for higher voltage applications in which thick collector epitaxy is used to guarantee high breakdown characteristics.

SUMMARY OF THE INVENTION

The present invention provides a semiconductor device and method for manufacturing the same that substantially eliminates or reduces at least some of the disadvantages and problems associated with previously developed semiconductor devices and methods for manufacturing the same.

In accordance with a particular embodiment of the present invention, a method for manufacturing a semiconductor device includes forming a buried layer of a semiconductor substrate. An active region is formed adjacent at least a portion of the buried layer. A first isolation structure is formed adjacent at least a portion of the buried layer. A second isolation structure is formed adjacent at least a portion of the active region. A base layer is formed adjacent at least a portion of the active region. A dielectric layer is formed adjacent at least a portion of the base layer, and then at least part of the dielectric layer is removed at an emitter contact location and at a sinker contact location. An emitter structure is formed at the emitter contact location. Forming the emitter structure includes etching the semiconductor device at the sinker contact location to form a sinker contact region. The sinker contact region has a first depth. The method may also include forming a gate structure. Forming the gate structure includes etching the sinker contact region, thereby increasing the first depth of the sinker contact region to a second depth.

In accordance with another embodiment, a semiconductor device includes a buried layer of a semiconductor substrate and an active region adjacent at least a portion of the buried layer. A first isolation structure is adjacent at least a portion of the buried layer, and a second isolation structure is adjacent at least a portion of the active region. A base layer is adjacent at least a portion of the active region, and a dielectric portion is adjacent at least a portion of the base layer. The semiconductor device includes an emitter structure adjacent at least a portion of the base layer and a sinker contact region of the semiconductor substrate.
The sinker contact region is formed adjacent at least a portion of the second isolation structure when the emitter structure is formed. The sinker contact region may have a depth of approximately 0.3 to 0.6 microns.

Technical advantages of particular embodiments of the present invention include a method of manufacturing a semiconductor device with a sinker contact region that requires fewer lithographic steps to complete the manufacturing process, since the sinker contact region is formed when the dielectric layer is defined. Accordingly, the total time it takes to manufacture the semiconductor device and the labor resources required are reduced.

Another technical advantage of particular embodiments of the present invention includes a method of manufacturing a semiconductor device with a sinker contact region that does not require high-energy ion implantation to make the electrical contact between a collector contact and the buried layer, since the collector contact can be formed within the sinker contact region. This can reduce the amount of time it takes to manufacture the semiconductor device. It can also decrease the potential for contamination of critical devices or structures, since the use of high-energy implants can lead to such contamination during the manufacturing process.

Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions and claims. Moreover, while specific advantages have been enumerated above, various embodiments may include all, some or none of the enumerated advantages.

BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the particular embodiments of the invention and their advantages, reference is now made to the following descriptions, taken in conjunction with the accompanying drawings, in which:

FIGURE 1 is a cross-sectional diagram illustrating a semiconductor device with sinker contact region 38 at one stage of a manufacturing process, in accordance with a particular embodiment of the present invention;

FIGURE 2 is a cross-sectional diagram illustrating a semiconductor device with an active region and a buried layer at one stage of a manufacturing process, in accordance with a particular embodiment of the present invention;

FIGURE 3 is a cross-sectional diagram illustrating the semiconductor device of FIGURE 2 at another stage of a manufacturing process showing an emitter structure, in accordance with a particular embodiment of the present invention;

FIGURE 4 is a cross-sectional diagram illustrating the semiconductor device of FIGURE 1 at another stage of a manufacturing process showing a sinker contact region, in accordance with a particular embodiment of the present invention; and

FIGURE 5 is a cross-sectional diagram illustrating the semiconductor device of FIGURE 4 at another stage of a manufacturing process showing a collector contact and an emitter contact, in accordance with a particular embodiment of the present invention.

DETAILED DESCRIPTION OF THE INVENTION

FIGURE 1 illustrates a semiconductor device 10 at one stage of a manufacturing process, in accordance with an embodiment of the present invention. Semiconductor device 10 includes a sinker contact region 38 formed using methods of the present invention. Sinker contact region 38 provides an area for a collector contact to be subsequently formed. Sinker contact region 38 is formed when portions of a dielectric layer 30 are removed, leaving dielectric portions 42.
Since there is no dielectric layer 30 over sinker contact region 38, the etching process used to remove portions of dielectric layer 30 creates sinker contact region 38 in semiconductor substrate 11. Subsequently forming a collector contact within sinker contact region 38 will facilitate an electrical contact between the collector contact and a buried layer 16 of semiconductor substrate 11. Such electrical contact allows for the flow of an electrical current between the collector contact and buried layer 16.

Forming sinker contact region 38 in this manner requires fewer lithographic steps to complete the process, since the region is formed when dielectric portions 42 are defined. Furthermore, high-energy ion implantation is not required for a collector contact to electrically contact the buried layer, since the collector contact can be formed within sinker contact region 38. This can reduce the amount of time it takes to manufacture semiconductor device 10. It can also decrease the potential for contamination of critical devices or structures, since the use of high-energy implants can lead to such contamination during the manufacturing process.

Semiconductor device 10 includes semiconductor substrate 11, which comprises a wafer 13. As discussed in greater detail below, in this embodiment semiconductor substrate 11 also includes an oxide layer 14 and buried layer 16. An active region 18 is disposed adjacent at least a portion of buried layer 16. Deep trench isolation structures 20 are also adjacent at least a portion of buried layer 16 and include a liner oxide 22. Shallow trench isolation structures 24 are adjacent at least a portion of active region 18. A base layer 26 is disposed adjacent at least a portion of semiconductor substrate 11. Emitter structure 35 is disposed adjacent at least a portion of base layer 26. A first gate stack layer 34 is also disposed adjacent at least a portion of semiconductor substrate 11, and a second gate stack layer 36 is disposed adjacent at least a portion of first gate stack layer 34.

FIGURE 2 illustrates a particular stage during the manufacturing process of semiconductor device 10 of FIGURE 1. Semiconductor substrate 11 comprises wafer 13, which is formed from a single crystalline silicon material. Semiconductor substrate 11 may comprise other suitable materials or layers without departing from the scope of the present invention. For example, semiconductor substrate 11 may include a recrystallized semiconductor material, a polycrystalline semiconductor material or any other suitable semiconductor material.

Semiconductor device 10 includes an oxide layer 14. Oxide layer 14 may be formed by any of a variety of techniques well known to those skilled in the art and may comprise any suitable oxide. Other embodiments of the present invention may not include an oxide layer.

Buried layer 16 is formed within semiconductor substrate 11 using any of a variety of techniques well known to those of ordinary skill in the art. Buried layer 16 may either be negatively doped to form a negative buried layer ("NBL") or positively doped to form a positive buried layer ("PBL"). In an NBL, electrons conduct electricity during operation of semiconductor device 10, while holes conduct electricity in a PBL.
Any of a number of dopants may be used to form an NBL, such as arsenic, phosphorus or antimony; and dopants such as boron or indium may be used to form a PBL.

Semiconductor device 10 includes first gate stack layer 34, which is defined to form a gate stack later in the manufacturing process. In the illustrated embodiment, first gate stack layer 34 comprises a polysilicon layer. First gate stack layer 34 may be formed by any of a variety of methods well known to those of ordinary skill in the art. Other embodiments of the present invention may not include a first gate stack layer 34.

Active region 18 is formed adjacent at least a portion of buried layer 16. Active region 18 is a substantially undoped or lightly doped region. Active region 18 may contain some diffusion of atoms from buried layer 16 migrating upward. Active region 18 may be formed by any of a variety of techniques well known to those of ordinary skill in the art, such as epitaxial growth.

In the illustrated embodiment, deep trench isolation structures 20 are formed adjacent at least a portion of buried layer 16. Deep trench isolation structures 20 provide isolation between elements of semiconductor device 10 during use of semiconductor device 10. Other embodiments of the present invention may or may not include deep trench isolation structures 20, or may provide isolation between elements of a semiconductor device in other ways, such as through diffusion.

Deep trench isolation structures 20 may be formed using photoresist and etching, or by any other means known to those of ordinary skill in the art. Deep trench isolation structures 20 may be filled with a semiconductive material such as intrinsic polycrystalline silicon, or an insulative material such as silicon dioxide. Such material may be deposited within deep trench isolation structures 20 using a suitable deposition process such as chemical vapor deposition. In the illustrated embodiment, deep trench isolation structures 20 also include a liner oxide 22.

Shallow trench isolation structures 24 are formed adjacent at least a portion of active region 18. Shallow trench isolation structures 24 provide isolation between active regions of semiconductor device 10. Other embodiments of the present invention may or may not include shallow trench isolation structures 24, or may provide isolation between active regions of a semiconductor device in other ways, such as local oxidation of silicon (LOCOS).

Shallow trench isolation structures 24 may be formed using photoresist and etching, or by any other means known to those of ordinary skill in the art. Shallow trench isolation structures 24 may be filled with a suitable insulative material such as silicon dioxide. Such material may be deposited within shallow trench isolation structures 24 using a suitable deposition process such as chemical vapor deposition. Shallow trench isolation structures 24 may have a depth of approximately 3,000 to 10,000 angstroms.

Base layer 26 is formed adjacent at least a portion of semiconductor substrate 11. Base layer 26 may comprise an in-situ doped or implanted silicon germanium, or any other suitable material containing silicon, such as silicon germanium carbon or silicon itself. Base layer 26 may be formed by any of a variety of techniques well known to those of ordinary skill in the art and may have a thickness of approximately 190 nanometers.

Second gate stack layer 36 of semiconductor device 10 is formed.
Second gate stack layer 36 is defined along with first gate stack layer 34 to form a gate stack later in the manufacturing process. In the illustrated embodiment, second gate stack layer 36 comprises a polysilicon layer. Second gate stack layer 36 may be formed by any of a variety of methods well known to those of ordinary skill in the art. Other embodiments of the present invention may not have a second gate stack layer 36.

Dielectric layer 30 is formed adjacent at least a portion of base layer 26. Dielectric layer 30 may comprise an appropriate dielectric, such as a suitable nitride or oxide. Dielectric layer 30 may be formed by any of a variety of techniques well known to those of ordinary skill in the art.

Dielectric layer 30 may originally have a portion 31 disposed above an area where an emitter structure will be formed and a portion 33 disposed above an area 28 which will become a sinker contact region. However, portions 31 and 33 are removed using a lithography and etching process, or any other suitable process known to those of ordinary skill in the art. The location of portion 33 partially overlaps shallow trench isolation structures 24 so that, when a sinker contact region is formed, it will be self-aligned with area 28 between shallow trench isolation structures 24. Ion implantation may be used at this stage to define a selectively implanted collector.

After a clean-up process, an emitter layer 32 is disposed at least partially upon dielectric layer 30. In the illustrated embodiment, emitter layer 32 comprises a polysilicon material, but other embodiments may include an amorphous emitter layer. Emitter layer 32 may be formed by any of a variety of techniques well known to those of ordinary skill in the art. Dopants are selectively implanted to provide a high-concentration dopant source for emitter diffusion. An ultra-shallow emitter base metallurgic junction can be achieved by diffusing the implanted dopant with a rapid thermal process.

FIGURE 3 illustrates semiconductor device 10 of FIGURE 2 at a further stage in the manufacturing process. Emitter layer 32 of FIGURE 2 is subjected to a lithographic step and etching process to define and form emitter structure 35. Other processes well known to those of ordinary skill in the art may be used to remove portions of emitter layer 32 to form emitter structure 35. Dielectric portions 42 of dielectric layer 30 are under emitter structure 35.

Referring back to FIGURE 1, semiconductor device 10 of FIGURE 3 is illustrated at a further stage in the manufacturing process. Portions of dielectric layer 30 are removed, leaving dielectric portions 42 under emitter structure 35. Such removal is completed by subjecting dielectric layer 30 to a lithographic step and etching process. Using such a process, sinker contact region 38 is formed within semiconductor substrate 11, since there is no dielectric layer 30 to etch directly above the area where sinker contact region 38 is formed. Sinker contact region 38 provides a location where a collector contact can be formed later in the manufacturing process. In this embodiment, sinker contact region 38 has a depth of approximately 0.1 microns, but other embodiments may include a sinker contact region having other depths.

Forming sinker contact region 38 in this manner provides several technical advantages. Fewer lithographic steps may be needed to complete the process, since the region is formed when removing portions of dielectric layer 30.
Furthermore, high-energy ion implantation may not be required for a subsequently formed collector contact to electrically contact buried layer 16, since the collector contact can be formed within sinker contact region 38.

FIGURE 4 illustrates semiconductor device 10 of FIGURE 1 at a further stage in the manufacturing process, in accordance with another embodiment of the present invention. The illustrated portions of first gate stack layer 34 and second gate stack layer 36 of FIGURE 3 have been removed to define a gate stack. Such removal can be achieved through a lithographic step and etching process. Such an etching process increases the depth of sinker contact region 38, providing a deeper contact region for the formation of the collector contacts. In this embodiment, sinker contact region 38 has a suitable depth, such as approximately 0.3 to 0.6 microns. This enables the collector contacts to be formed closer to buried layer 16, reducing the need for ion implantation steps or other methods to facilitate the connection between the collector contacts and buried layer 16.

FIGURE 5 illustrates semiconductor device 10 of FIGURE 4 at a further stage in the manufacturing process, in accordance with another embodiment of the present invention. FIGURE 5 includes collector contact 50 formed within sinker contact region 38. Collector contact 50 may electrically contact buried layer 16. Silicide layers 48 are formed, and source/drain implant 44 is made to facilitate the electrical connection between collector contact 50 and buried layer 16. Emitter contact 52 is formed adjacent silicide layer 48 proximate emitter structure 35. Spacers 46 are formed on semiconductor device 10 as well, using any of a variety of techniques known to those of ordinary skill in the art.

Standard processing steps are undertaken to complete the manufacture of semiconductor device 10. Appropriate metal interconnections are formed and passivation is undertaken. Source/drain or extrinsic base ion implants and diffusion may be performed to further complete the connection with buried layer 16. Other appropriate methods or steps may be performed to complete the manufacture of semiconductor device 10.

The illustrated embodiments incorporate embodiments of the invention in a bipolar complementary metal oxide semiconductor (BiCMOS) technology. Particular embodiments of the present invention may be incorporated into complementary metal oxide semiconductor (CMOS) and complementary bipolar complementary metal oxide semiconductor (CBiCMOS) technologies as well. Other technologies well known to those of ordinary skill in the art may utilize particular embodiments of the present invention as well.

Although particular configurations and methods have been illustrated for semiconductor device 10, other embodiments of the present invention may include other configurations and/or methods. For example, other embodiments may utilize a highly selective overetch at the removal of the emitter layer. This might require a highly selective silicon/nitride or silicon/oxide etch, but has the advantage of allowing the depth of the sinker contact region to be customized to the requirements and specifications of the particular technology being developed.

Although the present invention has been described in detail, various changes and modifications may be suggested to one skilled in the art. It is intended that the present invention encompass such changes and modifications as falling within the scope of the appended claims.
A method of manufacturing a semiconductor device (300) is provided herein, where the width effect is reduced in the resulting semiconductor device (300). The method involves providing a substrate (200) having semiconductor material (202), forming an isolation trench (212) in the semiconductor material (202), and lining the isolation trench (212) with a liner material (214) that substantially inhibits formation of high-k material thereon. The lined trench (216) is then filled with an insulating material (218). Thereafter, a layer of high-k gate material (232) is formed over at least a portion of the insulating material (218) and over at least a portion of the semiconductor material (202). The liner material (214) divides the layer of high-k gate material (232), which prevents the migration of oxygen over the active region of the semiconductor material (202).
CLAIMS

What is claimed is:

1. A method of manufacturing a semiconductor device structure (300), the method comprising: providing a substrate (200) having semiconductor material (202); forming an isolation trench (212) in the semiconductor material (202); lining the isolation trench (212) with a liner material (214) that substantially inhibits formation of high-k material thereon, resulting in a lined trench (216); at least partially filling the lined trench (216) with an insulating material (218); and forming a layer of high-k gate material (232) overlying at least a portion of the insulating material (218) and overlying at least a portion of the semiconductor material (202), such that the layer of high-k gate material (232) is divided by the liner material (214).

2. The method of claim 1, wherein: at least partially filling the lined trench (216) results in a filled isolation trench (220) and an exposed rim (222) of the liner material (214); and forming the layer of high-k gate material (232) comprises depositing the high-k gate material (232) over the filled isolation trench (220) such that the exposed rim (222) of the liner material (214) remains substantially void of the high-k gate material (232).

3. The method of claim 2, further comprising forming a metal gate layer (234) over the high-k gate material (232) and over the liner material (214).

4. The method of claim 3, further comprising forming a polysilicon gate layer (236) over the metal gate layer (234).

5. The method of claim 1, wherein lining the isolation trench (212) comprises lining the isolation trench (212) with a nitride material.

6. The method of claim 1, wherein at least partially filling the lined trench (216) comprises at least partially filling the lined trench (216) with an oxide material.

7. A semiconductor device (300) comprising: a layer of semiconductor material (202) having an active transistor region (308, 310) defined therein; an isolation trench (212) formed in the layer of semiconductor material (202) adjacent to the active transistor region (308, 310); a trench liner (214) lining the isolation trench (212), wherein the isolation trench (212) and the trench liner (214) together form a lined trench (216); an insulating material (218) in the lined trench (216); and a layer of high-k gate material (232) overlying at least a portion of the insulating material (218) and overlying at least a portion of the active transistor region (308, 310), the layer of high-k gate material (232) being divided by the trench liner (214).

8. The semiconductor device (300) of claim 7, wherein: the trench liner (214) comprises a material that substantially inhibits nucleation of high-k material thereon; and the layer of high-k gate material (232) is formed by deposition over the insulating material (218) and over the active transistor region (308, 310).

9. The semiconductor device (300) of claim 7, wherein the trench liner (214) forms an oxygen barrier between the high-k gate material (232) overlying the portion of the insulating material (218) and the high-k gate material (232) overlying the portion of the active transistor region (308, 310).

10. The semiconductor device (300) of claim 7, wherein: the trench liner (214) includes an upper rim (222); and the upper rim (222) is substantially void of the high-k gate material (232).
11. The semiconductor device (300) of claim 7, further comprising: a metal gate layer (234) overlying the high-k gate material (232) and overlying the trench liner (214); and a polysilicon gate layer (236) overlying the metal gate layer (234).

12. The semiconductor device (300) of claim 7, wherein the trench liner (214) is formed from a nitride material.

13. The semiconductor device (300) of claim 7, wherein the insulating material (218) is an oxide material.

14. A shallow trench isolation method for a semiconductor device structure (300), the method comprising: providing a semiconductor substrate (200) having a layer of semiconductor material (202), a pad oxide layer (208) overlying the layer of semiconductor material (202), and a pad nitride layer (210) overlying the pad oxide layer (208); forming an isolation trench (212) in the semiconductor substrate (200) by selective removal of a portion of the pad nitride layer (210), a portion of the pad oxide layer (208), and a portion of the layer of semiconductor material (202); depositing a liner material (214) in the isolation trench (212) and on exposed portions of the pad nitride layer (210), wherein the liner material (214) substantially inhibits nucleation of high-k material thereon; and depositing an insulating material (218) over the liner material (214) such that the insulating material (218) fills the isolation trench (212).

15. The method of claim 14, further comprising polishing the insulating material (218) to a height approximately corresponding to the liner material (214) overlying the pad nitride layer (210).

16. The method of claim 15, further comprising removing the pad nitride layer (210) and a portion of the liner material (214), leaving the insulating material (218) substantially intact.

17. The method of claim 16, wherein the removing step forms an exposed upper rim (222) of the liner material (214).

18. The method of claim 17, further comprising depositing a high-k gate material (232) over the insulating material (218), wherein the exposed upper rim (222) of the liner material (214) remains void of the high-k gate material (232).

19. The method of claim 18, wherein the liner material (214) blocks migration of oxygen from the high-k gate material (232).

20. The method of claim 18, further comprising: forming a metal gate layer (234) over the high-k gate material (232) and over the liner material (214); and forming a polysilicon gate layer (236) over the metal gate layer (234).
SEMICONDUCTOR DEVICE WITH ISOLATION TRENCH LINER, AND RELATED FABRICATION METHODS TECHNICAL FIELD [0001] Embodiments of the subject matter described herein relate generally to semiconductor devices. More particularly, embodiments of the subject matter relate to the use of isolation regions between metal oxide semiconductor transistors. BACKGROUND [0002] The majority of present day integrated circuits (ICs) are implemented by using a plurality of interconnected field effect transistors (FETs), which may be realized as metal oxide semiconductor field effect transistors (MOSFETs or MOS transistors). A MOS transistor may be realized as a p-type device (i.e., a PMOS transistor) or an n-type device (i.e., an NMOS transistor). Moreover, a semiconductor device can include both PMOS and NMOS transistors, and such a device is commonly referred to as a complementary MOS or CMOS device. A MOS transistor includes a gate electrode as a control electrode that is formed over a semiconductor substrate, and spaced-apart source and drain regions formed within the semiconductor substrate and between which a current can flow. The source and drain regions are typically accessed via respective conductive contacts formed on the source and drain regions. Bias voltages applied to the gate electrode, the source contact, and the drain contact control the flow of current through a channel in the semiconductor substrate between the source and drain regions beneath the gate electrode. Conductive metal interconnects (plugs) formed in an insulating layer are typically used to deliver bias voltages to the gate, source, and drain contacts. [0003] FIG. 1 is a simplified diagram of a CMOS transistor device structure 100 that has been fabricated using conventional techniques. The upper portion of FIG. 1 (FIG. 1A) represents a top view of device structure 100, and the lower portion of FIG. 1 (FIG. 1B) represents a cross section of device structure 100 as viewed from line 1B-1B in the upper portion of FIG. 1. Device structure 100 includes an n-type active region 102 of semiconductor material, a p-type active region 104 of semiconductor material, shallow trench isolation (STI) 106 surrounding and separating n-type region 102 and p-type region 104, and a gate structure 108 overlying n-type region 102, p-type region 104, and STI 106. Device structure 100 is formed on a silicon-on-insulator (SOI) substrate having a physical support substrate 110 and an insulating material 112 (typically a buried oxide) on support substrate 110. Gate structure 108 includes a gate insulator layer 114, which is formed from a dielectric material having a relatively high dielectric constant (i.e., a high-k material). Gate structure 108 also includes a gate metal layer 116 overlying gate insulator layer 114, and a layer of polycrystalline silicon 118 overlying gate metal layer 116. [0004] FIG. 2 is a detailed view of a region 120 of device structure 100 (this region 120 is surrounded by the dashed circle in FIG. 1). FIG. 2 shows a divot 122 that can be formed as a result of one or more process steps that lead to the formation of device structure 100. Gate insulator layer 114, gate metal layer 116, and polycrystalline silicon 118 generally follow the contour of divot 122 as they are formed. The arrows in FIG. 2 represent the liberation of oxygen from STI 106 into gate insulator layer 114. The diffusion of oxygen through the high-k gate insulator layer 114 and over p-type region 104 causes the "width effect," which can degrade device performance. 
Although not shown in FIG. 2, the oxygen also diffuses over the adjacent n-type region, which would be located to the right of the portion of STI 106 shown in FIG. 2. Notably, devices with shorter channel region lengths are more susceptible to the width effect. [0005] The width effect can be reduced using a number of known techniques. One known approach for reducing the width effect adds silicon to the high-k material. However, this complicates control of the dielectric deposition and adversely impacts scaling. Another known approach for reducing the width effect employs nitridation of the high-k material. However, excess nitridation degrades device performance and can adversely affect the threshold voltage of the device. Yet another approach utilizes oxygen scavenging metals to create the metal gate layer. Unfortunately, oxygen scavenging metals have inherent control issues, which lead to excess variability in the process. The width effect can also be addressed by attempting to minimize the amount of overlap between the underlying STI material and the high-k gate material. Such techniques require additional masking layers and might violate existing controls and rules mandated by the particular manufacturing process node. One additional approach encapsulates the STI material with a nitride diffusion barrier prior to the deposition of the high-k material. This approach is unproven, and it adds significant process complexity to the isolation module and variability to subsequent process modules. BRIEF SUMMARY [0006] A method of manufacturing a semiconductor device structure is provided. The method begins by providing a substrate having semiconductor material. An isolation trench is formed in the semiconductor material, and the trench is lined with a liner material that substantially inhibits formation of high-k material thereon. The lined trench is filled with an insulating material, over which is formed a layer of high-k gate material. The high-k gate material is formed such that it overlies at least a portion of the insulating material and at least a portion of the semiconductor material, and such that the layer of high-k gate material is divided by the liner material. [0007] A semiconductor device is also provided. The semiconductor device includes a layer of semiconductor material having an active transistor region defined therein, an isolation trench formed in the layer of semiconductor material adjacent to the active transistor region, a trench liner lining the isolation trench, an insulating material in the lined trench, and a layer of high-k gate material overlying at least a portion of the insulating material and overlying at least a portion of the active transistor region. The layer of high-k gate material is divided by the trench liner. [0008] Also provided is a shallow trench isolation method for a semiconductor device structure. This method begins by providing a semiconductor substrate having a layer of semiconductor material, a pad oxide layer overlying the layer of semiconductor material, and a pad nitride layer overlying the pad oxide layer. The method then forms an isolation trench in the semiconductor substrate by selective removal of a portion of the pad nitride layer, a portion of the pad oxide layer, and a portion of the layer of semiconductor material. A liner material is deposited in the isolation trench and on exposed portions of the pad nitride layer, wherein the liner material substantially inhibits nucleation of high-k material thereon. 
In addition, an insulating material is deposited over the liner material such that the insulating material fills the isolation trench. [0009] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. BRIEF DESCRIPTION OF THE DRAWINGS [0010] A more complete understanding of the subject matter may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures. [0011] FIG. 1 is a simplified diagram of a CMOS transistor device structure that has been fabricated using conventional techniques; [0012] FIG. 2 is a detailed view of a region of the CMOS transistor device structure shown in FIG. 1; [0013] FIGS. 3-12 are cross sectional views that illustrate the fabrication of a semiconductor device structure; and [0014] FIG. 13 is a cross sectional view of a semiconductor device structure fabricated in accordance with the process depicted in FIGS. 3-12. DETAILED DESCRIPTION [0015] The following detailed description is merely illustrative in nature and is not intended to limit the embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. [0016] For the sake of brevity, conventional techniques related to semiconductor device fabrication may not be described in detail herein. Moreover, the various tasks and process steps described herein may be incorporated into a more comprehensive procedure or process having additional steps or functionality not described in detail herein. In particular, various steps in the manufacture of semiconductor-based transistors are well known and so, in the interest of brevity, many conventional steps will only be mentioned briefly herein or will be omitted entirely without providing the well known process details. [0017] The techniques and technologies described herein may be utilized to fabricate MOS transistor devices, including NMOS transistor devices, PMOS transistor devices, and CMOS transistor devices. Although the term "MOS device" properly refers to a device having a metal gate electrode and an oxide gate insulator, that term will be used throughout to refer to any semiconductor device that includes a conductive gate electrode (whether metal or other conductive material) that is positioned over a gate insulator (whether oxide or other insulator) which, in turn, is positioned over a semiconductor substrate. [0018] The fabrication process described herein can be utilized to manufacture semiconductor devices having a high-k gate insulator and a metal gate overlying the high-k gate insulator. In particular, a semiconductor device fabricated in accordance with this process includes an STI liner that serves as an oxygen migration barrier between the STI oxide material and the high-k gate insulator. 
The STI liner eliminates (or significantly reduces) the diffusion of oxygen into that portion of the high-k gate insulator that overlies the active transistor region, thus minimizing the impact of the phenomenon known as the width effect. As described in more detail below, the STI liner material is selected such that the high-k material does not nucleate on the STI liner material, which causes the STI liner to separate the high-k gate insulator into a first section (located over the STI material) and a second section (located over the active transistor region). [0019] Referring now to FIG. 3, fabrication of a semiconductor device structure begins by providing an appropriate semiconductor substrate 200 having a layer of semiconductor material 202. This fabrication process represents one implementation of a trench isolation method that is suitable for use with a semiconductor device, such as a CMOS transistor device. For this embodiment, semiconductor substrate 200 is realized as a silicon-on-insulator (SOI) substrate, where semiconductor material 202 is disposed on a layer of insulator material 204 that, in turn, is supported by a carrier layer 206. More specifically, semiconductor material 202 is a silicon material, and insulator material 204 is a buried oxide layer. The term "silicon material" is used herein to encompass the generally monocrystalline and relatively pure silicon materials typically used in the semiconductor industry. Semiconductor material 202 can originally be either N-type or P-type silicon, but is typically P-type, and semiconductor material 202 is subsequently doped in an appropriate manner to form active regions. For this embodiment, insulator material 204 is realized as a layer of silicon oxide (SiO2). In alternate embodiments, the semiconductor device structure can be formed on a bulk silicon substrate rather than an SOI substrate. [0020] FIG. 3 depicts semiconductor substrate 200 after formation of a pad oxide layer 208 on semiconductor material 202, and after formation of a pad nitride layer 210 on pad oxide layer 208. The resulting structure includes pad oxide layer 208 overlying semiconductor material 202, along with pad nitride layer 210 overlying pad oxide layer 208. Conventional process steps can be used to arrive at the structure depicted in FIG. 3. For example, pad oxide layer 208 is grown to the desired thickness, then pad nitride layer 210 is deposited over pad oxide layer 208 using an appropriate chemical vapor deposition (CVD) technique. [0021] Semiconductor substrate 200 is then processed in an appropriate manner to form a suitably sized isolation trench 212 in semiconductor material 202 (FIG. 4). As depicted in FIG. 4, isolation trench 212 can be formed by selectively removing a portion of pad nitride layer 210, a portion of pad oxide layer 208, and a portion of semiconductor material 202. For this SOI implementation, formation of isolation trench 212 also involves the selective removal of a portion of insulator material 204 underlying semiconductor material 202. FIG. 4 depicts the state of semiconductor substrate 200 after completion of a number of known process steps, including photolithography, masking, and etching steps. Notably, isolation trench 212 extends into insulator material 204 to provide sufficient isolation between the portions of semiconductor material 202 on either side of isolation trench 212. [0022] Although other fabrication steps or sub-processes may be performed after the step in the process depicted in FIG. 
4, this example continues by lining isolation trench 212 with an appropriate liner material 214. Liner material 214 can be deposited in isolation trench 212 and on any exposed portions of pad nitride layer 210 using any suitable technique, such as CVD, low pressure CVD (LPCVD), or plasma enhanced CVD (PECVD). Although preferred embodiments utilize a CVD material, liner material 214 could be a thermally grown material in alternate embodiments. Notably, liner material 214 is a material that substantially inhibits formation of high-k materials thereon. In other words, the composition of liner material 214 is such that high-k materials (the deposition of which is highly surface selective) do not nucleate on exposed surfaces of liner material 214. In practice, liner material 214 is a dielectric material such as a nitride, preferably silicon nitride, and liner material 214 is formed with a typical thickness of about 20-100 Angstroms. [0023] As illustrated in FIG. 5, liner material 214 forms a lined trench 216 in semiconductor substrate 200. Although other fabrication steps or sub-processes may be performed after lining isolation trench 212, this example continues by at least partially filling lined trench 216 with a suitable insulating material, referred to herein as STI material 218 (FIG. 6). In practice, the dielectric STI material 218 fills lined trench 216 and is also formed over the other sections of liner material 214 (i.e., the sections overlying pad nitride layer 210) using, for example, an appropriate deposition technique such as CVD. In certain embodiments, STI material 218 is an oxide material, such as silicon dioxide deposited using tetraethyl orthosilicate (TEOS) as a silicon source (commonly referred to as TEOS oxide). As another example, silane is a very common precursor for the silicon source, and the resulting STI material 218 is commonly referred to as high density plasma (HDP) oxide. [0024] At the stage of the process depicted in FIG. 6, STI material 218 creates a filled isolation trench 220 in semiconductor substrate 200. Thereafter, STI material 218 is polished using, for example, a chemical mechanical polishing (CMP) tool. STI material 218 is preferably polished to a height approximately corresponding to the height of the liner material 214 overlying pad nitride layer 210. In practice, the nitride liner material 214 may serve as a CMP stop layer such that the top of STI material 218 is substantially continuous with the exposed surface of liner material 214. FIG. 7 illustrates the condition of semiconductor substrate 200 after STI material 218 has been polished or planarized to the desired height. [0025] Although other fabrication steps or sub-processes may be performed after polishing STI material 218, this example continues by removing pad nitride layer 210 and a portion of liner material 214, while leaving STI material 218 substantially intact (FIG. 8). The nitride and liner material can be removed using a technique that is selective to nitride, for example, a hot phosphoric acid strip. As depicted in FIG. 8, this step is controlled such that pad nitride layer 210 is completely removed and such that an exposed upper rim 222 of liner material 214 remains. Referring again to the top view of FIG. 1, upper rim 222 would roughly correspond to the boundary defined by the outline of region 102 or region 104. The selective nature of this stripping step ensures that STI material 218 and pad oxide layer 208 are not removed. 
Accordingly, the portion of liner material 214 underlying STI material 218 is protected. [0026] A number of process steps or sub-steps may be performed following completion of the step depicted in FIG. 8. For example, FIG. 9 depicts the state of semiconductor substrate 200 after further processing that may be needed prior to formation of the gate stack. Such further process steps may include, without limitation: removing pad oxide layer 208; forming a layer of sacrificial oxide 224 that replaces pad oxide layer 208; forming well implants with sacrificial oxide 224 in place; and wet etching. These process steps recess the height of STI material 218, but leave liner material 214 substantially intact. Moreover, STI material 218 may be subjected to an isotropic oxide etchant, resulting in divots 226 formed on the sides of STI material 218. Importantly, upper rim 222 of liner material 214 remains uncovered and exposed after semiconductor substrate 200 reaches the state shown in FIG. 9. [0027] Sacrificial oxide 224, which may be removed during the wet etching described above, is replaced with an interfacial insulator layer 228 (FIG. 10). Interfacial insulator layer 228 is preferably formed from an oxide material. FIG. 10 is a detailed view of a region 230 of semiconductor substrate 200 (this region 230 is surrounded by the dashed circle in FIG. 9). The scale used in FIG. 10 is exaggerated for ease of illustration. Moreover, although the height of upper rim 222 corresponds to the height of interfacial insulator layer 228 in the illustrated embodiment, liner material 214 may protrude above the height of interfacial insulator layer 228, or it may be level with the height of semiconductor material 202 and level with the height of STI material 218. [0028] Although other fabrication steps or sub-processes may be performed after formation of interfacial insulator layer 228, this example continues by forming a layer of high-k gate material 232 overlying at least a portion of semiconductor material 202 and overlying at least a portion of STI material 218. In practice, high-k gate material 232 can be deposited using any suitable technique, such as atomic layer deposition (ALD) or atomic layer chemical vapor deposition (ALCVD), which enables selective deposition of the high-k material on interfacial insulator layer 228 and on STI material 218, while resulting in little to no deposition on upper rim 222 of liner material 214. ALD and ALCVD are very surface-sensitive processes in that the exposed surface on which the high-k material is to be deposited must have certain material properties (e.g., chemical bonds and molecular structure); otherwise, the high-k material will not nucleate. In practice, high-k gate material 232 can be any material having a high dielectric constant relative to silicon dioxide, and such high-k materials are well known in the semiconductor industry. Depending upon the embodiment, high-k gate material 232 may be, without limitation: HfO2, ZrO2, HfZrOx, HfSiOx, HfSiON, HfTiOx, ZrTiOx, ZrSiOx, ZrSiON, HfLaOx, ZrLaOx, LaAlOx, La2O3, HfAlOx, ZrAlOx, Al2O3, Y2O3, MgO, DyO, TiO2, Ta2O5, or the like. High-k gate material 232 is preferably deposited to a thickness of about 14-22 Angstroms. [0029] As mentioned previously, liner material 214 is chosen to substantially inhibit nucleation of high-k materials thereon, and this property causes the exposed upper rim 222 to remain void (for all practical purposes) of high-k gate material 232, as depicted in FIG. 10. 
Notably, the layer of high-k gate material 232 is divided by liner material 214, and liner material 214 creates a discontinuity in the layer of high-k gate material 232. In the illustrated embodiment, the section of high-k material overlying interfacial insulator layer 228 terminates before it overlaps upper rim 222, and the section of high-k material overlying STI material 218 follows the contour of divot 226 and terminates at or near the sidewall of liner material 214. [0030] Although other fabrication steps or sub-processes may be performed after the deposition of high-k gate material 232, this example continues by completing the gate stack in a conventional manner. In this regard, a metal gate layer 234 is formed over high-k gate material 232 and over the exposed portions of liner material 214 (FIG. 11) and, thereafter, a polysilicon gate layer 236 is formed over metal gate layer 234 (FIG. 12). Unlike high-k gate material 232, metal gate layer 234 can and does form on the exposed surfaces of liner material 214. Accordingly, metal gate layer 234 generally follows the contour of high-k gate material 232 and liner material 214 near divot 226. Moreover, polysilicon gate layer 236 is deposited to a desired thickness such that it fills divot 226, as shown in FIG. 12. [0031] The arrows in FIG. 12 represent the liberation of oxygen from STI material 218 into high-k gate material 232. Unlike the conventional device structure shown in FIG. 2, the oxygen does not migrate or diffuse into the section of high-k gate material 232 that overlies semiconductor material 202. In other words, liner material 214 blocks the migration of oxygen from the section of high-k gate material 232 that overlies STI material 218. Consequently, liner material 214 can be used to reduce the width effect, which might otherwise degrade device performance (as explained above). It should be appreciated that even if a very thin layer of high-k gate material 232 forms on liner material 214, the migration of oxygen will be substantially impeded and, therefore, the same benefits will be obtained. [0032] After the stage in the fabrication process depicted in FIG. 12, any number of known process steps can be performed to complete the fabrication of the device structures. Moreover, the process techniques described herein can be utilized with a "gate first" process or with a "gate last" process (which replaces polysilicon gate layer 236 with a different metal material). [0033] FIG. 13 is a cross sectional view of a semiconductor device 300 fabricated in accordance with the process depicted in FIGS. 3-12. Most of the features and characteristics of semiconductor device 300 are similar or identical to those described above with reference to FIGS. 3-12, and such common features and characteristics will not be redundantly described in detail here. This embodiment of semiconductor device 300 is formed on an SOI substrate 302 having a support layer 304 and a buried oxide layer 306 overlying support layer 304. The layer of semiconductor material overlying buried oxide layer 306 has active transistor regions defined therein; FIG. 13 depicts an n-type active transistor region 308 and a p-type active transistor region 310. [0034] The active transistor regions 308 and 310 are separated by an adjacent isolation trench 312, which is formed in the layer of semiconductor material and in buried oxide layer 306. 
Isolation trench 312 is lined with a trench liner 314 (e.g., a nitride material), and an insulating material such as an STI oxide 316 is located in the lined trench. Semiconductor device 300 also includes a layer of high-k gate material 318 overlying at least a portion of STI oxide 316 and overlying at least a portion of active transistor regions 308 and 310. It is important to note that the layer of high-k gate material 318 is divided by trench liner 314 because, as described above, the high-k gate material 318 cannot nucleate on the upper rim of trench liner 314. For simplicity and ease of illustration, the interfacial oxide layer between high-k gate material 318 and the active transistor regions 308 and 310 (see FIG. 12), and the divots on either side of STI oxide 316, are not shown in FIG. 13. [0035] Semiconductor device 300 also includes a metal gate layer 320 overlying high-k gate material 318, and overlying the upper rim of trench liner 314. In addition, semiconductor device 300 includes a polysilicon gate layer 322 overlying metal gate layer 320. The combination of high-k gate material 318, metal gate layer 320, and polysilicon gate layer 322 may be referred to as a gate stack or a gate structure. The gate stack cooperates with active transistor regions 308 and 310 in a conventional manner to form NMOS and PMOS transistor devices. [0036] In lieu of a trench liner that inhibits nucleation of high-k material, a semiconductor device may employ a layer of high-k material that is formed in an alternative manner that still reduces the width effect. More specifically, the high-k material can be formed using a suitably controlled physical vapor deposition (PVD) technique. The PVD process will naturally form the high-k material over the exposed surface of the interfacial oxide and over the exposed surface of the STI oxide. However, due to the directional nature of the PVD process, the amount of high-k material formed on the vertical sidewall of the divot (see FIG. 2) will be significantly less than the amount of high-k material formed elsewhere. Consequently, the very thin layer of high-k material on this sidewall of the divot will impede the migration of oxygen from the STI oxide side to the side overlying the active transistor region. [0037] While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or embodiments described herein are not intended to limit the scope, applicability, or configuration of the claimed subject matter in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the described embodiment or embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope defined by the claims, which includes known equivalents and foreseeable equivalents at the time of filing this patent application.
A test cell and method of operation are disclosed. The test cell may be cascaded with other test cells to form a test structure that spans across any number of slices and/or tiles in a programmable logic device. The test structure behaves like a register, and may be used to test direct interconnects and any number of their fan-out lines simultaneously.
1. A test structure including a first plurality of cascaded test cells, each of the test cells comprising:a flip-flop having an input and an output;an XOR gate having a first input coupled to a corresponding fan-out of a direct line, a second input coupled to an output terminal of the test structure, and an output; anda multiplexer having a first input coupled to the output of the flip-flop, a second input coupled to the output of the flip-flop in a previous test cell, a select terminal coupled to the output of the XOR gate, and an output coupled to the input of the flip-flop.2. The test structure of claim 1, wherein the XOR gate and the multiplexer are implemented in a function generator.3. The test structure of claim 2, wherein the function generator comprises a look-up table.4. The test structure of claim 1, wherein the output of the flip-flop in the final test cell is connected to the output terminal of the test structure.5. The test structure of claim 4, wherein the second input of the multiplexer in the first test cell receives the output of the flip-flop in the final test cell.6. The test structure of claim 1, wherein the flip-flops in adjacent test cells are initialized to opposite logic states.7. The test structure of claim 6, wherein the second input of the multiplexer in the first test cell comprises an inverter when there are an odd number of test cells cascaded together.8. The test structure of claim 1, wherein the first plurality of test cells are implemented within a first tile of a programmable logic device (PLD).9. The test structure of claim 8, wherein the direct line provides direct signal routing between the first tile and another tile of the PLD.10. The test structure of claim 9, further comprising a second plurality of test cells implemented in a second tile of the PLD, wherein the first and second plurality of test cells are cascaded together.11. A test structure implemented within a programmable logic device (PLD), the test structure including a plurality of cascaded test cells, and generating an output test signal responsive to an input test signal, each of the test cells comprising:a flip-flop having an input and an output;an XOR gate having a first input to receive the input test signal, a second input to receive the output test signal, and an output to generate a select signal; anda multiplexer having a first input coupled to the output of the flip-flop, a second input coupled to the output of the flip-flop in a previous test cell, a select terminal to receive the select signal, and an output coupled to the input of the flip-flop.12. The test structure of claim 11, wherein the first input of each XOR gate receives the input test signal from a corresponding fan-out of a direct line.13. The test structure of claim 12, wherein the direct line provides direct signal routing between selected tiles of the PLD.14. The test structure of claim 11, wherein the XOR gate and multiplexer are implemented in a function generator.15. The test structure of claim 14, wherein the function generator comprises a look-up table.16. The test structure of claim 11, wherein pairs of the test cells are implemented in corresponding tile slices of the PLD.17. The test structure of claim 16, wherein the test cells form a shift register.18. The test structure of claim 17, wherein the shift register spans a plurality of tiles of the PLD.19. 
A test structure including a plurality of test cells implemented in a tile of a programmable logic device (PLD) and connected together to generate an output test signal responsive to an input test signal, each of the test cells modeling a register and comprising:a flip-flop to store a test bit corresponding to the test cell;a first input to receive the input test signal;a second input to receive the output test signal;a third input to receive the test bit; anda fourth input to receive an output of a previous test cell;wherein the test cell maintains the test bit in its current logic state when the input test signal equals the output test signal and toggles the test bit when the input test signal does not equal the output test signal.20. The test structure of claim 19, wherein the first input of each test cell comprises a corresponding fan-out of a direct line coupled to the tile.21. The test structure of claim 19, wherein the test structure is connected to other test structures implemented in other tiles of the PLD.22. The test structure of claim 19, wherein the test cell further comprises:an XOR gate having first and second inputs to receive the test input and test output signals, respectively, and having an output to generate a select signal; anda multiplexer having a first input to receive the test bit, a second input to receive the test bit corresponding to the previous test cell, a select terminal to receive the select signal, and an output coupled to an input of the flip-flop.23. The test structure of claim 22, wherein the XOR gate and the multiplexer are implemented within a look-up table.24. The test structure of claim 19, wherein the test cell further comprises:an XNOR gate having first and second inputs to receive the test input and test output signals, respectively, and having an output coupled to a clock enable terminal of the flip-flop; andan inverter coupled between an input and an output of the flip-flop.25. The test structure of claim 24, wherein the XNOR gate and the inverter are implemented within a look-up table.26. The test structure of claim 19, wherein the test cell further comprises:an XNOR gate having first and second inputs to receive the test input and test output signals, respectively, and having an output coupled to a set/reset terminal of the flip-flop; andan inverter coupled between an input and an output of the flip-flop.27. The test structure of claim 26, wherein the XNOR gate and the inverter are implemented within a look-up table.28. A test structure including a plurality of test cells connected in a chain to generate an output test signal responsive to an input test signal, each of the test cells comprising:a storage element for storing a test bit corresponding to the test cell;means for comparing the input test signal to the output test signal to generate a select signal; andmeans for providing either the test bit or the test bit corresponding to a previous test cell to the storage element in response to the select signal.29. The test structure of claim 28, wherein the means for comparing and the means for providing are implemented in a look-up table within a tile of a programmable logic device (PLD).30. The test structure of claim 29, wherein the storage element comprises a flip-flop.31. The test structure of claim 29, wherein each test cell receives the input test signal from a corresponding fan-out of a direct line coupled to the tile.32. The test structure of claim 29, wherein the test cells are cascaded across a plurality of tile slices.33. 
In a programmable logic device (PLD), a method of testing a direct line and any number of its fan-outs for faults, comprising:implementing a plurality of test cells within a tile of the PLD;cascading the test cells together to form a shift register;connecting one input of each test cell to a corresponding fan-out of the direct line;within each test cell:comparing a test signal to an output signal of the shift register to generate a select signal; andselectively toggling a test bit corresponding to the test cell in response to the select signal; andinitializing the test bits of adjacent test cells to opposite logic states.34. The method of claim 33, further comprising:implementing a pair of the test cells in each of a plurality of slices within a tile of the PLD.35. The method of claim 34, further comprising:cascading the test cells in the tile to other test cells implemented in other tiles of the PLD.36. The method of claim 33, further comprising:providing a test signal simultaneously to each of the test cells using the corresponding fan-outs of the direct line; andshifting the test signal through the shift register.37. The method of claim 33, wherein the comparing and selectively toggling are implemented within a function generator.38. The method of claim 37, wherein the function generator comprises a look-up table.
FIELD OF INVENTION The present invention relates generally to testing semiconductor circuits, and specifically to testing interconnections among integrated circuit elements. DESCRIPTION OF RELATED ART A programmable logic device (PLD) is a general purpose device that can be programmed by a user to implement a variety of selected functions. One type of PLD is the Field Programmable Gate Array (FPGA), which typically includes an array of configurable logic blocks (CLBs) surrounded by a plurality of input/output blocks (IOBs). The CLBs are individually programmable and can be configured to perform a variety of logic functions on a few input signals. The IOBs can be configured to drive output signals from the CLBs to external pins of the FPGA and/or to receive input signals from the external FPGA pins. The FPGA also includes a general interconnect structure that can be programmed to selectively route signals among the various CLBs and IOBs to produce more complex functions of many input signals. The CLBs, IOBs, and the general interconnect structure are programmed by loading configuration data into associated memory cells that control various switches and multiplexers within the CLBs, IOBs, and the interconnect structure to implement logic and routing functions specified by the configuration data. Early CLB architectures typically included two function generators, each having four inputs to receive signals from other CLBs and/or IOBs, and at least one output that may be connected to a corresponding flip-flop. These function generators, which are typically implemented as look-up tables, may be programmed to perform any Boolean logic function of their inputs. The two function generators and their corresponding flip-flops are commonly referred to as a CLB slice. Further information regarding various FPGA architectures may be found in "The Programmable Logic Databook 1998," Chapter 4, pages 1-374, available from Xilinx, Inc. of San Jose, Calif., and incorporated by reference herein. As FPGAs become larger and more complex, the general interconnect structure must also become larger and more complex. To improve performance, direct line connections are provided to route signals between adjacent CLBs without having to access the general interconnect structure. To ensure proper operation and reliability, these direct lines between CLBs are tested for stuck-at-high faults and stuck-at-low faults. As known in the art, stuck-at-high faults may be detected using logic OR functions, and stuck-at-low faults may be detected using logic AND functions. Thus, conventional techniques for testing the direct lines and their fan-outs typically involve configuring an upper portion of a slice (e.g., a first function generator) to implement a logic AND function, and configuring a lower portion of the slice (e.g., a second function generator) to implement a logic OR function. The AND function generates an output signal indicative of stuck-at-low faults for a first fan-out, and the OR function generates an output signal indicative of stuck-at-high faults for a second fan-out. Then, the configurations of the function generators are reversed such that the upper portion of the slice tests the first fan-out for stuck-at-high faults and the lower portion of the slice tests the second fan-out for stuck-at-low faults. For each configuration, the output signals are routed to some observable point and examined to determine whether any faults exist.
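A short sketch can make this conventional mechanism concrete. The following Python fragment is illustrative only (the function names and the four-input width are assumptions of the sketch, not drawn from any particular device): a function generator configured as an AND exposes a stuck-at-low input when all inputs are driven high, while an OR exposes a stuck-at-high input when all inputs are driven low.

def lut_and(inputs):
    # Function generator configured as a logic AND of its inputs.
    return int(all(inputs))

def lut_or(inputs):
    # Function generator configured as a logic OR of its inputs.
    return int(any(inputs))

def with_fault(inputs, idx=None, stuck=None):
    """Model a faulty fan-out by forcing input idx to a stuck value."""
    vals = list(inputs)
    if idx is not None:
        vals[idx] = stuck
    return vals

# Drive all inputs high: a fault-free AND reads 1; a stuck-at-low input reads 0.
print(lut_and(with_fault([1, 1, 1, 1])))                  # 1 (no fault)
print(lut_and(with_fault([1, 1, 1, 1], idx=2, stuck=0)))  # 0 (stuck-at-low found)

# Drive all inputs low: a fault-free OR reads 0; a stuck-at-high input reads 1.
print(lut_or(with_fault([0, 0, 0, 0])))                   # 0 (no fault)
print(lut_or(with_fault([0, 0, 0, 0], idx=1, stuck=1)))   # 1 (stuck-at-high found)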
Although this technique is effective in detecting stuck-at faults, two test configurations are required to test each direct line fan-out. Thus, as CLB architectures become more complex and include increasing numbers of slices, the number of direct line fan-outs also increases, which in turn results in a corresponding increase in the number of test patterns required for fault testing. SUMMARY An apparatus and method are disclosed that allow any number of the fan-outs associated with a direct line connected between adjacent CLB tiles to be tested simultaneously for faults. In accordance with the present invention, a test cell is implemented in a portion of a CLB slice using existing slice resources. For some embodiments, the test cell is implemented using a function generator and a flip-flop in the slice. The test cell receives an input signal (e.g., a test vector) via a corresponding direct line fan-out and generates a registered output signal that tracks changes in the input signal. The test cell behaves like a register, and therefore can be connected with another test cell implemented in the slice to form a 2-bit test structure, with each test cell receiving the input signal via a corresponding direct line fan-out. The 2-bit test structure generates a single output signal that tracks the input signal, and thus also behaves like a register. As a result, these test structures can be implemented in various slices and then connected together to form a shift register that spans an entire CLB tile, thereby allowing a direct line and all of its slice fan-outs to be simultaneously tested using the same configuration. In this manner, test structures of the present invention can significantly reduce the number of test vectors required to test the direct line fan-outs, thereby reducing testing time and costs. Further, the test structures implemented in the various CLB tiles can be connected together to form a shift register that spans an entire CLB array, thereby allowing a test vector to be shifted through any number of CLB tiles. For other embodiments, test cells of the present invention can be used to test the flip-flop inputs within CLB slices. Thus, for one embodiment, the test cells can be used to test the flip-flop clock enable inputs. For another embodiment, the test cells can be used to test the flip-flop set and/or reset inputs. BRIEF DESCRIPTION OF THE DRAWINGS The features and advantages of the present invention are illustrated by way of example and are by no means intended to limit the scope of the present invention to the particular embodiments shown, in which:FIG. 1 is a block diagram illustrating the general layout of an FPGA within which embodiments of the present invention can be implemented;FIG. 2 is a simplified block diagram illustrating representative interconnections between some of the CLB tiles of the FPGA layout of FIG. 1;FIG. 3 is a simplified block diagram of a CLB tile of FIG. 2;FIG. 4 is a simplified block diagram of one embodiment of a slice within the tile embodiment of FIG. 3;FIG. 4A is a block diagram illustrating the fan-outs for a direct line connection to the tile of FIG. 3;FIG. 5 is a circuit diagram illustrating one embodiment of a test cell that can be implemented within the slice of FIG. 4;FIG. 6 is a qualitative timing diagram for an exemplary test vector applied to the test cell of FIG. 5;FIG. 
7 is a circuit diagram illustrating the cascade connection of two of the test cells of FIG. 5;FIG. 8 is a qualitative timing diagram for an exemplary test vector applied to the test structure of FIG. 7;FIG. 9 is a circuit diagram illustrating another embodiment of a test cell that can be implemented within the slice of FIG. 4;FIG. 9A is a circuit diagram illustrating the cascade connection of two of the test cells of FIG. 9;FIG. 10 is a circuit diagram illustrating yet another embodiment of a test cell that can be implemented within the slice of FIG. 4;FIG. 10A is a circuit diagram illustrating the cascade connection of two of the test cells of FIG. 10;FIG. 11 is a circuit diagram illustrating still another embodiment of a test cell that can be implemented within the slice of FIG. 4; andFIG. 11A is a circuit diagram illustrating the cascade connection of two of the test cells of FIG. 11. Like reference numerals refer to corresponding parts throughout the drawing figures. DETAILED DESCRIPTION Embodiments of the present invention are discussed below in the context of testing the direct interconnect lines between adjacent tiles of an FPGA for simplicity only. It is to be understood that embodiments of the present invention are equally applicable for testing other interconnect lines, and can be used for testing any suitable device that has repeatable logic tiles including, for example, programmable logic arrays (PLAs) and complex programmable logic devices (CPLDs). In the following description, for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that these specific details may not be required to practice the present invention. In other instances, well-known circuits and devices are shown in block diagram form to avoid obscuring the present invention. Additionally, the interconnection between circuit elements or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be a single signal line, and each of the single signal lines may alternatively be a bus. Further, the logic levels assigned to various signals in the description below are arbitrary and, thus, may be modified (e.g., reversed polarity) as desired. Accordingly, the present invention is not to be construed as limited to specific examples described herein but rather includes within its scope all embodiments defined by the appended claims. FIG. 1 illustrates the general layout of IOBs, CLBs, and block RAMs (BRAMs) of a field programmable gate array (FPGA) 100 within which some embodiments of the present invention can be implemented. IOBs 102 are well-known, and are located around the perimeter of FPGA 100. CLBs 104 are well-known, and are arranged in columns in FPGA 100. Block RAMs 106 are well-known, and are arranged in columns between adjacent CLB columns. Well-known general interconnect circuitry (not shown for simplicity) is provided to programmably connect the IOBs 102, CLBs 104, and block RAMs 106. Corner blocks 108 are well-known, and can contain configuration circuitry and/or can be used to provide additional routing resources. Although a particular FPGA layout is illustrated in FIG. 1, it is to be understood that many other FPGA layouts are possible, and are considered to fall within the scope of the present invention. 
For example, other embodiments can have other numbers of IOBs 102, CLBs 104, and block RAMs 106, and can have other types of blocks, such as multipliers or processors. A more detailed description of the general operation of FPGA 100 is provided in "The Programmable Logic Databook 1998," Chapter 4, pages 1-374, available from Xilinx, Inc. of San Jose, Calif., and incorporated by reference herein. FIG. 2 shows a portion 200 of FPGA 100 in more detail. FPGA portion 200 is shown to include nine CLB tiles 202, where each CLB tile includes a CLB 104, a switch matrix 204, and signal lines 206 connecting the CLB 104 to the switch matrix 204. Switch matrix 204 is well-known, and connects the CLB 104 to other tiles using general routing resources. For some embodiments, switch matrix 204 is of the type disclosed in U.S. Pat. No. 6,292,022 to Young et al., which is incorporated herein by reference, although other switch matrices can be used. Signal lines 208, which extend across one CLB tile to connect switch matrices 204 in adjacent tiles 202, are commonly known as single-length lines. Interconnect lines 210, which extend across multiple CLB tiles and can be selectively connected to switch matrices 204 via well-known programmable interconnect points (PIPs) 212, are representative of intermediate length lines (e.g., double-length lines, quad-length lines, hex-length lines, octal-length lines, and so on) and long lines (e.g., lines that extend across an entire row or column of FPGA 100). For simplicity, only two lines 210 are shown. Together, lines 208, lines 210, and PIPs 212 form the general interconnect circuitry for FPGA 100. For one embodiment, the general interconnect circuitry is of the type disclosed in U.S. Pat. No. 5,469,003 to Kean, which is incorporated herein by reference. For another embodiment, the tile-based interconnect structure disclosed in U.S. Pat. No. 5,581,199 to Pierce, which is incorporated herein by reference, can be used. For other embodiments, other general interconnect circuitry can be used. In addition, FPGA 100 includes a number of signal lines 214 that route signals directly between CLBs 104 in adjacent tiles, e.g., without going through switch matrices 204 and the general interconnect circuitry. Direct lines 214 are well-known, and improve performance by providing fast signal routing between CLBs. For some embodiments, direct lines 214 are of the type disclosed in U.S. Pat. No. 4,642,487 to Carter, which is incorporated herein by reference, although other direct lines can be used. As shown in FIG. 2, each CLB 104 is connected directly via lines 214 to adjacent CLBs in the same row and the same column, e.g., to CLBs in adjacent tiles to the north, south, east, and west. For example, CLB 104A includes direct connections to the CLBs 104 in adjacent tiles to the north, south, east, and west via direct lines 214N, 214S, 214E, and 214W, respectively. However, although not shown in FIG. 2, each CLB 104 can include direct lines connecting to the CLBs in diagonally adjacent tiles, i.e., to CLBs in tiles to the northeast, northwest, southwest, and southeast. FIG. 3 is a block diagram of a CLB 300 that is one embodiment of CLB 104 of FIG. 2. CLB 300 includes an input multiplexer (IMUX) 302, a configurable logic element (CLE) 304, and an output multiplexer (OMUX) 306. IMUX 302 is well-known, and routes input signals to CLE 304 via lines 308. IMUX 302 receives input signals from switch matrix 204 (not shown in FIG. 
3) via lines 206, and receives input signals from adjacent tiles via four direct input lines 214. For other embodiments, other numbers of direct lines 214 can be provided to CLB 300. CLE 304 includes four well-known CLE slices that can be programmed to implement various logic functions and/or memory elements. For other embodiments, CLE 304 can include other numbers of slices. OMUX 306 is well-known, and routes output signals from CLE 304 via lines 310. OMUX 306 provides output signals from CLE 304 to switch matrix 204 (not shown in FIG. 3) via lines 206 and to adjacent tiles via four direct connect lines 214. The CLE output signals provided by OMUX 306 on lines 206 can be routed back to corresponding IMUX 302 or to other tiles via the general interconnect circuitry. Feedback lines 312 are provided to route selected output signals from CLE 304 to IMUX 302 via OMUX 306 without going through the general interconnect circuitry. Further, although not shown in FIG. 3 for simplicity, additional feedback lines can be provided to route signals from CLE slices 304 to IMUX 302 without going through OMUX 306. As mentioned above, IMUX 302, CLE slices 304, and OMUX 306 are well-known logic structures. For some embodiments, IMUX 302, CLE slices 304, and OMUX 306 are of the type disclosed in U.S. Pat. No. 6,292,022 to Young et al., which is referenced above. For other embodiments, other well-known structures can be used for IMUX 302, CLE slices 304, and OMUX 306. Thus, for example, the CLE slices described in the "Virtex-II Pro Platform FPGA Handbook," October 2002, pages 49-64, available from Xilinx, Inc. of San Jose, Calif., which is incorporated herein by reference, can be used in present embodiments. FIG. 4 is a simplified functional diagram of a CLE slice 400 that is one embodiment of CLE slices 304 of FIG. 3. Slice 400 is shown to include two function generators F and G, and two flip-flops FF1 and FF2. Function generators F and G are well-known 4-input look-up tables that can be configured to implement any Boolean logic function of their input signals to generate respective output signals X and Y. For some embodiments, each function generator F and G can also be configured as a 16-bit shift register or as a 16×1 RAM. The four inputs F1-F4 of function generator F and the four inputs G1-G4 of function generator G can receive input signals from lines 206, from direct lines 214, and/or from feedback lines 312 via IMUX 302 (see also FIG. 3). Flip-flop FF1 has a data input D to receive an output signal X from function generator F, and provides a registered output signal XQ. Similarly, flip-flop FF2 has a data input D to receive an output signal Y from function generator G, and provides a registered output signal YQ. In addition, flip-flops FF1 and FF2 each have a clock input (>) to receive a clock signal CLK, a clock enable input CE to receive a clock enable signal CLK-EN, and S/R inputs to receive set and reset signals, respectively. Other well-known elements of slice 400 such as additional function generators, signal routing multiplexers, cascade logic, fast carry chains, and control signals, are not shown in FIG. 4 for simplicity. For more information on the architecture and operations of slice 400, refer to the above-referenced "The Programmable Logic Databook 1998," Chapter 4, pages 1-374. FIG. 4A illustrates the direct line fan-outs for an exemplary embodiment of FIGS. 
3 and 4, where each direct line 214 can be connected to a corresponding F input and a corresponding G input in all four slices 400 of CLB 300. For example, a first direct line DL1 is shown coupled to the first input F1 of each function generator F in each slice S0-S3 and to the first input G1 of each function generator G in each slice S0-S3, thereby resulting in 8 fan-outs for the direct line DL1. Although not shown in FIG. 4A for simplicity, the other direct lines DL2-DL4 are each connected to corresponding F inputs and G inputs of each slice S0-S3, and therefore also have 8 fan-outs. Of course, for other embodiments, each direct line 214 can have a fewer or a greater number of fan-outs. In accordance with the present invention, corresponding pairs of function generators and flip-flops within CLE slices are configured to implement test cells that can be connected together to form a shift register (e.g., a test counter) that can be used to test a direct line and any number of its fan-outs simultaneously. FIG. 5 illustrates one embodiment of a test cell that can be implemented in either the upper or lower portion of slice 400. Test cell 500 is implemented using a function generator 510 and a flip-flop 520. Function generator 510 can be either function generator F or G of slice 400, and flip-flop 520 can be the corresponding flip-flop FF1 or FF2 of slice 400. Function generator 510, which is configured to implement a clock enable circuit for flip-flop (FF) 520, has a first input I1 to receive an input test signal IN provided by a selected direct line fan-out, and three inputs I2-I4 to receive an output test bit OUT from corresponding flip-flop (FF) 520. Thus, inputs I1-I4 can correspond to either inputs F1-F4 of function generator F or inputs G1-G4 of function generator G. The output signal OUT can be routed from FF 520 to function generator 510 through OMUX 306 and IMUX 302 via feedback lines 312, or through lines 206 and the general interconnect circuitry (see also FIG. 3). For this configuration, FF 520 has its clock enable input tied to Vcc (e.g., asserted to logic high), and has its S/R inputs grounded (e.g., de-asserted to logic low). The clock enable circuit implemented by function generator 510 is functionally represented by a multiplexer (MUX) 512 and an XOR gate 514 configured as shown in FIG. 5. MUX 512 has a first input to receive OUT from FF 520 via I4, a second, inverted input to receive OUT via I3 (thereby presenting its complement, denoted OUT-bar), an output coupled to the D-input of FF 520, and a select terminal to receive a select signal SEL generated by XOR gate 514. The second input to MUX 512 is inverted so that the two input signals to MUX 512 are opposite logic states. XOR gate 514 has inputs to receive IN via I1 and OUT via I2, and an output to generate SEL. During operation, XOR gate 514 compares IN and OUT to generate SEL. Specifically, if IN equals OUT, then XOR gate 514 drives SEL to logic low, and MUX 512 provides OUT to the D-input of FF 520, thereby maintaining OUT in its current state. Conversely, if IN does not equal OUT, XOR gate 514 drives SEL to logic high, and MUX 512 provides OUT-bar to the D-input of FF 520, thereby toggling OUT to the opposite state. Accordingly, the test cell 500 of FIG. 5 behaves like a register, whereby FF 520 updates (e.g., toggles) the test bit OUT only when the sampled input signal IN changes.
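This hold-or-toggle behavior can be restated compactly in executable form. The following Python fragment is a behavioral sketch only (the function name and the discrete, positive-edge clocking model are assumptions of the sketch, not elements of the disclosed circuit):

def test_cell_step(in_bit, out_bit):
    """One clock cycle of test cell 500; returns the new test bit OUT."""
    sel = in_bit ^ out_bit                  # XOR gate 514 compares IN with OUT
    d = (1 - out_bit) if sel else out_bit   # MUX 512 selects OUT-bar or OUT
    return d                                # clocked into FF 520 at the edge

out = 0                                     # FF 520 initialized to logic low
trace = []
for in_bit in [0, 0, 1, 1, 0]:              # the exemplary test vector "00110"
    out = test_cell_step(in_bit, out)
    trace.append(out)
print(trace)                                # [0, 0, 1, 1, 0]: OUT tracks IN

Note that, fault-free, the cell reduces to D = IN, which is why it behaves like a register; a fan-out stuck at either logic level instead freezes OUT at the stuck value after at most one clock.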
To test the path from the direct line fan-out at I1 to the FF output Q for stuck-at faults, SEL is set to logic high, and IN is toggled from logic low to logic high, and then from logic high to logic low. Conversely, to test the feedback path from I4 to output Q for stuck-at faults, SEL is set to logic low, and IN is maintained at logic high for two test clock cycles and then transitioned to logic low for two test cycles. In this manner, a test vector can be shifted through test cell 500 and observed to detect faults in the direct line fan-out under test. For example, one test vector that tests both paths through test cell 500 for stuck-at faults can be "00110," the application of which to test cell 500 is illustrated in the qualitative timing diagram of FIG. 6 (of course, other suitable test vectors can be used). Referring to FIG. 6, FF 520 is initialized to logic low so that OUT=0. The test vector is then applied to test cell 500 from a direct line fan-out via I1. Prior to time t1, the initial logic low value of IN is provided to I1. Thus, because IN=OUT=0, XOR gate 514 drives SEL to logic low, and MUX 512 routes the logic low signal OUT to the D-input of FF 520. At time t1, the logic low signal from MUX 512 is clocked into FF 520. At time t2, IN remains logic low, which causes MUX 512 to recycle the logic low signal OUT to FF 520, thereby maintaining OUT in the logic low state. Prior to time t3, IN toggles to logic high, which causes XOR gate 514 to drive SEL to logic high. In response thereto, MUX 512 routes the inverted output signal (i.e., OUT-bar = 1) to FF 520. At time t3, the rising clock edge clocks the logic high signal into FF 520, thereby updating OUT to logic high. Because OUT equals IN (i.e., IN=OUT=1), XOR gate 514 returns SEL to logic low. IN is maintained in the logic high state for another clock cycle, which causes XOR gate 514 to maintain SEL in its logic low state. Thus, at time t4, IN remains logic high, which causes MUX 512 to recycle the logic high signal OUT to FF 520, thereby maintaining OUT in the logic high state. Just before time t5, IN toggles to logic low, which causes XOR gate 514 to drive SEL to logic high. In response thereto, MUX 512 routes the inverted output signal (i.e., OUT-bar = 0) to FF 520. At time t5, the rising clock edge clocks the logic low value into FF 520, which in turn updates OUT to logic low. In response thereto, XOR gate 514 returns SEL to logic low. For other embodiments, FF 520 can be a negative edge-triggered device. Test cell 500 can be connected (e.g., cascaded) with another test cell in the same slice 400 to form a 2-bit test register capable of simultaneously testing 2 direct line fan-outs for faults. For example, FIG. 7 illustrates the cascade configuration of two test cells 500A and 500B that can be implemented in slice 400 of FIG. 4, where logic function 510A is implemented in the F function generator and logic function 510B is implemented in the G function generator. The input signal IN is applied on a direct line 214 under test, and simultaneously provided to input F1 of test cell 500A and to input G1 of test cell 500B via corresponding fan-outs FO-A and FO-B, respectively. The test structure's output signal OUT is generated at register output YQ and provided as feedback to XOR gates 514A and 514B via inputs F2 and G2, respectively. 
XOR gates 514A and 514B compare IN and OUT to generate multiplexer select signals SEL1 and SEL2, respectively. MUX 512A has a first input to receive a first test bit OUT1 from FF 520A's output XQ via input F4, a second input to receive the test structure output signal OUT (e.g., a second test bit OUT2) from FF 520B's output YQ via input F3, an output coupled to the D-input of FF 520A, and a select terminal to receive SEL1. MUX 512B has a first input to receive OUT from FF 520B via input G4, a second input to receive OUT1 from FF 520A via input G3, an output coupled to the D-input of FF 520B, and a select terminal to receive SEL2. For simplicity, the clock enable and S/R signals for FF 520A and FF 520B are not shown in FIG. 7.

Thus, during test operations, the input test signal IN is simultaneously provided to test cells 500A and 500B via respective function generator inputs F1 and G1 to generate a single output test signal OUT at slice output YQ. The output test signal OUT at YQ can then be observed (e.g., compared to the input test signal) to detect faults in both direct line fan-outs FO-A and FO-B. The various feedback signals can be routed from FF outputs XQ and YQ to the F2-F4 inputs of test cell 500A and to the G2-G4 inputs of test cell 500B using feedback lines 312 and/or the general interconnect circuitry described above with respect to FIGS. 2 and 3, thereby not interfering with the fan-outs under test.

The fan-outs of other direct lines 214 can be simultaneously tested in a similar manner by re-configuring the test cells 500A and 500B to receive the input test vector IN from other direct lines. For example, to test the fan-outs of a second direct line, test cells 500A and 500B can be re-configured so that IN is provided to XOR gates 514A and 514B via function generator inputs F2 and G2, respectively. For some embodiments, the same test vectors may be used to test each of the direct lines and their fan-outs.

FIG. 8 shows a qualitative timing diagram for an exemplary test vector "01100110" applied to test structure 700, where the second FF 520B is initialized to logic low and the first FF 520A is initialized to logic high. In addition, note that the F3 input to MUX 512A and the G3 input to MUX 512B are non-inverted. In this manner, the first and second inputs to MUXes 512A and 512B are opposite logic states.

Because test structure 700 behaves like a register in a manner similar to that of test cell 500, any number of test structures 700 (as well as individual test cells 500) can be cascaded together across slice boundaries to form a test register that can be used to test any number of direct line fan-outs simultaneously for faults. For example, referring also to FIG. 4A, test structures 700 implemented in all four slices S0-S3 of an FPGA can be connected together as described above to form a shift register that spans an entire CLB tile to generate a single output signal, thereby allowing all 8 fan-outs of a direct line to be simultaneously tested for faults. Further, the test cells 500 and/or test structures 700 implemented in various CLB tiles can be connected together to form a shift register that spans an entire CLB array.

In general, when cascading multiple test cells 500 together to form a shift register, adjacent test cells are initialized to opposite states, as illustrated by the sketch following this paragraph.
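As a behavioral illustration only (again assuming the SEL polarity derived earlier; names are invented and nothing here is part of the disclosed hardware), the two-cell structure 700 of FIG. 7 can be modeled in Python. When IN differs from OUT, both MUXes select the other flip-flop's output, swapping the two test bits; because the bits start in opposite states, the swap toggles the observed output OUT at YQ:

    def test_structure_step(in_bit, out1, out2):
        # out1 models FF 520A (XQ); out2 models FF 520B (YQ), the observed OUT.
        sel = in_bit ^ out2          # SEL1 = SEL2 = IN XOR OUT
        if sel:
            return out2, out1        # shift: FF 520A <- YQ, FF 520B <- XQ
        return out1, out2            # hold both test bits

    out1, out2, observed = 1, 0, []  # adjacent cells start in opposite states
    for in_bit in [0, 1, 1, 0, 0, 1, 1, 0]:   # the "01100110" vector of FIG. 8
        out1, out2 = test_structure_step(in_bit, out1, out2)
        observed.append(out2)
    print(observed)                  # a fault-free structure reproduces the vector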
Thus, for some embodiments, the flip-flop in the last test cell is initialized to logic zero, the flip-flop in the second-to-last test cell is initialized to logic one, the flip-flop in the third-to-last test cell is initialized to logic zero, and so on, as described above for the exemplary operation of the test structure embodied in FIG. 7. For other embodiments, the flip-flop in the last test cell is initialized to logic one, the flip-flop in the second-to-last test cell is initialized to logic zero, the flip-flop in the third-to-last test cell is initialized to logic one, and so on. Further, if there are an odd number of test cells that form the shift register, the second MUX input for the first test cell is inverted, as shown in FIG. 5. Conversely, if there are an even number of test cells that form the shift register, the second MUX input for the first test cell is not inverted, as shown in FIG. 7.

Applicants have found that embodiments of the present invention can result in a significant reduction in testing time and costs. For the exemplary FPGA embodiment of FIG. 3, where each CLB tile includes four slices 400 and receives input signals on 4 direct lines 214, the test cells and structures of the present invention can test the direct lines and each of their corresponding 8 fan-outs using approximately 34 test patterns. In contrast, the conventional fault testing technique described in the background section of this disclosure requires approximately 200 test patterns to test the direct lines and their corresponding fan-outs for a similar FPGA architecture.

Embodiments of the present invention can also be used to test the clock enable input, the set input, and the reset input to the flip-flops within each CLE slice. FIG. 9 illustrates a test cell configuration 900 that can be used to test the clock enable input to each flip-flop in slices 400 of FPGA 100. Test cell 900 is implemented using a function generator 910 (e.g., function generator F) and corresponding flip-flop 920 (e.g., FF1) in a slice. Function generator 910 is configured to implement an inverter 912 and an XOR gate 914. XOR gate 914 has a first input to receive an input signal IN provided by a direct line 214, a second input to receive the output signal OUT from FF 920, and an output coupled to the clock enable input of FF 920. Inverter 912 is coupled between the output Q of FF 920 and the D-input of FF 920. The clock input to FF 920 receives the clock signal CLK. The set and reset inputs of FF 920 are grounded (e.g., de-asserted).

FIG. 10 illustrates a test cell configuration 1000 that can be used to test the set input to each flip-flop in slices 400 of FPGA 100. Test cell 1000 is implemented using a function generator 1010 (e.g., function generator F) and corresponding flip-flop 1020 (e.g., FF1) in a slice. Function generator 1010 is configured to implement an inverter 1012 and an XNOR gate 1014. XNOR gate 1014 has a first input to receive an input signal IN provided by a direct line 214, a second input to receive the output signal OUT from FF 1020, and an output coupled to the set input of FF 1020. Inverter 1012 is coupled between the output Q of FF 1020 and the D-input of FF 1020. The clock input to FF 1020 receives the clock signal CLK. The clock enable input to FF 1020 is tied to Vcc (e.g., asserted), and the reset input of FF 1020 is grounded (e.g., de-asserted).

FIG. 11 illustrates a test cell configuration 1100 that can be used to test the reset input to each flip-flop in slices 400 of FPGA 100.
Test cell 1100 is implemented using a function generator 1110 (e.g., function generator F) and corresponding flip-flop 1120 (e.g., FF1) in a slice. Function generator 1110 is configured to implement an inverter 1112 and an XNOR gate 1114. XNOR gate 1114 has a first input to receive an input signal IN provided by a direct line 214, a second input to receive the output signal OUT from FF 1120, and an output coupled to the reset input of FF 1120. Inverter 1112 is coupled between the output Q of FF 1120 and the D-input of FF 1120. The clock input to FF 1120 receives the clock signal CLK. The clock enable input to FF 1120 is tied to Vcc (e.g., asserted), and the set input of FF 1120 is grounded (e.g., de-asserted).

The test cells 900, 1000, and 1100 each behave like a register, and therefore can be cascaded with other test cells configured in accordance with the present invention in the manner described above. For example, FIG. 9A shows two test cells 900A and 900B of FIG. 9 cascaded together, FIG. 10A shows two test cells 1000A and 1000B of FIG. 10 cascaded together, and FIG. 11B shows two test cells 1100A and 1100B of FIG. 11 cascaded together.

While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that changes and modifications may be made without departing from this invention in its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as fall within the true spirit and scope of this invention.
Embodiments are generally directed to compression for sparse data structures utilizing mode search approximation. An embodiment of an apparatus includes one or more processors including a graphics processor to process data; and a memory for storage of data, including compressed data. The one or more processors are to provide for compression of a data structure, including identification of a mode in the data structure, the data structure including a plurality of values and the mode being a most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation operation, and encoding of an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.
1. An apparatus comprising: one or more processors including a graphics processor to process data; and a memory for storage of data, including compressed data; wherein the one or more processors are to provide for compression of a data structure, including: identification of a mode in the data structure, the data structure including a plurality of values and the mode being a most repeated value in a data structure, wherein identification of the mode includes application of a mode approximation operation, and encoding of an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

2. The apparatus of claim 1, wherein the mode approximation operation includes a hierarchy of comparison levels, each comparison level of the hierarchy of comparison levels including one or more comparisons of at least a portion of two or more values of the plurality of values.

3. The apparatus of claim 2, wherein each comparison level of the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values, wherein each ternary comparison of the one or more ternary comparisons includes: comparing the first bit slice to the second bit slice and returning the first bit slice if the first bit slice and the second bit slice match; comparing the second bit slice to the third bit slice and returning the second bit slice if the second bit slice and the third bit slice match; and returning one of the first, second, and third bit slices if the first bit slice and the second bit slice do not match and the second bit slice and the third bit slice do not match.

4. The apparatus of claim 3, wherein each bit slice is two bits in length.

5. The apparatus of claim 2, wherein the mode approximation operation includes comparison of less than all values of the plurality of values.

6. The apparatus of claim 5, wherein the data structure includes 128 8-bit values, and the hierarchy of comparison levels includes 4 levels to compare 81 of the 128 values.

7. The apparatus of claim 1, wherein the significance map includes a plurality of bits, each bit of the plurality of bits representing a respective value of the plurality of values, and wherein a bit that is set indicates that the respective value for the bit contains the mode.

8. The apparatus of claim 1, wherein the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in order of the values in the data structure.

9. The apparatus of claim 1, wherein the data structure is a data structure for machine learning.

10. One or more non-transitory computer-readable storage mediums having stored thereon executable computer program instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: performing a compression operation including: identifying a mode in a data structure, the data structure including a plurality of values and the mode being a most repeated value in a data structure, wherein identification of the mode includes application of a mode approximation algorithm, and encoding of an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

11. The one or more storage mediums of claim 10, wherein the mode approximation algorithm includes a hierarchy of comparison levels, each comparison level including one or more comparisons of at least a portion of two or more values of the plurality of values.

12. The one or more storage mediums of claim 11, wherein each comparison level of the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values, wherein each ternary comparison of the one or more ternary comparisons includes: comparing the first bit slice to the second bit slice and returning the first bit slice if the first bit slice and the second bit slice match; comparing the second bit slice to the third bit slice and returning the second bit slice if the second bit slice and the third bit slice match; and returning one of the first, second, and third bit slices if the first bit slice and the second bit slice do not match and the second bit slice and the third bit slice do not match.

13. The one or more storage mediums of claim 10, wherein the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in order of the values in the data structure.

14. The one or more mediums of claim 10, further comprising executable computer program instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: performing a second compression operation in parallel with the compression operation; and selecting an output vector from either the compression operation or the second compression operation.

15. The one or more mediums of claim 10, further comprising executable computer program instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: decompressing the encoded output vector, including: parsing the output vector to obtain the significance map, mode, and uncompressed data; and inserting either the mode or a next uncompressed data value at each of a plurality of locations based on the significance map.
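As an illustration of the encoding recited in claim 1 and the decompression recited in claim 15, the following Python sketch is offered for clarity only and is not part of the claimed subject matter; it computes the exact mode with Python's collections.Counter, whereas the claims recite a hardware-friendly mode approximation operation:

    from collections import Counter

    def compress(values):
        # Output vector: (mode, significance map, remaining uncompressed values).
        mode = Counter(values).most_common(1)[0][0]
        sig_map = [1 if v == mode else 0 for v in values]  # set bit = value is the mode
        remaining = [v for v in values if v != mode]       # kept in original order
        return mode, sig_map, remaining

    def decompress(mode, sig_map, remaining):
        # Insert the mode or the next uncompressed value at each location.
        it = iter(remaining)
        return [mode if bit else next(it) for bit in sig_map]

    data = [0, 7, 0, 0, 3, 0, 9, 0]
    assert decompress(*compress(data)) == data

The compressed vector is smaller than the input whenever the mode repeats often enough that the one-bit-per-value significance map costs less than the values it replaces.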
TECHNICAL FIELD

Embodiments described herein generally relate to the field of electronic devices and, more particularly, compression for sparse data structures utilizing mode search approximation.

BACKGROUND

In computer operations, the compression of data, in which a certain set of data is reduced in size, is commonly performed in order to minimize the amount of data to be transmitted or stored in memory. The resulting reduction can provide significant savings in required storage capacity, transmission time, and data handling.

Delta compression refers to a manner of storing or communicating data utilizing data encoding in the form of differences (or deltas) between sequential data, as opposed to complete files. The differences may be in the form of archival histories of changes in a set of data, wherein the discrete files expressing the differences are commonly referred to as deltas or similar terms. Delta compression includes data encoding of the delta values, which may have many zero values if a particular data set includes many values that do not change significantly between data sets.

However, conventional delta compression does not provide satisfactory compression in all circumstances. For example, certain sparse data structures, such as the sparse matrices used in machine learning, contain very large differences between neighboring numerical values, and there are generally very few smooth gradients such as those seen in image data. Such data structures can effectively defeat a conventional delta-compression-based scheme when applied in machine learning or other processing.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments described here are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.

FIG. 1 is a block diagram of a processing system, according to some embodiments;

FIG. 2 is a block diagram of an embodiment of a processor having one or more processor cores, an integrated memory controller, and an integrated graphics processor;

FIG. 3 is a block diagram of a graphics processor according to some embodiments;

FIG. 4 is a block diagram of a graphics processing engine of a graphics processor in accordance with some embodiments;

FIG. 5 is a block diagram of hardware logic of a graphics processor core, according to some embodiments;

FIGS. 6A-6B illustrate thread execution logic including an array of processing elements employed in a graphics processor core according to some embodiments;

FIG. 7 is a block diagram illustrating graphics processor instruction formats according to some embodiments;

FIG. 8 is a block diagram of another embodiment of a graphics processor;

FIG. 9A is a block diagram illustrating a graphics processor command format according to some embodiments;

FIG. 9B is a block diagram illustrating a graphics processor command sequence according to an embodiment;

FIG. 10 illustrates exemplary graphics software architecture for a data processing system according to some embodiments;

FIG. 11A is a block diagram illustrating an IP core development system that may be used to manufacture an integrated circuit to perform operations according to an embodiment;

FIG. 11B illustrates a cross-section side view of an integrated circuit package assembly according to some embodiments;

FIG. 12 is a block diagram illustrating an exemplary system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment;

FIG. 13A illustrates an exemplary graphics processor of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment;

FIG. 13B illustrates an additional exemplary graphics processor of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment;

FIG. 14A illustrates a graphics core that may be included within a graphics processor according to some embodiments;

FIG. 14B illustrates a highly-parallel general-purpose graphics processing unit suitable for deployment on a multi-chip module according to some embodiments;

FIG. 15 illustrates a machine learning software stack, according to an embodiment;

FIGS. 16A-16B illustrate layers of exemplary deep neural networks;

FIG. 17 illustrates an exemplary recurrent neural network;

FIG. 18 illustrates training and deployment of a deep neural network;

FIG. 19 is a block diagram illustrating distributed learning;

FIG. 20A is an illustration of compression for sparse data structures according to some embodiments;

FIG. 20B is a flowchart to illustrate a process for selection of a compressed output vector according to some embodiments;

FIG. 21 is a flowchart to illustrate a process for compression of sparse data structures according to some embodiments;

FIG. 22 is a flowchart to illustrate a process for decompression of sparse data structures according to some embodiments;

FIG. 23 is an illustration of an uncompressed data vector and a resulting compressed data vector according to some embodiments;

FIG. 24 is an illustration of a portion of a mode approximation algorithm according to some embodiments;

FIG. 25 is an illustration of a mode approximation algorithm according to some embodiments; and

FIG. 26 is an illustration of an apparatus or system to provide for compression for sparse data structures utilizing mode search approximation, according to some embodiments.

DETAILED DESCRIPTION

Embodiments described herein are generally directed to compression for sparse data structures utilizing mode search approximation.

In some embodiments, an apparatus, system, or process provides a data compression (which may be referred to herein as machine learning (ML) compression) in which a most repeated value (referred to as the mode) is extracted from data, and is encoded with a significance map that indicates locations of the mode in the data. In some embodiments, the mode extraction is performed utilizing an approximation algorithm, the approximation algorithm including ternary tree processing to rapidly generate a mode approximation with parallel machine processing.

In some embodiments, machine learning compression (a first compression algorithm) may be complementary to a second compression algorithm, such as delta compression, and the machine learning compression may operate together with (in parallel with) the second compression algorithm. A compressed vector may be taken from whichever of the compression algorithms is successful in the compression operation. In some embodiments, there may be a preference for use of the machine learning algorithm if both compression algorithms are successful.

System Overview

FIG. 1 is a block diagram of a processing system 100, according to an embodiment. In various embodiments the system 100 includes one or more processors 102 and one or more graphics processors 108, and may be a single processor desktop system, a multiprocessor workstation system, or a server system having a large number of processors 102 or processor cores 107.
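Before continuing with the system overview, the ternary-tree mode search approximation introduced above may be sketched as follows. This is an illustration only, simplified to compare whole 8-bit values rather than the 2-bit slices recited in the claims, and it assumes an input of at least 81 values; with 128 inputs, four ternary levels examine 81 (3 to the 4th power) of the values:

    def ternary_compare(a, b, c):
        # Return a if a matches b; else b if b matches c; else default to one
        # of the three (here, a), per the claimed ternary comparison.
        if a == b:
            return a
        if b == c:
            return b
        return a

    def approximate_mode(values):
        # Constant-depth reduction: 81 -> 27 -> 9 -> 3 -> 1 candidates; all
        # comparisons within a level are independent and thus parallelizable.
        level = list(values[:81])
        while len(level) > 1:
            level = [ternary_compare(*level[i:i + 3]) for i in range(0, len(level), 3)]
        return level[0]

    data = [0] * 100 + list(range(1, 29))   # a sparse vector dominated by zeros
    print(approximate_mode(data))           # -> 0, the (approximate) mode

Because the result is only an approximation, an encoder built this way would verify that the chosen value actually compresses the block or, as described above, fall back to a second compression algorithm such as delta compression running in parallel.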
In one embodiment, the system 100 is a processing platform incorporated within a system-on-a-chip (SoC) integrated circuit for use in mobile, handheld, or embedded devices.

In one embodiment the system 100 can include, or be incorporated within, a server-based gaming platform, a game console, including a game and media console, a mobile gaming console, a handheld game console, or an online game console. In some embodiments the system 100 is a mobile phone, smart phone, tablet computing device or mobile Internet device. The processing system 100 can also include, couple with, or be integrated within a wearable device, such as a smart watch wearable device, smart eyewear device, augmented reality device, or virtual reality device. In some embodiments, the processing system 100 is a television or set top box device having one or more processors 102 and a graphical interface generated by one or more graphics processors 108.

In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system and user software. In some embodiments, each of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, instruction set 109 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). Multiple processor cores 107 may each process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a Digital Signal Processor (DSP).

In some embodiments, the processor 102 includes cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 107 using known cache coherency techniques. A register file 106 is additionally included in processor 102 which may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.

In some embodiments, one or more processor(s) 102 are coupled with one or more interface bus(es) 110 to transmit communication signals such as address, data, or control signals between processor 102 and other components in the system 100. The interface bus 110, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI Express), memory busses, or other types of interface busses. In one embodiment the processor(s) 102 include an integrated memory controller 116 and a platform controller hub 130.
The memory controller 116 facilitates communication between a memory device and other components of the system 100, while the platform controller hub (PCH) 130 provides connections to I/O devices via a local I/O bus.

The memory device 120 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 120 can operate as system memory for the system 100, to store data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. Memory controller 116 also couples with an optional external graphics processor 112, which may communicate with the one or more graphics processors 108 in processors 102 to perform graphics and media operations. In some embodiments a display device 111 can connect to the processor(s) 102. The display device 111 can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment the display device 111 can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.

In some embodiments the platform controller hub 130 enables peripherals to connect to memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a network controller 134, a firmware interface 128, a wireless transceiver 126, touch sensors 125, and a data storage device 124 (e.g., hard disk drive, flash memory, etc.). The data storage device 124 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI Express). The touch sensors 125 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 126 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, or Long-Term Evolution (LTE) transceiver. The firmware interface 128 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller 134 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 110. The audio controller 146, in one embodiment, is a multi-channel high definition audio controller. In one embodiment the system 100 includes an optional legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 130 can also connect to one or more Universal Serial Bus (USB) controllers 142 to connect input devices, such as keyboard and mouse 143 combinations, a camera 144, or other USB input devices.

It will be appreciated that the system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller 116 and platform controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 112. In one embodiment the platform controller hub 130 and/or memory controller 116 may be external to the one or more processor(s) 102.
For example, the system 100 can include an external memory controller 116 and platform controller hub 130, which may be configured as a memory controller hub and peripheral controller hub within a system chipset that is in communication with the processor(s) 102.

FIG. 2 is a block diagram of an embodiment of a processor 200 having one or more processor cores 202A-202N, an integrated memory controller 214, and an integrated graphics processor 208. Those elements of FIG. 2 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. Processor 200 can include additional cores up to and including additional core 202N represented by the dashed line boxes. Each of processor cores 202A-202N includes one or more internal cache units 204A-204N. In some embodiments each processor core also has access to one or more shared cache units 206.

The internal cache units 204A-204N and shared cache units 206 represent a cache memory hierarchy within the processor 200. The cache memory hierarchy may include at least one level of instruction and data cache within each processor core and one or more levels of shared mid-level cache, such as a Level 2 (L2), Level 3 (L3), Level 4 (L4), or other levels of cache, where the highest level of cache before external memory is classified as the LLC. In some embodiments, cache coherency logic maintains coherency between the various cache units 206 and 204A-204N.

In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core 210 provides management functionality for the various processor components. In some embodiments, system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).

In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, the system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing. System agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 202A-202N and graphics processor 208.

In some embodiments, processor 200 additionally includes graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206, and the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 211 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208.

In some embodiments, a ring based interconnect unit 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, or other techniques, including techniques well known in the art.
In some embodiments, graphics processor 208 couples with the ring interconnect 212 via an I/O link 213.

The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module. In some embodiments, each of the processor cores 202A-202N and graphics processor 208 uses embedded memory modules 218 as a shared Last Level Cache.

In some embodiments, processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having a relatively higher power consumption couple with one or more power cores having a lower power consumption. Additionally, processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.

FIG. 3 is a block diagram of a graphics processor 300, which may be a discrete graphics processing unit, or may be a graphics processor integrated with a plurality of processing cores. In some embodiments, the graphics processor communicates via a memory mapped I/O interface to registers on the graphics processor and with commands placed into the processor memory. In some embodiments, graphics processor 300 includes a memory interface 314 to access memory. Memory interface 314 can be an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory.

In some embodiments, graphics processor 300 also includes a display controller 302 to drive display output data to a display device 320. Display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 320 can be an internal or external display device. In one embodiment the display device 320 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.

In some embodiments, graphics processor 300 includes a block image transfer (BLIT) engine 304 to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 310.
In some embodiments, GPE 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.

In some embodiments, GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media sub-system 315. While 3D pipeline 312 can be used to perform media operations, an embodiment of GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.

In some embodiments, media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of, video codec engine 306. In some embodiments, media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media sub-system 315. The spawned threads perform computations for the media operations on one or more graphics execution units included in 3D/Media sub-system 315.

In some embodiments, 3D/Media subsystem 315 includes logic for executing threads spawned by 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics execution units to process the 3D and media threads. In some embodiments, 3D/Media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.

Graphics Processing Engine

FIG. 4 is a block diagram of a graphics processing engine 410 of a graphics processor in accordance with some embodiments. In one embodiment, the graphics processing engine (GPE) 410 is a version of the GPE 310 shown in FIG. 3. Elements of FIG. 4 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. For example, the 3D pipeline 312 and media pipeline 316 of FIG. 3 are illustrated. The media pipeline 316 is optional in some embodiments of the GPE 410 and may not be explicitly included within the GPE 410. For example and in at least one embodiment, a separate media and/or image processor is coupled to the GPE 410.

In some embodiments, GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipelines 316. In some embodiments, command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316.
In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core array 414. In one embodiment the graphics core array 414 includes one or more blocks of graphics cores (e.g., graphics core(s) 415A, graphics core(s) 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic.

In various embodiments the 3D pipeline 312 includes fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader programs, by processing the instructions and dispatching execution threads to the graphics core array 414. The graphics core array 414 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic (e.g., execution units) within the graphics core(s) 415A-415B of the graphics core array 414 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.

In some embodiments the graphics core array 414 also includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the execution units additionally include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s) 107 of FIG. 1 or core 202A-202N as in FIG. 2.

Threads executing on the graphics core array 414 can output data to memory in a unified return buffer (URB) 418. The URB 418 can store data for multiple threads. In some embodiments the URB 418 may be used to send data between different threads executing on the graphics core array 414. In some embodiments the URB 418 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 420.

In some embodiments, graphics core array 414 is scalable, such that the array includes a variable number of graphics cores, each having a variable number of execution units based on the target power and performance level of GPE 410. In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.

The graphics core array 414 couples with shared function logic 420 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 420 are hardware logic units that provide specialized supplemental functionality to the graphics core array 414.
In various embodiments, shared function logic 420 includes but is not limited to sampler 421, math 422, and inter-thread communication (ITC) 423 logic. Additionally, some embodiments implement one or more cache(s) 425 within the shared function logic 420.

A shared function is implemented where the demand for a given specialized function is insufficient for inclusion within the graphics core array 414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 420 and shared among the execution resources within the graphics core array 414. The precise set of functions that are shared between the graphics core array 414 and included within the graphics core array 414 varies across embodiments. In some embodiments, specific shared functions within the shared function logic 420 that are used extensively by the graphics core array 414 may be included within shared function logic 416 within the graphics core array 414. In various embodiments, the shared function logic 416 within the graphics core array 414 can include some or all logic within the shared function logic 420. In one embodiment, all logic elements within the shared function logic 420 may be duplicated within the shared function logic 416 of the graphics core array 414. In one embodiment the shared function logic 420 is excluded in favor of the shared function logic 416 within the graphics core array 414.

FIG. 5 is a block diagram of hardware logic of a graphics processor core 500, according to some embodiments described herein. Elements of FIG. 5 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. The illustrated graphics processor core 500, in some embodiments, is included within the graphics core array 414 of FIG. 4. The graphics processor core 500, sometimes referred to as a core slice, can be one or multiple graphics cores within a modular graphics processor. The graphics processor core 500 is exemplary of one graphics core slice, and a graphics processor as described herein may include multiple graphics core slices based on target power and performance envelopes. Each graphics core 500 can include a fixed function block 530 coupled with multiple sub-cores 501A-501F, also referred to as sub-slices, that include modular blocks of general-purpose and fixed function logic.

In some embodiments the fixed function block 530 includes a geometry/fixed function pipeline 536 that can be shared by all sub-cores in the graphics processor 500, for example, in lower performance and/or lower power graphics processor implementations. In various embodiments, the geometry/fixed function pipeline 536 includes a 3D fixed function pipeline (e.g., 3D pipeline 312 as in FIG. 3 and FIG. 4), a video front-end unit, a thread spawner and thread dispatcher, and a unified return buffer manager, which manages unified return buffers, such as the unified return buffer 418 of FIG. 4.

In one embodiment the fixed function block 530 also includes a graphics SoC interface 537, a graphics microcontroller 538, and a media pipeline 539. The graphics SoC interface 537 provides an interface between the graphics core 500 and other processor cores within a system on a chip integrated circuit.
The graphics microcontroller 538 is a programmable sub-processor that is configurable to manage various functions of the graphics processor 500, including thread dispatch, scheduling, and pre-emption. The media pipeline 539 (e.g., media pipeline 316 of FIG. 3 and FIG. 4) includes logic to facilitate the decoding, encoding, pre-processing, and/or post-processing of multimedia data, including image and video data. The media pipeline 539 implements media operations via requests to compute or sampling logic within the sub-cores 501A-501F.

In one embodiment the SoC interface 537 enables the graphics core 500 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC, including memory hierarchy elements such as a shared last level cache memory, the system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 537 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics core 500 and CPUs within the SoC. The SoC interface 537 can also implement power management controls for the graphics core 500 and enable an interface between a clock domain of the graphics core 500 and other clock domains within the SoC. In one embodiment the SoC interface 537 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 539, when media operations are to be performed, or a geometry and fixed function pipeline (e.g., geometry and fixed function pipeline 536, geometry and fixed function pipeline 514) when graphics processing operations are to be performed.

The graphics microcontroller 538 can be configured to perform various scheduling and management tasks for the graphics core 500. In one embodiment the graphics microcontroller 538 can perform graphics and/or compute workload scheduling on the various graphics parallel engines within execution unit (EU) arrays 502A-502F, 504A-504F within the sub-cores 501A-501F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics core 500 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment the graphics microcontroller 538 can also facilitate low-power or idle states for the graphics core 500, providing the graphics core 500 with the ability to save and restore registers within the graphics core 500 across low-power state transitions independently from the operating system and/or graphics driver software on the system.

The graphics core 500 may have more or fewer than the illustrated sub-cores 501A-501F, up to N modular sub-cores. For each set of N sub-cores, the graphics core 500 can also include shared function logic 510, shared and/or cache memory 512, a geometry/fixed function pipeline 514, as well as additional fixed function logic 516 to accelerate various graphics and compute processing operations.
The shared function logic 510 can include logic units associated with the shared function logic 420 of FIG. 4 (e.g., sampler, math, and/or inter-thread communication logic) that can be shared by each N sub-cores within the graphics core 500. The shared and/or cache memory 512 can be a last-level cache for the set of N sub-cores 501A-501F within the graphics core 500, and can also serve as shared memory that is accessible by multiple sub-cores. The geometry/fixed function pipeline 514 can be included instead of the geometry/fixed function pipeline 536 within the fixed function block 530 and can include the same or similar logic units.

In one embodiment the graphics core 500 includes additional fixed function logic 516 that can include various fixed function acceleration logic for use by the graphics core 500. In one embodiment the additional fixed function logic 516 includes an additional geometry pipeline for use in position only shading. In position-only shading, two geometry pipelines exist: the full geometry pipeline within the geometry/fixed function pipeline 514, 536, and a cull pipeline, which is an additional geometry pipeline which may be included within the additional fixed function logic 516. In one embodiment the cull pipeline is a trimmed down version of the full geometry pipeline. The full pipeline and the cull pipeline can execute different instances of the same application, each instance having a separate context. Position only shading can hide long cull runs of discarded triangles, enabling shading to be completed earlier in some instances. For example and in one embodiment the cull pipeline logic within the additional fixed function logic 516 can execute position shaders in parallel with the main application and generally generates critical results faster than the full pipeline, as the cull pipeline fetches and shades only the position attribute of the vertices, without performing rasterization and rendering of the pixels to the frame buffer. The cull pipeline can use the generated critical results to compute visibility information for all the triangles without regard to whether those triangles are culled. The full pipeline (which in this instance may be referred to as a replay pipeline) can consume the visibility information to skip the culled triangles to shade only the visible triangles that are finally passed to the rasterization phase.

In one embodiment the additional fixed function logic 516 can also include machine-learning acceleration logic, such as fixed function matrix multiplication logic, for implementations including optimizations for machine learning training or inferencing.

Each graphics sub-core 501A-501F includes a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipeline, media pipeline, or shader programs. The graphics sub-cores 501A-501F include multiple EU arrays 502A-502F, 504A-504F, thread dispatch and inter-thread communication (TD/IC) logic 503A-503F, a 3D (e.g., texture) sampler 505A-505F, a media sampler 506A-506F, a shader processor 507A-507F, and shared local memory (SLM) 508A-508F. The EU arrays 502A-502F, 504A-504F each include multiple execution units, which are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute shader programs.
The TD/IC logic 503A-503F performs local thread dispatch and thread control operations for the execution units within a sub-core and facilitates communication between threads executing on the execution units of the sub-core. The 3D sampler 505A-505F can read texture or other 3D graphics related data into memory. The 3D sampler can read texture data differently based on a configured sample state and the texture format associated with a given texture. The media sampler 506A-506F can perform similar read operations based on the type and format associated with media data. In one embodiment, each graphics sub-core 501A-501F can alternately include a unified 3D and media sampler. Threads executing on the execution units within each of the sub-cores 501A-501F can make use of shared local memory 508A-508F within each sub-core, to enable threads executing within a thread group to execute using a common pool of on-chip memory.

Execution Units

FIGS. 6A-6B illustrate thread execution logic 600 including an array of processing elements employed in a graphics processor core according to embodiments described herein. Elements of FIGS. 6A-6B having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such. FIG. 6A illustrates an overview of thread execution logic 600, which can include a variant of the hardware logic illustrated with each sub-core 501A-501F of FIG. 5. FIG. 6B illustrates exemplary internal details of an execution unit.

As illustrated in FIG. 6A, in some embodiments thread execution logic 600 includes a shader processor 602, a thread dispatcher 604, instruction cache 606, a scalable execution unit array including a plurality of execution units 608A-608N, a sampler 610, a data cache 612, and a data port 614. In one embodiment the scalable execution unit array can dynamically scale by enabling or disabling one or more execution units (e.g., any of execution unit 608A, 608B, 608C, 608D, through 608N-1 and 608N) based on the computational requirements of a workload. In one embodiment the included components are interconnected via an interconnect fabric that links to each of the components. In some embodiments, thread execution logic 600 includes one or more connections to memory, such as system memory or cache memory, through one or more of instruction cache 606, data port 614, sampler 610, and execution units 608A-608N. In some embodiments, each execution unit (e.g., 608A) is a stand-alone programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In various embodiments, the array of execution units 608A-608N is scalable to include any number of individual execution units.

In some embodiments, the execution units 608A-608N are primarily used to execute shader programs. A shader processor 602 can process the various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 604. In one embodiment the thread dispatcher includes logic to arbitrate thread initiation requests from the graphics and media pipelines and instantiate the requested threads on one or more execution units in the execution units 608A-608N. For example, a geometry pipeline can dispatch vertex, tessellation, or geometry shaders to the thread execution logic for processing.
In some embodiments, thread dispatcher 604 can also process runtime thread spawning requests from the executing shader programs.

In some embodiments, the execution units 608A-608N support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct 3D and OpenGL) are executed with a minimal translation. The execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general-purpose processing (e.g., compute and media shaders). Each of the execution units 608A-608N is capable of multi-issue single instruction multiple data (SIMD) execution, and multi-threaded operation enables an efficient execution environment in the face of higher latency memory accesses. Each hardware thread within each execution unit has a dedicated high-bandwidth register file and associated independent thread-state. Execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. While waiting for data from memory or one of the shared functions, dependency logic within the execution units 608A-608N causes a waiting thread to sleep until the requested data has been returned. While the waiting thread is sleeping, hardware resources may be devoted to processing other threads. For example, during a delay associated with a vertex shader operation, an execution unit can perform operations for a pixel shader, fragment shader, or another type of shader program, including a different vertex shader.

Each execution unit in execution units 608A-608N operates on arrays of data elements. The number of data elements is the "execution size," or the number of channels for the instruction. An execution channel is a logical unit of execution for data element access, masking, and flow control within instructions. The number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating-Point Units (FPUs) for a particular graphics processor. In some embodiments, execution units 608A-608N support integer and floating-point data types.

The execution unit instruction set includes SIMD instructions. The various data elements can be stored as a packed data type in a register and the execution unit will process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the execution unit operates on the vector as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible.

In one embodiment one or more execution units can be combined into a fused execution unit 609A-609N having thread control logic (607A-607N) that is common to the fused EUs. Multiple EUs can be fused into an EU group. Each EU in the fused EU group can be configured to execute a separate SIMD hardware thread. The number of EUs in a fused EU group can vary according to embodiments.
Additionally, various SIMD widths can be performed per-EU, including but not limited to SIMD8, SIMD16, and SIMD32. Each fused graphics execution unit 609A-609N includes at least two execution units. For example, fused execution unit 609A includes a first EU 608A, a second EU 608B, and thread control logic 607A that is common to the first EU 608A and the second EU 608B. The thread control logic 607A controls threads executed on the fused graphics execution unit 609A, allowing each EU within the fused execution units 609A-609N to execute using a common instruction pointer register.

One or more internal instruction caches (e.g., 606) are included in the thread execution logic 600 to cache thread instructions for the execution units. In some embodiments, one or more data caches (e.g., 612) are included to cache thread data during thread execution. In some embodiments, a sampler 610 is included to provide texture sampling for 3D operations and media sampling for media operations. In some embodiments, sampler 610 includes specialized texture or media sampling functionality to process texture or media data during the sampling process before providing the sampled data to an execution unit.

During execution, the graphics and media pipelines send thread initiation requests to thread execution logic 600 via thread spawning and dispatch logic. Once a group of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within the shader processor 602 is invoked to further compute output information and cause results to be written to output surfaces (e.g., color buffers, depth buffers, stencil buffers, etc.). In some embodiments, a pixel shader or fragment shader calculates the values of the various vertex attributes that are to be interpolated across the rasterized object. In some embodiments, pixel processor logic within the shader processor 602 then executes an application programming interface (API)-supplied pixel or fragment shader program. To execute the shader program, the shader processor 602 dispatches threads to an execution unit (e.g., 608A) via thread dispatcher 604. In some embodiments, shader processor 602 uses texture sampling logic in the sampler 610 to access texture data in texture maps stored in memory. Arithmetic operations on the texture data and the input geometry data compute pixel color data for each geometric fragment, or discard one or more pixels from further processing.

In some embodiments, the data port 614 provides a memory access mechanism for the thread execution logic 600 to output processed data to memory for further processing on a graphics processor output pipeline. In some embodiments, the data port 614 includes or couples to one or more cache memories (e.g., data cache 612) to cache data for memory access via the data port.

As illustrated in FIG. 6B, a graphics execution unit 608 can include an instruction fetch unit 637, a general register file array (GRF) 624, an architectural register file array (ARF) 626, a thread arbiter 622, a send unit 630, a branch unit 632, a set of SIMD floating point units (FPUs) 634, and in one embodiment a set of dedicated integer SIMD ALUs 635. The GRF 624 and ARF 626 include the set of general register files and architectural register files associated with each simultaneous hardware thread that may be active in the graphics execution unit 608.
In one embodiment, per-thread architectural state is maintained in the ARF 626, while data used during thread execution is stored in the GRF 624. The execution state of each thread, including the instruction pointers for each thread, can be held in thread-specific registers in the ARF 626.

In one embodiment the graphics execution unit 608 has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per execution unit, where execution unit resources are divided across logic used to execute multiple simultaneous threads.

In one embodiment, the graphics execution unit 608 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 622 of the graphics execution unit 608 can dispatch the instructions to one of the send unit 630, branch unit 632, or SIMD FPU(s) 634 for execution. Each execution thread can access 128 general-purpose registers within the GRF 624, where each register can store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In one embodiment, each execution unit thread has access to 4 Kbytes within the GRF 624, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment up to seven threads can execute simultaneously, although the number of threads per execution unit can also vary according to embodiments. In an embodiment in which seven threads may access 4 Kbytes, the GRF 624 can store a total of 28 Kbytes. (These figures are restated in the short sketch at the end of this passage.) Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.

In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions that are executed by the message-passing send unit 630. In one embodiment, branch instructions are dispatched to a dedicated branch unit 632 to facilitate SIMD divergence and eventual convergence.

In one embodiment the graphics execution unit 608 includes one or more SIMD floating point units (FPU(s)) 634 to perform floating-point operations. In one embodiment, the FPU(s) 634 also support integer computation. In one embodiment the FPU(s) 634 can SIMD execute up to M number of 32-bit floating-point (or integer) operations, or SIMD execute up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In some embodiments, a set of 8-bit integer SIMD ALUs 635 are also present, and may be specifically optimized to perform operations associated with machine learning computations.

In one embodiment, arrays of multiple instances of the graphics execution unit 608 can be instantiated in a graphics sub-core grouping (e.g., a sub-slice). For scalability, product architects can choose the exact number of execution units per sub-core grouping. In one embodiment the execution unit 608 can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the graphics execution unit 608 is executed on a different channel.
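The register-file capacity figures quoted above for this one embodiment can be checked with a few lines of arithmetic; this sketch simply restates the numbers given in the text.

REG_BYTES = 32           # each GRF register stores 32 bytes
REGS_PER_THREAD = 128    # general-purpose registers per thread
THREADS = 7              # simultaneous hardware threads (one embodiment)

per_thread = REG_BYTES * REGS_PER_THREAD  # 4096 bytes = 4 Kbytes per thread
total = THREADS * per_thread              # 28672 bytes = 28 Kbytes in the GRF
assert per_thread == 4 * 1024
assert total == 28 * 1024
assert 8 * 4 == REG_BYTES  # a SIMD 8-element vector of 32-bit elements fills one register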
FIG. 7 is a block diagram illustrating graphics processor instruction formats 700 according to some embodiments. In one or more embodiments, the graphics processor execution units support an instruction set having instructions in multiple formats. The solid lined boxes illustrate the components that are generally included in an execution unit instruction, while the dashed lines include components that are optional or that are only included in a sub-set of the instructions. In some embodiments, the instruction formats 700 described and illustrated are macro-instructions, in that they are instructions supplied to the execution unit, as opposed to micro-operations resulting from instruction decode once the instruction is processed.

In some embodiments, the graphics processor execution units natively support instructions in a 128-bit instruction format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The execution unit hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 710.

For each format, instruction opcode 712 defines the operation that the execution unit is to perform. The execution units execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the execution unit performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the execution unit performs each instruction across all data channels of the operands. In some embodiments, instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 710, an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 716 is not available for use in the 64-bit compact instruction format 730.

Some execution unit instructions have up to three operands, including two source operands, src0 720 and src1 722, and one destination 718. In some embodiments, the execution units support dual-destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction's last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.

In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction.
In one embodiment the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands, and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.

In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used, bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.

In some embodiments, instructions are grouped based on opcode 712 bit-fields to simplify opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the execution unit to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 742 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math group 748 performs the arithmetic operations in parallel across data channels. The vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands.
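The grouping above can be expressed as a small decode table. The helper below is an invented illustration, not the hardware decoder; it simply keys off bits 4, 5, and 6 of the 8-bit opcode, following the bit patterns listed in the preceding paragraph.

GROUPS = {
    0b000: "move (742)",          # mov, 0000xxxxb
    0b001: "logic (742)",         # cmp and other logic, 0001xxxxb
    0b010: "flow control (744)",  # call/jmp, 0010xxxxb (e.g., 0x20)
    0b011: "miscellaneous (746)", # wait/send, 0011xxxxb (e.g., 0x30)
    0b100: "parallel math (748)", # add/mul, 0100xxxxb (e.g., 0x40)
    0b101: "vector math (750)",   # dp4, 0101xxxxb (e.g., 0x50)
}

def opcode_group(opcode):
    """Classify an 8-bit opcode by bits 4, 5, and 6."""
    return GROUPS.get((opcode >> 4) & 0b111, "reserved")

assert opcode_group(0x20) == "flow control (744)"
assert opcode_group(0x50) == "vector math (750)"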
Graphics Pipeline

FIG. 8 is a block diagram of another embodiment of a graphics processor 800. Elements of FIG. 8 having the same reference numbers (or names) as the elements of any other figure herein can operate or function in any manner similar to that described elsewhere herein, but are not limited to such.

In some embodiments, graphics processor 800 includes a geometry pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 800 via a ring interconnect 802. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of the geometry pipeline 820 or the media pipeline 830.

In some embodiments, command streamer 803 directs the operation of a vertex fetcher 805 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate space transformation and lighting operations on each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex-processing instructions by dispatching execution threads to execution units 852A-852B via a thread dispatcher 831.

In some embodiments, execution units 852A-852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, execution units 852A-852B have an attached L1 cache 851 that is specific to each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.

In some embodiments, geometry pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of the tessellation output. A tessellator 813 operates at the direction of hull shader 811 and contains special-purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline 820. In some embodiments, if tessellation is not used, the tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) can be bypassed.

In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to execution units 852A-852B, or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than on vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.

Before rasterization, a clipper 829 processes vertex data. The clipper 829 may be a fixed-function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into per-pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, an application can bypass the rasterizer and depth test component 873 and access un-rasterized vertex data via a stream out unit 823.
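The stage ordering and the tessellation bypass described above can be summarized with a short sketch. Every function here is an invented stub standing in for the corresponding hardware stage; only the chaining and the bypass condition are the point.

def vertex_fetch(vertices):    return list(vertices)  # vertex fetcher 805 (stub)
def vertex_shader(data):       return data            # vertex shader 807 (stub)
def hull_shader(data):         return data            # hull shader 811 (stub)
def tessellator(data):         return data            # tessellator 813 (stub)
def domain_shader(data):       return data            # domain shader 817 (stub)
def geometry_shader(data):     return data            # geometry shader 819 (stub)
def clipper(data):             return data            # clipper 829 (stub)

def geometry_pipeline(vertices, tessellation_enabled):
    data = vertex_shader(vertex_fetch(vertices))
    if tessellation_enabled:
        # Hull shader configures tessellation; domain shader evaluates its output.
        data = domain_shader(tessellator(hull_shader(data)))
    # With tessellation disabled, the geometry shader sees vertex shader output.
    return clipper(geometry_shader(data))

print(geometry_pipeline([(0, 0, 0), (1, 0, 0), (0, 1, 0)], tessellation_enabled=False))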
The graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, execution units 852A-852B and associated logic units (e.g., L1 cache 851, sampler 854, texture cache 858, etc.) interconnect via a data port 856 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 854, caches 851, 858, and execution units 852A-852B each have separate memory access paths. In one embodiment the texture cache 858 can also be configured as a sampler cache.

In some embodiments, render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed-function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments. A pixel operations component 877 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing the sharing of data without the use of main system memory.

In some embodiments, graphics processor media pipeline 830 includes a media engine 837 and a video front-end 834. In some embodiments, video front-end 834 receives pipeline commands from the command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, video front-end 834 processes media commands before sending the command to the media engine 837. In some embodiments, media engine 837 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 850 via thread dispatcher 831.

In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, display engine 840 is external to processor 800 and couples with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 contains special-purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 couples with a display device (not shown), which may be a system-integrated display device, as in a laptop computer, or an external display device attached via a display device connector.

In some embodiments, the geometry pipeline 820 and media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute APIs, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV).
A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.

Graphics Pipeline Programming

FIG. 9A is a block diagram illustrating a graphics processor command format 900 according to some embodiments. FIG. 9B is a block diagram illustrating a graphics processor command sequence 910 according to an embodiment. The solid lined boxes in FIG. 9A illustrate the components that are generally included in a graphics command, while the dashed lines include components that are optional or that are only included in a sub-set of the graphics commands. The exemplary graphics processor command format 900 of FIG. 9A includes data fields to identify a client 902, a command operation code (opcode) 904, and data 906 for the command. A sub-opcode 905 and a command size 908 are also included in some commands.

In some embodiments, client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, the sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in data field 906. For some commands, an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments, commands are aligned via multiples of a double word.

The flow diagram in FIG. 9B illustrates an exemplary graphics processor command sequence 910. In some embodiments, software or firmware of a data processing system that features an embodiment of a graphics processor uses a version of the command sequence shown to set up, execute, and terminate a set of graphics operations. A sample command sequence is shown and described for purposes of example only, as embodiments are not limited to these specific commands or to this command sequence. Moreover, the commands may be issued as a batch of commands in a command sequence, such that the graphics processor will process the sequence of commands at least partially in concurrence.

In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked 'dirty' can be flushed to memory.
In some embodiments, pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low power state.

In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands, unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.

In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, pipeline control command 914 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.

In some embodiments, return buffer state commands 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross-thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.

The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with the 3D pipeline state 930, or the media pipeline 924 beginning at the media pipeline state 940.

The commands to configure the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.

In some embodiments, the 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, the 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 922 dispatches shader execution threads to graphics processor execution units.

In some embodiments, 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments, execution is triggered via a 'go' or 'kick' command in the command sequence.
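To make the ordering concrete, the sketch below assembles a hypothetical 3D command sequence following FIG. 9B, with each command carrying the FIG. 9A fields (client, opcode, sub-opcode, data, command size). Every opcode value, payload, and name here is invented for illustration; actual command encodings are not specified in this description.

def command(client, opcode, data=(), sub_opcode=0):
    # Header (client and opcode fields) plus payload, sized in double words.
    return {"client": client, "opcode": opcode, "sub_opcode": sub_opcode,
            "data": list(data), "size": 2 + len(data)}

sequence_3d = [
    command("render", 0x01),             # pipeline flush (912)
    command("render", 0x02, [0]),        # pipeline select (913): 3D pipeline
    command("render", 0x03, [1]),        # pipeline control (914)
    command("render", 0x04, [2, 4096]),  # return buffer state (916)
    command("render", 0x05, [7, 3, 1]),  # 3D pipeline state (930)
    command("render", 0x06, [3, 36]),    # 3D primitive (932)
    command("render", 0x07),             # execute (934)
]
assert all(cmd["size"] >= 2 for cmd in sequence_3d)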
In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back end operations may also be included for those operations.

In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.

In some embodiments, media pipeline 924 is configured in a similar manner to the 3D pipeline 922. A set of commands to configure the media pipeline state 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, commands for the media pipeline state 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as the encode or decode format. In some embodiments, commands for the media pipeline state 940 also support the use of one or more pointers to "indirect" state elements that contain a batch of state settings.

In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., a register write). Output from media pipeline 924 may then be post-processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner to media operations.

Graphics Software Architecture

FIG. 10 illustrates an exemplary graphics software architecture for a data processing system 1000 according to some embodiments. In some embodiments, the software architecture includes a 3D graphics application 1010, an operating system 1020, and at least one processor 1030. In some embodiments, processor 1030 includes a graphics processor 1032 and one or more general-purpose processor core(s) 1034. The graphics application 1010 and operating system 1020 each execute in the system memory 1050 of the data processing system.

In some embodiments, 3D graphics application 1010 contains one or more shader programs including shader instructions 1012.
The shader language instructions may be in a high-level shader language, such as the High Level Shader Language (HLSL) or the OpenGL Shader Language (GLSL). The application also includes executable instructions 1014 in a machine language suitable for execution by the general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.

In some embodiments, operating system 1020 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 1020 can support a graphics API 1022 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation, or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010. In some embodiments, the shader instructions 1012 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.

In some embodiments, user mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware-specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to a user mode graphics driver 1026 for compilation. In some embodiments, user mode graphics driver 1026 uses operating system kernel mode functions 1028 to communicate with a kernel mode graphics driver 1029. In some embodiments, kernel mode graphics driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.
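A rough model of the two compilation paths just described follows. The function names are invented stand-ins for the front-end compiler 1024 and the back-end compiler 1027; this sketches the flow only and is not any actual driver interface.

def frontend_compile(hlsl_source):
    return ("intermediate", hlsl_source)  # HLSL lowered by the OS compiler 1024 (stub)

def backend_compile(shader):
    return ("hardware_isa", shader)       # hardware-specific code via compiler 1027 (stub)

def compile_shader(source, api):
    if api == "Direct3D":
        # HLSL is first lowered by the front-end compiler, possibly just-in-time,
        # then finished by the user mode driver's back-end compiler.
        return backend_compile(frontend_compile(source))
    if api == "OpenGL":
        # GLSL is passed directly to the user mode graphics driver for compilation.
        return backend_compile(source)
    raise ValueError("sketch covers only the two paths described")

print(compile_shader("void main() {}", "OpenGL"))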
IP Core Implementations

One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as "IP cores," are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.

FIG. 11A is a block diagram illustrating an IP core development system 1100 that may be used to manufacture an integrated circuit to perform operations according to an embodiment. The IP core development system 1100 may be used to generate modular, re-usable designs that can be incorporated into a larger design or used to construct an entire integrated circuit (e.g., an SOC integrated circuit). A design facility 1130 can generate a software simulation 1110 of an IP core design in a high-level programming language (e.g., C/C++). The software simulation 1110 can be used to design, test, and verify the behavior of the IP core using a simulation model 1112. The simulation model 1112 may include functional, behavioral, and/or timing simulations. A register transfer level (RTL) design 1115 can then be created or synthesized from the simulation model 1112. The RTL design 1115 is an abstraction of the behavior of the integrated circuit that models the flow of digital signals between hardware registers, including the associated logic performed using the modeled digital signals. In addition to an RTL design 1115, lower-level designs at the logic level or transistor level may also be created, designed, or synthesized. Thus, the particular details of the initial design and simulation may vary.

The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL) or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a third-party fabrication facility 1165 using non-volatile memory 1140 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 1150 or wireless connection 1160. The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.

FIG. 11B illustrates a cross-section side view of an integrated circuit package assembly 1170, according to some embodiments described herein. The integrated circuit package assembly 1170 illustrates an implementation of one or more processor or accelerator devices as described herein. The package assembly 1170 includes multiple units of hardware logic 1172, 1174 connected to a substrate 1180. The logic 1172, 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware, and can include one or more portions of any of the processor core(s), graphics processor(s), or other accelerator devices described herein. Each unit of logic 1172, 1174 can be implemented within a semiconductor die and coupled with the substrate 1180 via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the logic 1172, 1174 and the substrate 1180, and can include interconnects such as, but not limited to, bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic 1172, 1174. In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. The package assembly 1170 may include other suitable types of substrates in other embodiments. The package assembly 1170 can be connected to other electrical devices via a package interconnect 1183.
The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, another chipset, or a multi-chip module.

In some embodiments, the units of logic 1172, 1174 are electrically coupled with a bridge 1182 that is configured to route electrical signals between the logic 1172, 1174. The bridge 1182 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1182 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 1172, 1174.

Although two units of logic 1172, 1174 and a bridge 1182 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 1182 may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.

Exemplary System on a Chip Integrated Circuit

FIGS. 12-14 illustrate exemplary integrated circuits and associated graphics processors that may be fabricated using one or more IP cores, according to various embodiments described herein. In addition to what is illustrated, other logic and circuits may be included, including additional graphics processors/cores, peripheral interface controllers, or general-purpose processor cores.

FIG. 12 is a block diagram illustrating an exemplary system on a chip integrated circuit 1200 that may be fabricated using one or more IP cores, according to an embodiment. Exemplary integrated circuit 1200 includes one or more application processor(s) 1205 (e.g., CPUs), at least one graphics processor 1210, and may additionally include an image processor 1215 and/or a video processor 1220, any of which may be a modular IP core from the same or multiple different design facilities. Integrated circuit 1200 includes peripheral or bus logic including a USB controller 1225, a UART controller 1230, an SPI/SDIO controller 1235, and an I2S/I2C controller 1240. Additionally, the integrated circuit can include a display device 1245 coupled to one or more of a high-definition multimedia interface (HDMI) controller 1250 and a mobile industry processor interface (MIPI) display interface 1255. Storage may be provided by a flash memory subsystem 1260 including flash memory and a flash memory controller. A memory interface may be provided via a memory controller 1265 for access to SDRAM or SRAM memory devices. Some integrated circuits additionally include an embedded security engine 1270.

FIGS. 13A-13B are block diagrams illustrating exemplary graphics processors for use within an SoC, according to embodiments described herein. FIG. 13A illustrates an exemplary graphics processor 1310 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. FIG. 13B illustrates an additional exemplary graphics processor 1340 of a system on a chip integrated circuit that may be fabricated using one or more IP cores, according to an embodiment. Graphics processor 1310 of FIG. 13A is an example of a low power graphics processor core.
Graphics processor 1340 of FIG. 13B is an example of a higher performance graphics processor core. Each of the graphics processors 1310, 1340 can be variants of the graphics processor 1210 of FIG. 12.

As shown in FIG. 13A, graphics processor 1310 includes a vertex processor 1305 and one or more fragment processor(s) 1315A-1315N (e.g., 1315A, 1315B, 1315C, 1315D, through 1315N-1, and 1315N). Graphics processor 1310 can execute different shader programs via separate logic, such that the vertex processor 1305 is optimized to execute operations for vertex shader programs, while the one or more fragment processor(s) 1315A-1315N execute fragment (e.g., pixel) shading operations for fragment or pixel shader programs. The vertex processor 1305 performs the vertex processing stage of the 3D graphics pipeline and generates primitives and vertex data. The fragment processor(s) 1315A-1315N use the primitive and vertex data generated by the vertex processor 1305 to produce a framebuffer that is displayed on a display device. In one embodiment, the fragment processor(s) 1315A-1315N are optimized to execute fragment shader programs as provided for in the OpenGL API, which may be used to perform similar operations as a pixel shader program as provided for in the Direct 3D API.

Graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B. The one or more MMU(s) 1320A-1320B provide for virtual to physical address mapping for the graphics processor 1310, including for the vertex processor 1305 and/or fragment processor(s) 1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 1325A-1325B. In one embodiment the one or more MMU(s) 1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 1205, image processor 1215, and/or video processor 1220 of FIG. 12, such that each processor 1205-1220 can participate in a shared or unified virtual memory system. The one or more circuit interconnect(s) 1330A-1330B enable graphics processor 1310 to interface with other IP cores within the SoC, either via an internal bus of the SoC or via a direct connection, according to embodiments.

As shown in FIG. 13B, graphics processor 1340 includes the one or more MMU(s) 1320A-1320B, caches 1325A-1325B, and circuit interconnects 1330A-1330B of the graphics processor 1310 of FIG. 13A. Graphics processor 1340 includes one or more shader core(s) 1355A-1355N (e.g., 1355A, 1355B, 1355C, 1355D, 1355E, 1355F, through 1355N-1, and 1355N), which provides for a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code to implement vertex shaders, fragment shaders, and/or compute shaders. The exact number of shader cores present can vary among embodiments and implementations. Additionally, graphics processor 1340 includes an inter-core task manager 1345, which acts as a thread dispatcher to dispatch execution threads to one or more shader cores 1355A-1355N, and a tiling unit 1358 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within a scene or to optimize use of internal caches.
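The image-space subdivision behind tile-based rendering can be illustrated with a simple binning pass. This sketch is illustrative only; the tile size and the bounding-box binning strategy are assumptions for the example, not details taken from this description. Each triangle is assigned to the screen tiles its bounding box overlaps, so that later per-tile rendering touches only nearby geometry.

TILE = 32  # tile edge in pixels (illustrative)

def bin_triangles(triangles, width, height):
    """Map (tile_x, tile_y) -> triangles whose bounding boxes touch that tile."""
    bins = {}
    for tri in triangles:  # tri is a sequence of three (x, y) vertices
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        x0 = max(int(min(xs)) // TILE, 0)
        x1 = min(int(max(xs)) // TILE, (width - 1) // TILE)
        y0 = max(int(min(ys)) // TILE, 0)
        y1 = min(int(max(ys)) // TILE, (height - 1) // TILE)
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins.setdefault((tx, ty), []).append(tri)
    return bins

print(bin_triangles([[(5, 5), (40, 8), (12, 50)]], width=128, height=128))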
FIGS. 14A-14B illustrate additional exemplary graphics processor logic according to embodiments described herein. FIG. 14A illustrates a graphics core 1400 that may be included within the graphics processor 1210 of FIG. 12, and may be a unified shader core 1355A-1355N as in FIG. 13B. FIG. 14B illustrates a highly parallel general-purpose graphics processing unit 1430 suitable for deployment on a multi-chip module.

As shown in FIG. 14A, the graphics core 1400 includes a shared instruction cache 1402, a texture unit 1418, and a cache/shared memory 1420 that are common to the execution resources within the graphics core 1400. The graphics core 1400 can include multiple slices 1401A-1401N or partitions for each core, and a graphics processor can include multiple instances of the graphics core 1400. The slices 1401A-1401N can include support logic including a local instruction cache 1404A-1404N, a thread scheduler 1406A-1406N, a thread dispatcher 1408A-1408N, and a set of registers 1410A-1410N. To perform logic operations, the slices 1401A-1401N can include a set of additional function units (AFUs 1412A-1412N), floating-point units (FPUs 1414A-1414N), integer arithmetic logic units (ALUs 1416A-1416N), address computational units (ACUs 1413A-1413N), double-precision floating-point units (DPFPUs 1415A-1415N), and matrix processing units (MPUs 1417A-1417N).

Some of the computational units operate at a specific precision. For example, the FPUs 1414A-1414N can perform single-precision (32-bit) and half-precision (16-bit) floating point operations, while the DPFPUs 1415A-1415N perform double-precision (64-bit) floating point operations. The ALUs 1416A-1416N can perform variable-precision integer operations at 8-bit, 16-bit, and 32-bit precision, and can be configured for mixed-precision operations. The MPUs 1417A-1417N can also be configured for mixed-precision matrix operations, including half-precision floating point and 8-bit integer operations. The MPUs 1417A-1417N can perform a variety of matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated general matrix to matrix multiplication (GEMM). The AFUs 1412A-1412N can perform additional logic operations not supported by the floating-point or integer units, including trigonometric operations (e.g., sine, cosine, etc.).
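The precision tiers above can be mimicked in numpy. This sketch demonstrates only the data formats involved (half, single, and double precision, plus an 8-bit integer matrix multiply accumulated at a wider precision, in the spirit of mixed-precision GEMM); it does not model the hardware units themselves, and all sizes are illustrative.

import numpy as np

half = np.float16(0.1) + np.float16(0.2)    # FPU half precision (16-bit)
single = np.float32(0.1) + np.float32(0.2)  # FPU single precision (32-bit)
double = np.float64(0.1) + np.float64(0.2)  # DPFPU double precision (64-bit)

# Mixed-precision GEMM sketch: 8-bit integer operands, 32-bit accumulation.
rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(4, 8), dtype=np.int8)
b = rng.integers(-128, 128, size=(8, 4), dtype=np.int8)
c = a.astype(np.int32) @ b.astype(np.int32)
assert c.dtype == np.int32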
As shown in FIG. 14B, a general-purpose graphics processing unit (GPGPU) 1430 can be configured to enable highly parallel compute operations to be performed by an array of graphics processing units. Additionally, the GPGPU 1430 can be linked directly to other instances of the GPGPU to create a multi-GPU cluster to improve training speed for particularly deep neural networks. The GPGPU 1430 includes a host interface 1432 to enable a connection with a host processor. In one embodiment the host interface 1432 is a PCI Express interface. However, the host interface can also be a vendor-specific communications interface or communications fabric. The GPGPU 1430 receives commands from the host processor and uses a global scheduler 1434 to distribute execution threads associated with those commands to a set of compute clusters 1436A-1436H. The compute clusters 1436A-1436H share a cache memory 1438. The cache memory 1438 can serve as a higher-level cache for cache memories within the compute clusters 1436A-1436H.

The GPGPU 1430 includes memory 1434A-1434B coupled with the compute clusters 1436A-1436H via a set of memory controllers 1442A-1442B. In various embodiments, the memory 1434A-1434B can include various types of memory devices including dynamic random-access memory (DRAM) or graphics random access memory, such as synchronous graphics random access memory (SGRAM), including graphics double data rate (GDDR) memory.

In one embodiment the compute clusters 1436A-1436H each include a set of graphics cores, such as the graphics core 1400 of FIG. 14A, which can include multiple types of integer and floating-point logic units that can perform computational operations at a range of precisions, including precisions suited for machine learning computations. For example, and in one embodiment, at least a subset of the floating-point units in each of the compute clusters 1436A-1436H can be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating-point units can be configured to perform 64-bit floating point operations.

Multiple instances of the GPGPU 1430 can be configured to operate as a compute cluster. The communication mechanism used by the compute cluster for synchronization and data exchange varies across embodiments. In one embodiment the multiple instances of the GPGPU 1430 communicate over the host interface 1432. In one embodiment the GPGPU 1430 includes an I/O hub 1439 that couples the GPGPU 1430 with a GPU link 1440 that enables a direct connection to other instances of the GPGPU. In one embodiment the GPU link 1440 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of the GPGPU 1430. In one embodiment the GPU link 1440 couples with a high-speed interconnect to transmit and receive data to other GPGPUs or parallel processors. In one embodiment the multiple instances of the GPGPU 1430 are located in separate data processing systems and communicate via a network device that is accessible via the host interface 1432. In one embodiment the GPU link 1440 can be configured to enable a connection to a host processor in addition to or as an alternative to the host interface 1432.

While the illustrated configuration of the GPGPU 1430 can be configured to train neural networks, one embodiment provides an alternate configuration of the GPGPU 1430 that can be configured for deployment within a high performance or low power inferencing platform. In an inferencing configuration, the GPGPU 1430 includes fewer of the compute clusters 1436A-1436H relative to the training configuration. Additionally, the memory technology associated with the memory 1434A-1434B may differ between inferencing and training configurations, with higher-bandwidth memory technologies devoted to training configurations. In one embodiment the inferencing configuration of the GPGPU 1430 can support inferencing-specific instructions. For example, an inferencing configuration can provide support for one or more 8-bit integer dot product instructions, which are commonly used during inferencing operations for deployed neural networks.

Machine Learning Overview

A machine learning algorithm is an algorithm that can learn based on a set of data. Embodiments of machine learning algorithms can be designed to model high-level abstractions within a data set.
For example, image recognition algorithms can be used to determine to which of several categories a given input belongs; regression algorithms can output a numerical value given an input; and pattern recognition algorithms can be used to generate translated text or to perform text-to-speech and/or speech recognition.

An exemplary type of machine learning algorithm is a neural network. There are many types of neural networks; a simple type of neural network is a feedforward network. A feedforward network may be implemented as an acyclic graph in which the nodes are arranged in layers. Typically, a feedforward network topology includes an input layer and an output layer that are separated by at least one hidden layer. The hidden layer transforms input received by the input layer into a representation that is useful for generating output in the output layer. The network nodes are fully connected via edges to the nodes in adjacent layers, but there are no edges between nodes within each layer. Data received at the nodes of an input layer of a feedforward network are propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that calculates the states of the nodes of each successive layer in the network based on coefficients ("weights") respectively associated with each of the edges connecting the layers. Depending on the specific model being represented by the algorithm being executed, the output from the neural network algorithm can take various forms.

Before a machine learning algorithm can be used to model a particular problem, the algorithm is trained using a training data set. Training a neural network involves selecting a network topology, using a set of training data representing a problem being modeled by the network, and adjusting the weights until the network model performs with a minimal error for all instances of the training data set. For example, during a supervised learning training process for a neural network, the output produced by the network in response to the input representing an instance in a training data set is compared to the "correct" labeled output for that instance, an error signal representing the difference between the output and the labeled output is calculated, and the weights associated with the connections are adjusted to minimize that error as the error signal is backward propagated through the layers of the network. The network is considered "trained" when the errors for each of the outputs generated from the instances of the training data set are minimized.

The accuracy of a machine learning algorithm can be affected significantly by the quality of the data set used to train the algorithm. The training process can be computationally intensive and may require a significant amount of time on a conventional general-purpose processor. Accordingly, parallel processing hardware is used to train many types of machine learning algorithms. This is particularly useful for optimizing the training of neural networks, as the computations performed in adjusting the coefficients in neural networks lend themselves naturally to parallel implementations. Specifically, many machine learning algorithms and software applications have been adapted to make use of the parallel processing hardware within general-purpose graphics processing devices.
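As a concrete miniature of the feedforward and supervised training process just described, the following numpy sketch trains a one-hidden-layer network with gradient descent on toy data. Every size, the learning rate, and the loss are illustrative; the point is that both the forward pass and the weight updates reduce to the matrix operations that parallelize well on the hardware described above.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))        # training inputs
Y = rng.normal(size=(16, 1))        # "correct" labeled outputs
W1 = rng.normal(size=(4, 8)) * 0.5  # edge weights, input -> hidden
W2 = rng.normal(size=(8, 1)) * 0.5  # edge weights, hidden -> output
lr = 0.01                           # learning rate

for step in range(200):
    H = np.tanh(X @ W1)                    # hidden layer (activation function)
    P = H @ W2                             # network output ("fed forward")
    E = P - Y                              # error signal vs. labeled output
    G2 = H.T @ E                           # gradient for hidden -> output weights
    G1 = X.T @ ((E @ W2.T) * (1 - H**2))   # error backward propagated to W1
    W1 -= lr * G1                          # adjust weights to reduce error
    W2 -= lr * G2

print("mean squared error:", float((E ** 2).mean()))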
FIG. 15 is a generalized diagram of a machine learning software stack 1500. A machine learning application 1502 can be configured to train a neural network using a training dataset or to use a trained deep neural network to implement machine intelligence. The machine learning application 1502 can include training and inference functionality for a neural network and/or specialized software that can be used to train a neural network before deployment. The machine learning application 1502 can implement any type of machine intelligence including but not limited to image recognition, mapping and localization, autonomous navigation, speech synthesis, medical imaging, or language translation.

Hardware acceleration for the machine learning application 1502 can be enabled via a machine learning framework 1504. The machine learning framework 1504 can provide a library of machine learning primitives. Machine learning primitives are basic operations that are commonly performed by machine learning algorithms. Without the machine learning framework 1504, developers of machine learning algorithms would be required to create and optimize the main computational logic associated with the machine learning algorithm, then re-optimize the computational logic as new parallel processors are developed. Instead, the machine learning application can be configured to perform the necessary computations using the primitives provided by the machine learning framework 1504. Exemplary primitives include tensor convolutions, activation functions, and pooling, which are computational operations that are performed while training a convolutional neural network (CNN). The machine learning framework 1504 can also provide primitives to implement basic linear algebra subprograms performed by many machine learning algorithms, such as matrix and vector operations.

The machine learning framework 1504 can process input data received from the machine learning application 1502 and generate the appropriate input to a compute framework 1506. The compute framework 1506 can abstract the underlying instructions provided to the GPGPU driver 1508 to enable the machine learning framework 1504 to take advantage of hardware acceleration via the GPGPU hardware 1510 without requiring the machine learning framework 1504 to have intimate knowledge of the architecture of the GPGPU hardware 1510. Additionally, the compute framework 1506 can enable hardware acceleration for the machine learning framework 1504 across a variety of types and generations of the GPGPU hardware 1510.

Machine Learning Neural Network Implementations

The computing architecture provided by embodiments described herein can be configured to perform the types of parallel processing that are particularly suited for training and deploying neural networks for machine learning. A neural network can be generalized as a network of functions having a graph relationship. As is known in the art, there are a variety of types of neural network implementations used in machine learning. One exemplary type of neural network is the feedforward network, as previously described.

A second exemplary type of neural network is the Convolutional Neural Network (CNN). A CNN is a specialized feedforward neural network for processing data having a known, grid-like topology, such as image data. Accordingly, CNNs are commonly used for computer vision and image recognition applications, but they also may be used for other types of pattern recognition such as speech and language processing.
The nodes in the CNN input layer are organized into a set of "filters" (feature detectors inspired by the receptive fields found in the retina), and the output of each set of filters is propagated to nodes in successive layers of the network. The computations for a CNN include applying the convolution mathematical operation to each filter to produce the output of that filter. Convolution is a specialized kind of mathematical operation performed on two functions to produce a third function that is a modified version of one of the two original functions. In convolutional network terminology, the first function to the convolution can be referred to as the input, while the second function can be referred to as the convolution kernel. The output may be referred to as the feature map. For example, the input to a convolution layer can be a multidimensional array of data that defines the various color components of an input image. The convolution kernel can be a multidimensional array of parameters, where the parameters are adapted by the training process for the neural network.

Recurrent neural networks (RNNs) are a family of feedforward neural networks that include feedback connections between layers. RNNs enable modeling of sequential data by sharing parameter data across different parts of the neural network. The architecture for a RNN includes cycles. The cycles represent the influence of a present value of a variable on its own value at a future time, as at least a portion of the output data from the RNN is used as feedback for processing subsequent input in a sequence. This feature makes RNNs particularly useful for language processing due to the variable nature in which language data can be composed.

The figures described below present exemplary feedforward, CNN, and RNN networks, as well as describe a general process for respectively training and deploying each of those types of networks. It will be understood that these descriptions are exemplary and non-limiting as to any specific embodiment described herein, and the concepts illustrated can be applied generally to deep neural networks and machine learning techniques in general.

The exemplary neural networks described above can be used to perform deep learning. Deep learning is machine learning using deep neural networks. The deep neural networks used in deep learning are artificial neural networks composed of multiple hidden layers, as opposed to shallow neural networks that include only a single hidden layer. Deeper neural networks are generally more computationally intensive to train. However, the additional hidden layers of the network enable multistep pattern recognition that results in reduced output error relative to shallow machine learning techniques.

Deep neural networks used in deep learning typically include a front-end network to perform feature recognition coupled to a back-end network which represents a mathematical model that can perform operations (e.g., object classification, speech recognition, etc.) based on the feature representation provided to the model. Deep learning enables machine learning to be performed without requiring hand-crafted feature engineering to be performed for the model. Instead, deep neural networks can learn features based on statistical structure or correlation within the input data. The learned features can be provided to a mathematical model that can map detected features to an output.
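As an aside, the convolution operation described above can be sketched minimally in C for a single-channel, two-dimensional case. The names are illustrative only; "valid" padding and a stride of one are assumed, and, as is common in CNN implementations, the kernel is not flipped (strictly making this a cross-correlation):

    /* Single-channel 2-D convolution ("valid" padding, stride 1):
       the kernel is slid over the input and a dot product is taken
       at each position, producing one feature-map element. */
    void conv2d(const float *in, int ih, int iw,
                const float *k, int kh, int kw,
                float *out /* (ih-kh+1) x (iw-kw+1) */)
    {
        int oh = ih - kh + 1, ow = iw - kw + 1;
        for (int y = 0; y < oh; y++)
            for (int x = 0; x < ow; x++) {
                float s = 0.0f;
                for (int dy = 0; dy < kh; dy++)
                    for (int dx = 0; dx < kw; dx++)
                        s += in[(y + dy) * iw + (x + dx)] * k[dy * kw + dx];
                out[y * ow + x] = s;  /* one feature-map element */
            }
    }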
The mathematical model used by the network is generally specialized for the specific task to be performed, and different models will be used to perform different tasks.

Once the neural network is structured, a learning model can be applied to the network to train the network to perform specific tasks. The learning model describes how to adjust the weights within the model to reduce the output error of the network. Backpropagation of errors is a common method used to train neural networks. An input vector is presented to the network for processing. The output of the network is compared to the desired output using a loss function and an error value is calculated for each of the neurons in the output layer. The error values are then propagated backwards until each neuron has an associated error value which roughly represents its contribution to the original output. The network can then learn from those errors using an algorithm, such as the stochastic gradient descent algorithm, to update the weights of the neural network.

FIGS. 16A-16B illustrate an exemplary convolutional neural network. FIG. 16A illustrates various layers within a CNN. As shown in FIG. 16A, an exemplary CNN used to model image processing can receive input 1602 describing the red, green, and blue (RGB) components of an input image. The input 1602 can be processed by multiple convolutional layers (e.g., first convolutional layer 1604, second convolutional layer 1606). The output from the multiple convolutional layers may optionally be processed by a set of fully connected layers 1608. Neurons in a fully connected layer have full connections to all activations in the previous layer, as previously described for a feedforward network. The output from the fully connected layers 1608 can be used to generate an output result from the network. The activations within the fully connected layers 1608 can be computed using matrix multiplication instead of convolution. Not all CNN implementations make use of fully connected layers 1608. For example, in some implementations the second convolutional layer 1606 can generate output for the CNN.

The convolutional layers are sparsely connected, which differs from the traditional neural network configuration found in the fully connected layers 1608. Traditional neural network layers are fully connected, such that every output unit interacts with every input unit. However, the convolutional layers are sparsely connected because the output of the convolution of a field is input (instead of the respective state value of each of the nodes in the field) to the nodes of the subsequent layer, as illustrated. The kernels associated with the convolutional layers perform convolution operations, the output of which is sent to the next layer. The dimensionality reduction performed within the convolutional layers is one aspect that enables the CNN to scale to process large images.

FIG. 16B illustrates exemplary computation stages within a convolutional layer of a CNN. Input to a convolutional layer 1612 of a CNN can be processed in three stages of a convolutional layer 1614. The three stages can include a convolution stage 1616, a detector stage 1618, and a pooling stage 1620, as detailed below. The convolution layer 1614 can then output data to a successive convolutional layer.
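The detector and pooling stages listed above, which are described in detail in the following paragraphs, can be sketched minimally in C. The names are illustrative only; a ReLU detector and 2x2 max pooling are assumed:

    /* Detector stage: rectified linear unit applied elementwise. */
    float relu(float x)
    {
        return x > 0.0f ? x : 0.0f;   /* f(x) = max(0, x) */
    }

    /* Pooling stage: 2x2 max pooling over an h x w feature map,
       producing an (h/2) x (w/2) summary of nearby outputs. */
    void max_pool_2x2(const float *in, int h, int w, float *out)
    {
        for (int y = 0; y < h / 2; y++)
            for (int x = 0; x < w / 2; x++) {
                float m = in[(2 * y) * w + 2 * x];
                if (in[(2 * y) * w + 2 * x + 1] > m)     m = in[(2 * y) * w + 2 * x + 1];
                if (in[(2 * y + 1) * w + 2 * x] > m)     m = in[(2 * y + 1) * w + 2 * x];
                if (in[(2 * y + 1) * w + 2 * x + 1] > m) m = in[(2 * y + 1) * w + 2 * x + 1];
                out[y * (w / 2) + x] = m;   /* summary statistic: max */
            }
    }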
The final convolutional layer of the network can generate output feature map data or provide input to a fully connected layer, for example, to generate a classification value for the input to the CNN.

The convolution stage 1616 performs several convolutions in parallel to produce a set of linear activations. The convolution stage 1616 can include an affine transformation, which is any transformation that can be specified as a linear transformation plus a translation. Affine transformations include rotations, translations, scaling, and combinations of these transformations. The convolution stage computes the output of functions (e.g., neurons) that are connected to specific regions in the input, which can be determined as the local region associated with the neuron. The neurons compute a dot product between the weights of the neurons and the region in the local input to which the neurons are connected. The output from the convolution stage 1616 defines a set of linear activations that are processed by successive stages of the convolutional layer 1614.

The linear activations can be processed by a detector stage 1618. In the detector stage 1618, each linear activation is processed by a non-linear activation function. The non-linear activation function increases the nonlinear properties of the overall network without affecting the receptive fields of the convolution layer. Several types of non-linear activation functions may be used. One particular type is the rectified linear unit (ReLU), which uses an activation function defined as f(x) = max(0,x), such that the activation is thresholded at zero.

The pooling stage 1620 uses a pooling function that replaces the output of the second convolutional layer 1606 with a summary statistic of the nearby outputs. The pooling function can be used to introduce translation invariance into the neural network, such that small translations to the input do not change the pooled outputs. Invariance to local translation can be useful in scenarios where the presence of a feature in the input data is more important than the precise location of the feature. Various types of pooling functions can be used during the pooling stage 1620, including max pooling, average pooling, and L2-norm pooling. Additionally, some CNN implementations do not include a pooling stage. Instead, such implementations substitute an additional convolution stage having an increased stride relative to previous convolution stages.

The output from the convolutional layer 1614 can then be processed by the next layer 1622. The next layer 1622 can be an additional convolutional layer or one of the fully connected layers 1608. For example, the first convolutional layer 1604 of FIG. 16A can output to the second convolutional layer 1606, while the second convolutional layer can output to a first layer of the fully connected layers 1608.

FIG. 17 illustrates an exemplary recurrent neural network. In a recurrent neural network (RNN), the previous state of the network influences the output of the current state of the network. RNNs can be built in a variety of ways using a variety of functions. The use of RNNs generally revolves around using mathematical models to predict the future based on a prior sequence of inputs. For example, an RNN may be used to perform statistical language modeling to predict an upcoming word given a previous sequence of words.
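A single recurrent time step of the kind described in the following paragraph can be sketched minimally in C. The names are illustrative only, and a hyperbolic-tangent nonlinearity is assumed as one common choice:

    #include <math.h>

    /* One recurrent time step: the new hidden state is a nonlinear
       function of the current input and the previous hidden state,
       e.g. s_t = tanh(U * x_t + W * s_(t-1)). */
    void rnn_step(const float *x, int n_in,
                  const float *s_prev, int n_hid,
                  const float *U,   /* n_hid x n_in,  input weights     */
                  const float *W,   /* n_hid x n_hid, recurrent weights */
                  float *s_new)
    {
        for (int j = 0; j < n_hid; j++) {
            float a = 0.0f;
            for (int i = 0; i < n_in; i++)
                a += U[j * n_in + i] * x[i];        /* input term   */
            for (int i = 0; i < n_hid; i++)
                a += W[j * n_hid + i] * s_prev[i];  /* feedback term */
            s_new[j] = tanhf(a);
        }
    }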
The illustrated RNN 1700 can be described as having an input layer 1702 that receives an input vector, hidden layers 1704 to implement a recurrent function, a feedback mechanism 1705 to enable a 'memory' of previous states, and an output layer 1706 to output a result. The RNN 1700 operates based on time-steps. The state of the RNN at a given time step is influenced based on the previous time step via the feedback mechanism 1705. For a given time step, the state of the hidden layers 1704 is defined by the previous state and the input at the current time step. An initial input (x1) at a first time step can be processed by the hidden layer 1704. A second input (x2) can be processed by the hidden layer 1704 using state information that is determined during the processing of the initial input (x1). A given state can be computed as st = f(Uxt + Wst-1), where U and W are parameter matrices. The function f is generally a nonlinearity, such as the hyperbolic tangent function (Tanh) or a variant of the rectifier function f(x) = max(0,x). However, the specific mathematical function used in the hidden layers 1704 can vary depending on the specific implementation details of the RNN 1700.

In addition to the basic CNN and RNN networks described, variations on those networks may be enabled. One example RNN variant is the long short-term memory (LSTM) RNN. LSTM RNNs are capable of learning long-term dependencies that may be necessary for processing longer sequences of language. A variant on the CNN is a convolutional deep belief network, which has a structure similar to a CNN and is trained in a manner similar to a deep belief network. A deep belief network (DBN) is a generative neural network that is composed of multiple layers of stochastic (random) variables. DBNs can be trained layer-by-layer using greedy unsupervised learning. The learned weights of the DBN can then be used to provide pre-trained neural networks by determining an optimal initial set of weights for the neural network.

FIG. 18 illustrates training and deployment of a deep neural network. Once a given network has been structured for a task, the neural network is trained using a training dataset 1802. Various training frameworks have been developed to enable hardware acceleration of the training process. For example, the machine learning framework 1504 of FIG. 15 may be configured as a training framework 1804. The training framework 1804 can hook into an untrained neural network 1806 and enable the untrained neural net to be trained using the parallel processing resources described herein to generate a trained neural network 1808. To start the training process, the initial weights may be chosen randomly or by pre-training using a deep belief network. The training cycle can then be performed in either a supervised or unsupervised manner.

Supervised learning is a learning method in which training is performed as a mediated operation, such as when the training dataset 1802 includes input paired with the desired output for the input, or where the training dataset includes input having known output and the output of the neural network is manually graded. The network processes the inputs and compares the resulting outputs against a set of expected or desired outputs. Errors are then propagated back through the system. The training framework 1804 can adjust the weights that control the untrained neural network 1806.
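The weight adjustment at the heart of this training loop can be sketched minimally in C as a stochastic-gradient-descent update for the weights of a single neuron. The names are illustrative only; delta stands for the back-propagated error signal for that neuron:

    /* Stochastic-gradient-descent weight update for one neuron:
       each weight is nudged against the gradient of the error with
       respect to that weight (here, delta * input for each edge). */
    void sgd_update(float *w, const float *in, int n_in,
                    float delta,  /* error signal for this neuron */
                    float lr      /* learning rate */)
    {
        for (int i = 0; i < n_in; i++)
            w[i] -= lr * delta * in[i];
    }

Repeating this update across all neurons and all training instances, pass after pass, is what drives the network toward the minimal-error state described above.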
The training framework 1804 can provide tools to monitor how well the untrained neural network 1806 is converging towards a model suitable for generating correct answers based on known input data. The training process occurs repeatedly as the weights of the network are adjusted to refine the output generated by the neural network. The training process can continue until the neural network reaches a statistically desired accuracy associated with a trained neural network 1808. The trained neural network 1808 can then be deployed to implement any number of machine learning operations.

Unsupervised learning is a learning method in which the network attempts to train itself using unlabeled data. Thus, for unsupervised learning the training dataset 1802 will include input data without any associated output data. The untrained neural network 1806 can learn groupings within the unlabeled input and can determine how individual inputs are related to the overall dataset. Unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 1807 capable of performing operations useful in reducing the dimensionality of data. Unsupervised training can also be used to perform anomaly detection, which allows the identification of data points in an input dataset that deviate from the normal patterns of the data.

Variations on supervised and unsupervised training may also be employed. Semi-supervised learning is a technique in which the training dataset 1802 includes a mix of labeled and unlabeled data of the same distribution. Incremental learning is a variant of supervised learning in which input data is continuously used to further train the model. Incremental learning enables the trained neural network 1808 to adapt to the new data 1812 without forgetting the knowledge instilled within the network during initial training.

Whether supervised or unsupervised, the training process for particularly deep neural networks may be too computationally intensive for a single compute node. Instead of using a single compute node, a distributed network of computational nodes can be used to accelerate the training process.

FIG. 19 is a block diagram illustrating distributed learning. Distributed learning is a training model that uses multiple distributed computing nodes to perform supervised or unsupervised training of a neural network. The distributed computational nodes can each include one or more host processors and one or more of the general-purpose processing nodes. As illustrated, distributed learning can be performed using model parallelism 1902, data parallelism 1904, or a combination of model and data parallelism 1906.

In model parallelism 1902, different computational nodes in a distributed system can perform training computations for different parts of a single network. For example, each layer of a neural network can be trained by a different processing node of the distributed system. The benefits of model parallelism include the ability to scale to particularly large models. Splitting the computations associated with different layers of the neural network enables the training of very large neural networks in which the weights of all layers would not fit into the memory of a single computational node.
In some instances, model parallelism can be particularly useful in performing unsupervised training of large neural networks.

In data parallelism 1904, the different nodes of the distributed network have a complete instance of the model and each node receives a different portion of the data. The results from the different nodes are then combined. While different approaches to data parallelism are possible, data parallel training approaches all require a technique of combining results and synchronizing the model parameters between each node. Exemplary approaches to combining data include parameter averaging and update-based data parallelism. Parameter averaging trains each node on a subset of the training data and sets the global parameters (e.g., weights, biases) to the average of the parameters from each node. Parameter averaging uses a central parameter server that maintains the parameter data. Update-based data parallelism is similar to parameter averaging except that instead of transferring parameters from the nodes to the parameter server, the updates to the model are transferred. Additionally, update-based data parallelism can be performed in a decentralized manner, where the updates are compressed and transferred between nodes.

Combined model and data parallelism 1906 can be implemented, for example, in a distributed system in which each computational node includes multiple GPUs. Each node can have a complete instance of the model, and separate GPUs within each node are used to train different portions of the model.

Distributed training has increased overhead relative to training on a single machine. However, the parallel processors and GPGPUs described herein can each implement various techniques to reduce the overhead of distributed training, including techniques to enable high bandwidth GPU-to-GPU data transfer and accelerated remote data synchronization.

Exemplary Machine Learning Applications

Machine learning can be applied to solve a variety of technological problems, including but not limited to computer vision, autonomous driving and navigation, speech recognition, and language processing. Computer vision has traditionally been one of the most active research areas for machine learning applications. Applications of computer vision range from reproducing human visual abilities, such as recognizing faces, to creating new categories of visual abilities. For example, computer vision applications can be configured to recognize sound waves from the vibrations induced in objects visible in a video. Parallel processor accelerated machine learning enables computer vision applications to be trained using significantly larger training datasets than previously feasible and enables inferencing systems to be deployed using low power parallel processors.

Parallel processor accelerated machine learning has autonomous driving applications including lane and road sign recognition, obstacle avoidance, navigation, and driving control. Accelerated machine learning techniques can be used to train driving models based on datasets that define the appropriate responses to specific training input. The parallel processors described herein can enable rapid training of the increasingly complex neural networks used for autonomous driving solutions and enable the deployment of low power inferencing processors in a mobile platform suitable for integration into autonomous vehicles.

Parallel processor accelerated deep neural networks have enabled machine learning approaches to automatic speech recognition (ASR).
ASR includes the creation of a function that computes the most probable linguistic sequence given an input acoustic sequence. Accelerated machine learning using deep neural networks has enabled the replacement of the hidden Markov models (HMMs) and Gaussian mixture models (GMMs) previously used for ASR.

Parallel processor accelerated machine learning can also be used to accelerate natural language processing. Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to erroneous or unfamiliar input. Exemplary natural language processor applications include automatic machine translation between human languages.

The parallel processing platforms used for machine learning can be divided into training platforms and deployment platforms. Training platforms are generally highly parallel and include optimizations to accelerate multi-GPU single node training and multi-node, multi-GPU training, while deployed machine learning (e.g., inferencing) platforms generally include lower power parallel processors suitable for use in products such as cameras, autonomous robots, and autonomous vehicles.

Compression for Sparse Data Structures Utilizing Mode Search Approximation

In some embodiments, a system includes compression for sparse data structures utilizing mode search approximation. The sparse data compression (which may also be referred to herein as machine learning (ML) compression) improves memory bandwidth of workloads including machine learning, allowing for increased power efficiency and performance.

In some embodiments, machine learning compression (a first compression operation) may be complementary to a second compression operation, such as delta compression, and the machine learning compression operation may operate together with (in parallel with) the second compression operation. A compressed vector may be taken from whichever of the compression operations is successful in generating a compressed output vector. In some embodiments, there may be a preference for use of the output of the machine learning compression operation if both compression operations are successful.

FIG. 20A is an illustration of compression for sparse data structures according to some embodiments. In some embodiments, a compression operation for sparse data structures (machine learning compression) includes two main parts:

(1) Mode Identification 2000 - Identification of a mode (most repeated value) in the incoming data, wherein the identification of the mode utilizes a highly efficient approximation operation to provide for rapid computation of the mode.

(2) Encoding of Output Vector 2010 - Encoding of a significance map, the identified mode value, and remaining uncompressed data (which may also be referred to herein as incompressible data) into a single outgoing vector of predetermined size.

In some embodiments, incoming data is processed to identify a mode in such data. In an example, an incoming uncompressed data vector is 128 bytes in size. With each data point being one byte in size, a brute force method to find the mode would require looking at 128 values sequentially and counting all the unique occurrences. This would require excessive processing time in a system, and thus is not a practical operation.

In some embodiments, in order to implement a compression process as fixed function hardware, an approximation is applied in identification of the mode.
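For reference, the brute-force (exact) mode search that such an approximation avoids may be sketched in C as follows. This sketch is illustrative only and corresponds to the sequential counting approach characterized above as impractical for fixed-function hardware:

    #include <stdint.h>

    /* Exact mode of 128 byte values by exhaustive counting: a
       256-entry histogram is built and scanned for the most
       frequent value. */
    uint8_t exact_mode(const uint8_t data[128])
    {
        int count[256] = {0};
        int best = 0;
        for (int i = 0; i < 128; i++)
            count[data[i]]++;            /* count unique occurrences */
        for (int v = 1; v < 256; v++)
            if (count[v] > count[best])
                best = v;                /* track most repeated value */
        return (uint8_t)best;
    }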
In some embodiments, an approximation algorithm is to efficiently select an approximation of the mode in an operation that examines less than all of the input data, provides a comparison of less than all of the examined data, or both. In some embodiments, an apparatus, system, or process is to construct a ternary tree where at each node there is a 3-way compare (a ternary comparison) that selects a repeating value between 3 inputs, or, if none exists, one of the inputs is returned, with the choice of the non-repeated value to be returned not being critical. In some embodiments, each of these node input values is a portion of a data value, such as a value that is only 2 bits in length, with these values being referred to as bit slices of the original 8-bit data points.

In some embodiments, an algorithm for an approximated mode search may be as follows:

    char find_eq(char a, char b, char c)
    {
        if (a == b)
            return a;
        else if (b == c)
            return b;
        else
            return c;
    }

In some embodiments, because the operation includes a ternary tree, a compromise is applied to choose a subset of the data points for search, such as selecting only 81 input data points out of 128 for search. In this manner the result is a balanced tree with only 4 comparison levels. In some embodiments, each of the subset of input values (81 input values) is separated into four 2-bit slices. There are four parallel ternary trees that compare these bit slices, each tree eventually producing the 2-bit output value that is most repeating. The four outputs are concatenated into a final 8-bit mode that approximates the most repeating value in the original 128 data points. In some embodiments, the 81 inputs to the mode finding algorithm are arranged in a manner, such as being evenly spread across the 128 data points, to provide a best quality approximation for the mode search.

If the three inputs to each node are a/b/c and are denoted with 0-26 for each top-level compare node, then the input-data-to-node mapping function can be defined as:

    a(i) = data(i)
    b(i) = data(i + 42)
    c(i) = data(i + 84), for i ∈ [0, 26]

In some embodiments, upon the mode being identified, a significance map, identified mode, and remaining uncompressed data are encoded in a vector, and the vector is outputted. In an example in which there are 128 values (a 128-byte uncompressed data structure), a 128-bit significance map is constructed, with one bit per input value. When the input value is equal to the mode, the bit for the respective data value is set. In some embodiments, data values that are not equal to the mode are concatenated into a single uncompressed data vector in the order in which they occur. Stated in another way, the repeating values are removed from the original data vector structure, but the original order of the data values is preserved. Upon completion of the compression operation, the three components, the components being the mode significance map, the mode value, and the uncompressed data, are combined into a compressed output vector.

In some embodiments, the compression may be determined to be successful when the significance map, identified mode value, and uncompressed data all fit in the desired output block size. Thus, in one example, in order to achieve 2:1 compression with 128 8-bit input values, it is necessary that there be at least 81 repeating values in the original data values. That is because the 128-bit significance map (16 bytes) and mode (1 byte) consume 17 bytes, leaving 47 bytes to store uncompressed data (for a total of 17 + 47 = 64 bytes).
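Combining the pieces above, one possible software model of the approximated mode search is given below. It assumes the 81-input, four-level, 2-bit-slice arrangement and the a/b/c index mapping described above; it is a sketch of the algorithm rather than the hardware implementation:

    #include <stdint.h>

    /* 3-way compare on one 2-bit slice: return a repeating value if
       one exists among a, b, c; otherwise return c (the choice of
       the non-repeated value is not critical). */
    static uint8_t find_eq3(uint8_t a, uint8_t b, uint8_t c)
    {
        if (a == b) return a;
        if (b == c) return b;
        return c;
    }

    /* Approximate the mode of 128 bytes: four parallel ternary
       trees, one per 2-bit slice, each fed 81 of the 128 values. */
    uint8_t approx_mode(const uint8_t data[128])
    {
        uint8_t mode = 0;
        for (int s = 0; s < 4; s++) {          /* slice s covers bits [2s+1:2s] */
            uint8_t v[81];
            for (int i = 0; i < 27; i++) {     /* a(i), b(i), c(i) per node */
                v[3 * i]     = (data[i]      >> (2 * s)) & 3;
                v[3 * i + 1] = (data[i + 42] >> (2 * s)) & 3;
                v[3 * i + 2] = (data[i + 84] >> (2 * s)) & 3;
            }
            int n = 81;
            while (n > 1) {                    /* 4 levels: 81->27->9->3->1 */
                for (int i = 0; i < n / 3; i++)
                    v[i] = find_eq3(v[3 * i], v[3 * i + 1], v[3 * i + 2]);
                n /= 3;
            }
            mode |= (uint8_t)(v[0] << (2 * s)); /* concatenate slice results */
        }
        return mode;
    }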
FIG. 20B is a flowchart to illustrate a process for selection of a compressed output vector according to some embodiments. In some embodiments, performance of a data compression operation 2030 may provide for parallel operation of at least two compression operations, wherein a first compression operation 2035 may be a machine learning compression operation, such as is illustrated in FIG. 21, and a second compression operation 2145 may be, for example, delta compression.

In some embodiments, determinations may be made whether the first compression operation is successful 2140 and whether the second compression operation is successful 2150, wherein a compressed output vector is generated if the compression is successful. In some embodiments, a special compression control surface (CCS) may be attached to each main surface that is compressed, the CCS including an encoding (such as a 4-bit encoding in one implementation) that is stored for every 128 bytes of main surface. The encoding is to indicate which of the compression operations, if any, is successful.

In some embodiments, a determination is made whether both of the compression operations are successful 2155, wherein the determination may be made according to the CCS encoding. If so, one of the compression output vectors is selected. In some embodiments, a particular compression operation is preferred, such as the machine learning compression, and the compression output vector of the preferred compression operation is selected 2160. If not, then if one of the compression operations is successful 2155, the compression output vector of the successful compression operation is selected 2170. If neither of the compression operations is successful, then the overall data compression operation fails 2175.

FIG. 21 is a flowchart to illustrate a process for compression of sparse data structures according to some embodiments. In some embodiments, a process for compression includes receipt of incoming data 2100, wherein the incoming data includes sparse data structures, including, but not limited to, machine learning data. In some embodiments, the compression operation may be implemented in parallel with one or more other compression operations, such as illustrated in FIG. 20B. A search of each of the received data structures is performed to determine a mode value, the mode value being the most repeated value in the data structure, with the search being performed using an approximation algorithm 2105. The approximation algorithm may include, but is not limited to, the ternary search algorithm illustrated in FIGS. 24 and 25.

In some embodiments, upon the mode value being determined in a data structure through operation of the approximation algorithm 2110, there is a determination whether a number of occurrences of the mode value in the data structure is greater than or equal to a threshold value X 2115, X being the minimum number of times that a mode must occur in order for the compression operation to be successful. If not, then the compression process fails 2120, and the process would continue as required without the completion of the data compression (such as providing the uncompressed data structure, or other action). Otherwise, the mode value is identified as the mode for the data structure 2125.

In some embodiments, a mode significance map is generated 2130, the mode significance map identifying the locations within the data structure where the mode is located.
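Although the significance map is detailed in the following paragraph, the overall encoding of FIG. 21 may be sketched as follows, assuming the 128-byte input and 64-byte output of the example above. The names, the bit ordering within the map, and the zero padding are illustrative choices, not mandated by the description:

    #include <stdint.h>
    #include <string.h>

    /* Encode 128 bytes as: 16-byte significance map + 1-byte mode +
       up to 47 bytes of non-mode values (64 bytes total).  Returns
       1 on success, 0 if more than 47 values differ from the mode. */
    int ml_compress(const uint8_t in[128], uint8_t mode, uint8_t out[64])
    {
        uint8_t map[16] = {0};
        uint8_t rest[128];
        int n_rest = 0;

        for (int i = 0; i < 128; i++) {
            if (in[i] == mode)
                map[i / 8] |= (uint8_t)(1u << (i % 8)); /* bit set: mode here */
            else
                rest[n_rest++] = in[i];                 /* keep original order */
        }
        if (n_rest > 47)
            return 0;                                   /* compression fails */

        memcpy(out, map, 16);
        out[16] = mode;
        memcpy(out + 17, rest, (size_t)n_rest);
        memset(out + 17 + n_rest, 0, (size_t)(47 - n_rest)); /* pad */
        return 1;
    }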
In some embodiments, each bit of the significance map identifies a location, and, if a bit is set, this indicates that the mode is present in this location. If a bit of the significance map is not set, this indicates that other data that remains uncompressed is present at that location. The significance map, the identified mode, and the remaining uncompressed data (wherein the remaining uncompressed data may be the remaining data concatenated together as a string) are encoded within a compressed output vector 2135, the vector having a particular size, and the compressed output vector 2140 is provided for transmission, processing, or storage.

FIG. 22 is a flowchart to illustrate a process for decompression of sparse data structures according to some embodiments. In some embodiments, in order to decompress data contained in a compressed vector 2200, such as a 64-byte compressed vector that has been compressed such as illustrated in FIG. 21, an apparatus, system, or process is to parse the compressed vector to obtain a significance map, a mode, and uncompressed data. The significance map is to identify locations for the mode in an uncompressed data vector, such as a 128-byte vector in an example. In some embodiments, each bit of the significance map that is set indicates that the mode is present at the respective byte location in the original vector, and each bit that is not set indicates that a byte of uncompressed data is present at the respective byte location in the original vector.

In some embodiments, the original uncompressed data vector is generated by using each bit of the significance map to determine whether the mode or a byte of uncompressed data is to be placed into the data vector. This is illustrated in FIG. 22 as setting a counter n as zero initially 2210, and determining whether bit location n in the significance map is set 2215. If so, then the mode is inserted at byte location n in a data vector 2220; and, if not, then a next uncompressed data byte from the compressed vector is inserted at byte location n in the data vector.

In some embodiments, there is then a determination whether the final bit of the significance map has been read, shown as a determination whether n is less than S minus 1 (S being the number of bits in the significance map, which may be 128 in an example) 2230. If so, then n is incremented 2235 and the next location is evaluated 2215. If not, then the decompression process is complete, and the uncompressed data vector is output 2240.

FIG. 23 is an illustration of an uncompressed data vector and a resulting compressed data vector according to some embodiments. As shown in FIG. 23, an uncompressed data vector 2300 is a vector of a certain size, which is 128 bytes in this example, bytes 0 through 127. It may be assumed that the data vector 2300 includes a certain repeating mode 2320. While for simplicity of illustration fewer occurrences are shown, for compression the mode is required to occur a certain number of times for the compressed data to fit within the compressed output vector, where the number is 81 in this illustrated example.

Also illustrated in FIG. 23 is the resulting compressed data vector 2350, such as a data vector generated by the process illustrated in FIG. 21. The compressed data vector 2350 is a certain predetermined size. In the illustrated example the compressed data vector 2350 is half the size of the uncompressed data vector 2300, thus being 64 bytes.
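Returning to the decompression procedure of FIG. 22, a minimal sketch that inverts the encoding sketched above is given below, again with illustrative names and the same assumed bit ordering:

    #include <stdint.h>

    /* Inverse of the encoding above: expand a 64-byte compressed
       vector back to 128 bytes using the significance map. */
    void ml_decompress(const uint8_t in[64], uint8_t out[128])
    {
        const uint8_t *map  = in;       /* significance map, bits [127:0] */
        uint8_t        mode = in[16];   /* mode value,       bits [135:128] */
        const uint8_t *rest = in + 17;  /* remaining uncompressed data */
        int r = 0;

        for (int n = 0; n < 128; n++) {
            if (map[n / 8] & (1u << (n % 8)))
                out[n] = mode;          /* bit set: mode at this byte */
            else
                out[n] = rest[r++];     /* next uncompressed byte, in order */
        }
    }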
As illustrated, the compressed data vector includes a significance map 2360 (16 bytes) indicating the locations of the mode 2320 (1 byte) in the original uncompressed data vector 2300, and further includes the remaining uncompressed data, which in this example is a maximum of 47 bytes (128 bytes minus at least 81 instances of the mode) in order for the significance map 2360, mode 2320, and uncompressed data 2370 to fit within the compressed data vector 2350.

In some embodiments, an approximation algorithm utilizes a parallel search of portions of the input data, such as the data vector 2300, with the search being a ternary search of data slices. It is noted that there are several design choices that may be made in order to adapt the algorithm to a particular data type in an embodiment. The specifics described in the example below are directed to an 8-bit data type and a 128-byte data structure as shown in FIG. 23.

In some embodiments, for the approximation algorithm, the number of input nodes to the mode calculation tree is rounded down to the maximum power of 3 that fits in the total number of input values. In practice this means that the algorithm will only search a subset of input values to determine the mode within the data structure. The size and number of bit slices is dictated by the data size. It may be determined experimentally that four 2-bit slices are an optimal design for 8-bit data. In such operation, the small size of the bit slice enables very simple comparators, translating into gate-count savings.

FIG. 24 is an illustration of a portion of a mode approximation algorithm according to some embodiments. FIG. 24 provides a portion of the algorithm that is illustrated in FIG. 25. In some embodiments, for the mode approximation algorithm 2400, two-bit slices of a data vector are compared in a hierarchy of ternary tree comparisons. For each comparison, three two-bit slices are compared using two comparators, a first comparator to compare a with b, and a second comparator to compare b with c. However, the choice and order of comparisons between the three slices may vary in different embodiments. As shown, the result of either comparison is returned for the next comparison if the bit slices are equal. Otherwise, one of the bit slices (c in the illustrated example) is returned.

As further illustrated in FIG. 24, the results of each of the bit slice comparisons 2410, the comparisons being a first comparison between a0, b0, and c0, a second comparison between a1, b1, and c1, and a third comparison between a2, b2, and c2, are then compared in the same fashion in the next level of the process, shown as the comparison between a, b, and c 2420.

FIG. 25 is an illustration of a mode approximation algorithm according to some embodiments. In some embodiments, a mode approximation algorithm 2500 provides for approximation of a mode within a particular data input, the data input 2505 in this example being 128 bytes of data, providing 128 values in an 8-bit-per-pixel format to be compressed. However, embodiments are not limited to this memory structure, and the implementation of the mode approximation algorithm may vary according to the structure of the data value to be compressed.

As shown in FIG. 25, the mode approximation algorithm 2500 includes a hierarchy of ternary tree comparisons. For each comparison, three two-bit slices are compared using two comparators, a first comparator to compare a with b, and a second comparator to compare b with c.
As illustrated, for a data input 2505 that includes 128 8-bit values, there may be four comparison levels in the hierarchy of comparisons, these being illustrated at level 2510, level 2520, level 2530, and level 2540. FIG. 25 specifically illustrates comparisons for a first set of two bits [1,0] for each value to be compared. As shown in FIG. 24, for each comparison, three two-bit slices are compared using two comparators, such as a first comparator to compare a with b, and a second comparator to compare b with c. As shown, the result of either comparison is returned for the next comparison if the bit slices are equal. Otherwise, one of the bit slices (c in the illustrated example) is returned.

In some embodiments, a subset of the values in the input data is selected for the mode approximation algorithm 2500. In some embodiments, the number of values to be compared is selected to enable searching utilizing the comparisons available in the hierarchy of comparisons. In level 2510 there are a total of 27 sets of three 2-bit slices for comparison, which represents bit slices for 81 (27 x 3) of the 128 values. For efficiency of operation, including enabling parallel processing of all or part of each level of the algorithm, a reduced set of 81 of the 128 values is utilized for comparison. In some embodiments, the values to be selected for comparison are spread through the full set of values. In the example illustrated in FIG. 25, this is accomplished by choosing a value ai beginning at a first index i, a value bi beginning at a second index i + 42, and a value ci beginning at a third index i + 84. However, embodiments are not limited to these particular selections.

FIG. 25 specifically illustrates comparisons for a first set of two bits [1,0] for each value to be compared, resulting in the Final [1,0] value in level 2550, wherein there will also be comparisons for each of the other bit slices, resulting in Final [3,2], Final [5,4], and Final [7,6] values to be concatenated into Final Mode [7,0] 2560.

Upon generation of the Final Mode [7,0] 2560, a compressed data value 2570 is generated, which may be as illustrated in FIG. 23. As shown in FIG. 25, the compressed data value 2570 includes a significance map [127:0], with each bit in the significance map to indicate whether a corresponding 8-bit value in the input data value 2505 is equal to the identified mode; the identified mode value [135:128]; and the uncompressible data [511:136]. As indicated in FIG. 25, values that are not equal to the mode are placed in the uncompressible data in the original order, and, for the particular implementation illustrated in FIG. 25, the compression will fail if more than 47 values of the data input 2505 are not equal to the mode.

FIG. 26 is an illustration of an apparatus or system to provide for compression for sparse data structures utilizing mode search approximation, according to some embodiments. As illustrated in FIG. 26, a computing system 2600, such as, for example, system 100 illustrated in FIG. 1, includes one or more processors 2605 for the processing of data, such as processors 102 including graphics processor(s) 108 as shown in FIG. 1. The computing system 2600 further includes memory 2610 for the storage of data and one or more elements for the transfer of data, such as interface bus 2615 and transceiver 2620.
In some embodiments, the transceiver 2620 is a wireless transceiver with one or more antennas 2625 for transmission and reception of data, wherein the antennas 2625 may include a dipole antenna or other antenna structures.

In some embodiments, the computing system 2600 further includes a data compression mechanism 2630 to perform compression, including a sparse structure compression engine 2635 to provide compression of sparse data structures, the sparse data structures for compression including, but not limited to, machine learning values. In some embodiments, the sparse structure compression engine 2635 provides for mode identification, in which a mode (most repeated value) in the incoming data is identified, the identification of the mode utilizing a highly efficient approximation operation to provide for rapid computation of the mode, and for encoding of an output vector, in which a significance map, the mode value, and remaining uncompressed data are encoded into a single outgoing vector of predetermined size. The sparse structure compression engine 2635 may provide compression and decompression operations as illustrated in FIGS. 20-25.

In some embodiments, an apparatus includes one or more processors including a graphics processor to process data; and a memory for storage of data, including compressed data, wherein the one or more processors are to provide for compression of a data structure, including identification of a mode in the data structure, the data structure including a plurality of values and the mode being a most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation operation, and encoding of an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

In some embodiments, the mode approximation operation includes a hierarchy of comparison levels, each comparison level of the hierarchy of comparison levels including one or more comparisons of at least a portion of two or more values of the plurality of values.

In some embodiments, each comparison level of the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values.

In some embodiments, each ternary comparison of the one or more ternary comparisons includes: comparing the first bit slice to the second bit slice and returning the first bit slice if the first bit slice and the second bit slice match; comparing the second bit slice to the third bit slice and returning the second bit slice if the second bit slice and the third bit slice match; and returning one of the first, second, and third bit slices if the first bit slice and the second bit slice do not match and the second bit slice and the third bit slice do not match.

In some embodiments, each bit slice includes two bits.

In some embodiments, the mode approximation operation includes comparison of less than all values of the plurality of values.

In some embodiments, the data structure includes 128 8-bit values, and the hierarchy of comparison levels includes 4 levels to compare 81 of the 128 values.

In some embodiments, the significance map includes a plurality of bits, each bit of the plurality of bits representing a respective
value of the plurality of values, and wherein a bit that is set indicates that the respective value for the bit contains the mode.

In some embodiments, the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in the order of the values in the data structure.

In some embodiments, the data structure is a data structure for machine learning.

In some embodiments, one or more non-transitory computer-readable storage mediums have stored thereon executable computer program instructions that, when executed by one or more processors, cause the one or more processors to perform operations including: identifying a mode in a data structure, the data structure including a plurality of values and the mode being a most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation algorithm, and encoding an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

In some embodiments, the mode approximation algorithm includes a hierarchy of comparison levels, each comparison level including one or more comparisons of at least a portion of two or more values of the plurality of values.

In some embodiments, each comparison level of the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values.

In some embodiments, each ternary comparison of the one or more ternary comparisons includes: comparing the first bit slice to the second bit slice and returning the first bit slice if the first bit slice and the second bit slice match; comparing the second bit slice to the third bit slice and returning the second bit slice if the second bit slice and the third bit slice match; and returning one of the first, second, and third bit slices if the first bit slice and the second bit slice do not match and the second bit slice and the third bit slice do not match.

In some embodiments, each bit slice includes two bits.

In some embodiments, the mode approximation algorithm includes comparison of less than all values of the plurality of values.

In some embodiments, the significance map includes a plurality of bits, each bit of the plurality of bits representing a respective value of the plurality of values, and wherein a bit that is set indicates that the respective value for the bit contains the mode.

In some embodiments, the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in the order of the values in the data structure.

In some embodiments, the data structure is a data structure for machine learning.

In some embodiments, the one or more mediums include instructions for performing a second compression operation in parallel with the compression operation; and selecting an output vector from either the compression operation or the second compression operation.

In some embodiments, the one or more mediums include instructions for decompressing the encoded output vector, including: parsing the output vector to obtain the significance map, mode, and uncompressed data; and inserting either the mode or a next
uncompressed data value at each of a plurality of locations based on the significance map.

In some embodiments, an apparatus includes means for identifying a mode in a data structure, the data structure including a plurality of values and the mode being a most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation algorithm, and means for encoding an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

In some embodiments, the mode approximation algorithm includes a hierarchy of comparison levels, each comparison level including one or more comparisons of at least a portion of two or more values of the plurality of values.

In some embodiments, each comparison level of the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values.

In some embodiments, each ternary comparison of the one or more ternary comparisons includes: comparing the first bit slice to the second bit slice and returning the first bit slice if the first bit slice and the second bit slice match; comparing the second bit slice to the third bit slice and returning the second bit slice if the second bit slice and the third bit slice match; and returning one of the first, second, and third bit slices if the first bit slice and the second bit slice do not match and the second bit slice and the third bit slice do not match.

In some embodiments, each bit slice includes two bits.

In some embodiments, the mode approximation algorithm includes comparison of less than all values of the plurality of values.

In some embodiments, the significance map includes a plurality of bits, each bit of the plurality of bits representing a respective value of the plurality of values, and wherein a bit that is set indicates that the respective value for the bit contains the mode.

In some embodiments, the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in the order of the values in the data structure.

In some embodiments, the data structure is a data structure for machine learning.

In some embodiments, the apparatus further includes means for performing a second compression operation in parallel with the compression operation; and means for selecting an output vector from either the compression operation or the second compression operation.

In some embodiments, the apparatus includes means for parsing the output vector to obtain the significance map, mode, and uncompressed data; and means for inserting either the mode or a next uncompressed data value at each of a plurality of locations based on the significance map.

In some embodiments, a method includes identifying a mode in a data structure, the data structure including a plurality of values and the mode being a most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation algorithm, and encoding an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

In some embodiments, the mode
approximation algorithm includes a hierarchy of comparison levels, each comparison level including one or more comparisons of at least a portion of two or more values of the plurality of values.

In some embodiments, each comparison level of the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values.

In some embodiments, each ternary comparison of the one or more ternary comparisons includes: comparing the first bit slice to the second bit slice and returning the first bit slice if the first bit slice and the second bit slice match; comparing the second bit slice to the third bit slice and returning the second bit slice if the second bit slice and the third bit slice match; and returning one of the first, second, and third bit slices if the first bit slice and the second bit slice do not match and the second bit slice and the third bit slice do not match.

In some embodiments, each bit slice includes two bits.

In some embodiments, the mode approximation algorithm includes comparison of less than all values of the plurality of values.

In some embodiments, the significance map includes a plurality of bits, each bit of the plurality of bits representing a respective value of the plurality of values, and wherein a bit that is set indicates that the respective value for the bit contains the mode.

In some embodiments, the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in the order of the values in the data structure.

In some embodiments, the data structure is a data structure for machine learning.

In some embodiments, the method includes performing a second compression operation in parallel with the compression operation; and selecting an output vector from either the compression operation or the second compression operation.

In some embodiments, the method includes decompressing the encoded output vector, including: parsing the output vector to obtain the significance map, mode, and uncompressed data; and inserting either the mode or a next uncompressed data value at each of a plurality of locations based on the significance map.

In some embodiments, a computing system includes one or more processors including a graphics processor to process machine learning data; a dynamic random access memory (DRAM) for storage of data; and a wireless transceiver and dipole antenna for transmission and reception of data, wherein the one or more processors are to provide for compression of a machine learning data structure, including identification of a mode in the data structure, the data structure including a plurality of values and the mode being a most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation operation, and encoding of an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.

In some embodiments, the mode approximation operation includes a hierarchy of comparison levels, each comparison level of the hierarchy of comparison levels including one or more comparisons of at least a portion of two or more values of the plurality of values.

In some embodiments, each comparison level of
the hierarchy of comparison levels includes one or more ternary comparisons between a first bit slice from a first value of the plurality of values, a second bit slice from a second value of the plurality of values, and a third bit slice from a third value of the plurality of values.

In some embodiments, the significance map includes a plurality of bits, each bit of the plurality of bits representing a respective value of the plurality of values, and wherein a bit that is set indicates that the respective value for the bit contains the mode.

In some embodiments, the remaining uncompressed data includes each value of the plurality of values that does not contain the mode, values within the remaining uncompressed data being stored in the order of the values in the data structure.

In the description above, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the described embodiments. It will be apparent, however, to one skilled in the art that embodiments may be practiced without some of these specific details. In other instances, well-known structures and devices are shown in block diagram form. There may be intermediate structure between illustrated components. The components described or illustrated herein may have additional inputs or outputs that are not illustrated or described.

Various embodiments may include various processes. These processes may be performed by hardware components or may be embodied in computer program or machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor or logic circuits programmed with the instructions to perform the processes. Alternatively, the processes may be performed by a combination of hardware and software.

Portions of various embodiments may be provided as a computer program product, which may include a computer-readable medium having stored thereon computer program instructions, which may be used to program a computer (or other electronic devices) for execution by one or more processors to perform a process according to certain embodiments. The computer-readable medium may include, but is not limited to, magnetic disks, optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or another type of computer-readable medium suitable for storing electronic instructions. Moreover, embodiments may also be downloaded as a computer program product, wherein the program may be transferred from a remote computer to a requesting computer. In some embodiments, a non-transitory computer-readable storage medium has stored thereon data representing sequences of instructions that, when executed by a processor, cause the processor to perform certain operations.

Many of the methods are described in their most basic form, but processes can be added to or deleted from any of the methods, and information can be added to or subtracted from any of the described messages, without departing from the basic scope of the present embodiments. It will be apparent to those skilled in the art that many further modifications and adaptations can be made. The particular embodiments are not provided to limit the concept but to illustrate it.
The scope of the embodiments is not to be determined by the specific examples provided above but only by the claims below.
If it is said that an element "A" is coupled to or with element "B," element A may be directly coupled to element B or be indirectly coupled through, for example, element C. When the specification or claims state that a component, feature, structure, process, or characteristic A "causes" a component, feature, structure, process, or characteristic B, it means that "A" is at least a partial cause of "B" but that there may also be at least one other component, feature, structure, process, or characteristic that assists in causing "B." If the specification indicates that a component, feature, structure, process, or characteristic "may", "might", or "could" be included, that particular component, feature, structure, process, or characteristic is not required to be included. If the specification or a claim refers to "a" or "an" element, this does not mean there is only one of the described elements.
An embodiment is an implementation or example. Reference in the specification to "an embodiment," "one embodiment," "some embodiments," or "other embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least some embodiments, but not necessarily all embodiments. The various appearances of "an embodiment," "one embodiment," or "some embodiments" are not necessarily all referring to the same embodiments. It should be appreciated that in the foregoing description of exemplary embodiments, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various novel aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, novel aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims are hereby expressly incorporated into this description, with each claim standing on its own as a separate embodiment.
A semiconductor chip comprising memory controller circuitry having interface circuitry to couple to a memory channel. The memory controller includes first logic circuitry to implement a first memory channel protocol on the memory channel. The first memory channel protocol is specific to a first, volatile system memory technology. The memory controller also includes second logic circuitry to implement a second memory channel protocol on the memory channel. The second memory channel protocol is specific to a second, non-volatile system memory technology. The second memory channel protocol is a transactional protocol.
1. A memory controller, comprising:
an interface to a memory channel that supports double data rate (DDR) volatile memory accesses; and
circuitry to limit outstanding write requests directed by the memory controller to a DIMM that is coupled to the memory channel, the circuitry to maintain a credit count and to decrement the credit count with a next write request, the circuitry to increment the credit count in response to the memory controller's reception of credit feedback information sent by the DIMM to the memory controller.
2. The memory controller of claim 1 wherein the credit feedback information is received by the memory controller on ECC signal lines of the memory channel.
3. The memory controller of claim 1 wherein the memory controller is integrated into a computing system comprising multiple CPU cores and a networking interface.
4. The memory controller of claim 3 wherein the computing system further comprises DRAM memory coupled to the memory channel.
5. The memory controller of claim 4 wherein the DRAM memory is to act as a memory side cache for the non-volatile memory.
6. A DIMM, comprising:
non-volatile memory;
a write buffer to temporarily store write requests directed to the DIMM by a memory controller; and
circuitry to send updates to the memory controller regarding the write buffer's amount of available storage space.
7. The DIMM of claim 6 further comprising an interface to an industry standard DDR memory channel.
8. The DIMM of claim 7 wherein the updates are sent over ECC signal lines of the memory channel.
9. The DIMM of claim 6 wherein the updates specify credits amounting to an amount of information that has been serviced from the write buffer and written into the non-volatile memory.
10. The DIMM of claim 6 wherein the non-volatile memory is a three-dimensional memory and preferably comprises resistive storage cells.
11. The DIMM of claim 6 wherein the DIMM is plugged into a computing system.
12. A method, comprising:
sending read requests to a first DIMM having volatile memory that is plugged into a memory channel, the sending of the read requests being performed according to an industry standard DDR protocol; and
sending write requests to a second DIMM having non-volatile memory that is also plugged into the memory channel and maintaining a credit count commensurate with the sending that indicates how much write buffer space is available on the second DIMM, the sending of the write requests to cease if the credit count reaches a level that indicates insufficient write buffer space is available to receive a next write request.
13. The method of claim 12 further comprising receiving feedback credit information from the second DIMM that indicates an amount of information that has been written into the non-volatile memory.
14. The method of claim 13 wherein the feedback credit information is sent over ECC lines of the memory channel.
15. The method of claim 13 wherein the non-volatile memory is a three-dimensional memory and preferably comprises resistive storage cells.
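As a rough model of the claimed credit mechanism, the sketch below gates writes on a credit count that is decremented per write and replenished by feedback from the DIMM. The class and method names, and the idea of modeling the DIMM as a callable object, are illustrative assumptions only, not the claimed hardware.

```python
# Hypothetical sketch of credit-based write flow control; names are assumed.

class WriteCreditController:
    """Memory-controller-side gate on writes to a DIMM's write buffer."""

    def __init__(self, initial_credits):
        self.credits = initial_credits   # one credit per free buffer slot

    def try_send_write(self, dimm, request):
        if self.credits == 0:
            return False                 # insufficient buffer space: stall
        self.credits -= 1                # decrement with the next write request
        dimm.accept_write(request)
        return True

    def on_credit_feedback(self, returned_credits):
        # Credit feedback (per the claims, potentially carried on ECC signal
        # lines) reports writes retired from the buffer into non-volatile memory.
        self.credits += returned_credits
```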
BACKGROUND
Field of the Invention
This invention relates generally to the field of computer systems. More particularly, the invention relates to an apparatus and method for implementing a multi-level memory hierarchy including a non-volatile memory tier.
Description of the Related Art
A. Current Memory and Storage Configurations
One of the limiting factors for computer innovation today is memory and storage technology. In conventional computer systems, system memory (also known as main memory, primary memory, executable memory) is typically implemented by dynamic random access memory (DRAM). DRAM-based memory consumes power even when no memory reads or writes occur because it must constantly recharge internal capacitors. DRAM-based memory is volatile, which means data stored in DRAM memory is lost once the power is removed. Conventional computer systems also rely on multiple levels of caching to improve performance. A cache is a high speed memory positioned between the processor and system memory to service memory access requests faster than they could be serviced from system memory. Such caches are typically implemented with static random access memory (SRAM). Cache management protocols may be used to ensure that the most frequently accessed data and instructions are stored within one of the levels of cache, thereby reducing the number of memory access transactions and improving performance.
With respect to mass storage (also known as secondary storage or disk storage), conventional mass storage devices typically include magnetic media (e.g., hard disk drives), optical media (e.g., compact disc (CD) drive, digital versatile disc (DVD), etc.), holographic media, and/or mass-storage flash memory (e.g., solid state drives (SSDs), removable flash drives, etc.). Generally, these storage devices are considered Input/Output (I/O) devices because they are accessed by the processor through various I/O adapters that implement various I/O protocols. These I/O adapters and I/O protocols consume a significant amount of power and can have a significant impact on the die area and the form factor of the platform. Portable or mobile devices (e.g., laptops, netbooks, tablet computers, personal digital assistants (PDAs), portable media players, portable gaming devices, digital cameras, mobile phones, smartphones, feature phones, etc.) that have limited battery life when not connected to a permanent power supply may include removable mass storage devices (e.g., Embedded Multimedia Card (eMMC), Secure Digital (SD) card) that are typically coupled to the processor via low-power interconnects and I/O controllers in order to meet active and idle power budgets.
With respect to firmware memory (such as boot memory (also known as BIOS flash)), a conventional computer system typically uses flash memory devices to store persistent system information that is read often but seldom (or never) written to. For example, the initial instructions executed by a processor to initialize key system components during a boot process (Basic Input and Output System (BIOS) images) are typically stored in a flash memory device. Flash memory devices that are currently available in the market generally have limited speed (e.g., 50 MHz). This speed is further reduced by the overhead for read protocols (e.g., 2.5 MHz). In order to speed up the BIOS execution speed, conventional processors generally cache a portion of BIOS code during the Pre-Extensible Firmware Interface (PEI) phase of the boot process.
The size of the processor cache places a restriction on the size of the BIOS code used in the PEI phase (also known as the "PEI BIOS code").
B. Phase-Change Memory (PCM) and Related Technologies
Phase-change memory (PCM), also sometimes referred to as phase change random access memory (PRAM or PCRAM), PCME, Ovonic Unified Memory, or Chalcogenide RAM (C-RAM), is a type of non-volatile computer memory which exploits the unique behavior of chalcogenide glass. As a result of heat produced by the passage of an electric current, chalcogenide glass can be switched between two states: crystalline and amorphous. Recent versions of PCM can achieve two additional distinct states.
PCM provides higher performance than flash because the memory element of PCM can be switched more quickly, writing (changing individual bits to either 1 or 0) can be done without the need to first erase an entire block of cells, and degradation from writes is slower (a PCM device may survive approximately 100 million write cycles; PCM degradation is due to thermal expansion during programming, metal (and other material) migration, and other mechanisms).
BRIEF DESCRIPTION OF THE DRAWINGS
The following description and accompanying drawings are used to illustrate embodiments of the invention. In the drawings:
FIG. 1 illustrates a cache and system memory arrangement according to one embodiment of the invention;
FIG. 2 illustrates a memory and storage hierarchy employed in one embodiment of the invention;
FIG. 3 illustrates a computer system on which embodiments of the invention may be implemented;
FIG. 4 illustrates an implementation of near memory cache and far memory on a same memory channel;
FIG. 5 illustrates a write process that can be performed on the near memory/far memory system observed in FIG. 4;
FIG. 6 illustrates a read process that can be performed on the near memory/far memory system observed in FIG. 4;
FIG. 7A illustrates a "near memory in front of" architecture for integrating near memory cache and far memory on a same memory channel;
FIGS. 7B-D illustrate processes that can be performed by the system of FIG. 7A;
FIG. 8A illustrates a "near memory in front of" architecture for integrating near memory cache and far memory on a same memory channel;
FIGS. 8B-D illustrate processes that can be performed by the system of FIG. 8A;
FIG. 9A illustrates application of memory channel wiring to support near memory accesses;
FIG. 9B illustrates application of memory channel wiring to support far memory accesses;
FIG. 10 illustrates a process for accessing near memory;
FIG. 11 illustrates an embodiment of far memory control logic circuitry;
FIGS. 12A-B illustrate atomic processes that may transpire on a memory channel that supports near memory accesses and far memory accesses.
DETAILED DESCRIPTION
In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention.
Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.
Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, dots) are sometimes used herein to illustrate optional operations/components that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations/components, and/or that blocks with solid borders are not optional in certain embodiments of the invention.
INTRODUCTION
Memory capacity and performance requirements continue to increase with an increasing number of processor cores and new usage models such as virtualization. In addition, memory power and cost have become a significant component of the overall power and cost, respectively, of electronic systems.
Some embodiments of the invention solve the above challenges by intelligently subdividing the performance requirement and the capacity requirement between memory technologies. The focus of this approach is on providing performance with a relatively small amount of a relatively higher-speed memory such as DRAM while implementing the bulk of the system memory using significantly cheaper and denser non-volatile random access memory (NVRAM). Embodiments of the invention described below define platform configurations that enable hierarchical memory subsystem organizations for the use of NVRAM. The use of NVRAM in the memory hierarchy also enables new usages such as expanded boot space and mass storage implementations, as described in detail below.
Figure 1 illustrates a cache and system memory arrangement according to embodiments of the invention. Specifically, Figure 1 shows a memory hierarchy including a set of internal processor caches 120, "near memory" acting as a far memory cache 121, which may include both internal cache(s) 106 and external caches 107-109, and "far memory" 122. One particular type of memory which may be used for "far memory" in some embodiments of the invention is non-volatile random access memory ("NVRAM"). As such, an overview of NVRAM is provided below, followed by an overview of far memory and near memory.
A. Non-Volatile Random Access Memory ("NVRAM")
There are many possible technology choices for NVRAM, including PCM, Phase Change Memory and Switch (PCMS) (the latter being a more specific implementation of the former), byte-addressable persistent memory (BPRAM), storage class memory (SCM), universal memory, Ge2Sb2Te5, programmable metallization cell (PMC), resistive memory (RRAM), RESET (amorphous) cell, SET (crystalline) cell, PCME, Ovshinsky memory, ferroelectric memory (also known as polymer memory and poly(N-vinylcarbazole)), ferromagnetic memory (also known as Spintronics, SPRAM (spin-transfer torque RAM), STRAM (spin tunneling RAM), magnetoresistive memory, magnetic memory, magnetic random access memory (MRAM)), and Semiconductor-oxide-nitride-oxide-semiconductor (SONOS, also known as dielectric memory).
NVRAM has the following characteristics:
(1) it maintains its content even if power is removed, similar to FLASH memory used in solid state disks (SSD), and different from SRAM and DRAM which are volatile;
(2) lower power consumption than volatile memories such as SRAM and DRAM;
(3) random access similar to SRAM and DRAM (also known as randomly addressable);
(4) rewritable and erasable at a lower level of granularity (e.g., byte level) than FLASH found in SSDs (which can only be rewritten and erased a "block" at a time - minimally 64 Kbyte in size for NOR FLASH and 16 Kbyte for NAND FLASH);
(5) usable as a system memory and allocated all or a portion of the system memory address space;
(6) capable of being coupled to the processor over a bus using a transactional protocol (a protocol that supports transaction identifiers (IDs) to distinguish different transactions so that those transactions can complete out-of-order) and allowing access at a level of granularity small enough to support operation of the NVRAM as system memory (e.g., cache line size such as 64 or 128 bytes). For example, the bus may be a memory bus (e.g., a DDR bus such as DDR3, DDR4, etc.) over which is run a transactional protocol as opposed to the non-transactional protocol that is normally used. As another example, the bus may be one over which is normally run a transactional protocol (a native transactional protocol), such as a PCI express (PCIE) bus, desktop management interface (DMI) bus, or any other type of bus utilizing a transactional protocol and a small enough transaction payload size (e.g., cache line size such as 64 or 128 bytes); and
(7) one or more of the following:
a) faster write speed than non-volatile memory/storage technologies such as FLASH;
b) very high read speed (faster than FLASH and near or equivalent to DRAM read speeds);
c) directly writable (rather than requiring erasing (overwriting with 1s) before writing data like FLASH memory used in SSDs); and/or
d) a greater number of writes before failure (more than boot ROM and FLASH used in SSDs).
As mentioned above, in contrast to FLASH memory, which must be rewritten and erased a complete "block" at a time, the level of granularity at which NVRAM is accessed in any given implementation may depend on the particular memory controller and the particular memory bus or other type of bus to which the NVRAM is coupled. For example, in some implementations where NVRAM is used as system memory, the NVRAM may be accessed at the granularity of a cache line (e.g., a 64-byte or 128-byte cache line), notwithstanding an inherent ability to be accessed at the granularity of a byte, because the cache line is the level at which the memory subsystem accesses memory.
Thus, when NVRAM is deployed within a memory subsystem, it may be accessed at the same level of granularity as the DRAM (e.g., the "near memory") used in the same memory subsystem. Even so, the level of granularity of access to the NVRAM by the memory controller and memory bus or other type of bus is smaller than that of the block size used by Flash and the access size of the I/O subsystem's controller and bus.
NVRAM may also incorporate wear leveling algorithms to account for the fact that the storage cells at the far memory level begin to wear out after a number of write accesses, especially where a significant number of writes may occur such as in a system memory implementation. Since high cycle count blocks are most likely to wear out in this manner, wear leveling spreads writes across the far memory cells by swapping addresses of high cycle count blocks with low cycle count blocks. Note that most address swapping is typically transparent to application programs because it is handled by hardware, lower-level software (e.g., a low level driver or operating system), or a combination of the two.
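As a loose illustration of such address swapping, the following sketch remaps a frequently written logical block onto a lightly written physical block once the write-count gap crosses a threshold. The table layout, threshold, and names are invented for illustration; this is not the disclosed algorithm, and the data migration between the swapped blocks is omitted.

```python
# Hypothetical wear-leveling sketch; table layout, threshold, and names
# are assumed. Copying data between swapped blocks is omitted for brevity.

class WearLeveler:
    def __init__(self, num_blocks, threshold=100_000):
        self.remap = list(range(num_blocks))  # logical block -> physical block
        self.writes = [0] * num_blocks        # cumulative writes per physical block
        self.threshold = threshold

    def physical_block(self, logical):
        return self.remap[logical]

    def record_write(self, logical):
        hot = self.remap[logical]
        self.writes[hot] += 1
        cold = min(range(len(self.writes)), key=self.writes.__getitem__)
        if self.writes[hot] - self.writes[cold] >= self.threshold:
            # Swap the addresses of the high- and low-cycle-count blocks so
            # future writes to this logical block land on the cold cells;
            # the swap stays transparent to application programs.
            other = self.remap.index(cold)    # logical block mapped to cold
            self.remap[logical], self.remap[other] = cold, hot
```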
B. Far Memory
The far memory 122 of some embodiments of the invention is implemented with NVRAM, but is not necessarily limited to any particular memory technology. Far memory 122 is distinguishable from other instruction and data memory/storage technologies in terms of its characteristics and/or its application in the memory/storage hierarchy. For example, far memory 122 is different from:
1) static random access memory (SRAM) which may be used for level 0 and level 1 internal processor caches 101a-b, 102a-b, 103a-b, and 104a-b dedicated to each of the processor cores 101-104, respectively, and lower level cache (LLC) 105 shared by the processor cores;
2) dynamic random access memory (DRAM) configured as a cache 106 internal to the processor 100 (e.g., on the same die as the processor 100) and/or configured as one or more caches 107-109 external to the processor (e.g., in the same or a different package from the processor 100);
3) FLASH memory/magnetic disk/optical disc applied as mass storage (not shown); and
4) memory such as FLASH memory or other read only memory (ROM) applied as firmware memory (which can refer to boot ROM, BIOS Flash, and/or TPM Flash) (not shown).
Far memory 122 may be used as instruction and data storage that is directly addressable by a processor 100 and is able to sufficiently keep pace with the processor 100 in contrast to FLASH/magnetic disk/optical disc applied as mass storage. Moreover, as discussed above and described in detail below, far memory 122 may be placed on a memory bus and may communicate directly with a memory controller that, in turn, communicates directly with the processor 100.
Far memory 122 may be combined with other instruction and data storage technologies (e.g., DRAM) to form hybrid memories (also known as Co-locating PCM and DRAM; first level memory and second level memory; FLAM (FLASH and DRAM)). Note that at least some of the above technologies, including PCM/PCMS, may be used for mass storage instead of, or in addition to, system memory, and need not be random accessible, byte addressable or directly addressable by the processor when applied in this manner.
For convenience of explanation, most of the remainder of the application will refer to "NVRAM" or, more specifically, "PCM," or "PCMS" as the technology selection for the far memory 122. As such, the terms NVRAM, PCM, PCMS, and far memory may be used interchangeably in the following discussion. However it should be realized, as discussed above, that different technologies may also be utilized for far memory. Also, NVRAM is not limited for use as far memory.
C. Near Memory
"Near memory" 121 is an intermediate level of memory configured in front of a far memory 122 that has lower read/write access latency relative to far memory and/or more symmetric read/write access latency (i.e., having read times which are roughly equivalent to write times). In some embodiments, the near memory 121 has significantly lower write latency than the far memory 122 but similar (e.g., slightly lower or equal) read latency; for instance, the near memory 121 may be a volatile memory such as volatile random access memory (VRAM) and may comprise a DRAM or other high speed capacitor-based memory. Note, however, that the underlying principles of the invention are not limited to these specific memory types. Additionally, the near memory 121 may have a relatively lower density and/or may be more expensive to manufacture than the far memory 122.
In one embodiment, near memory 121 is configured between the far memory 122 and the internal processor caches 120. In some of the embodiments described below, near memory 121 is configured as one or more memory-side caches (MSCs) 107-109 to mask the performance and/or usage limitations of the far memory including, for example, read/write latency limitations and memory degradation limitations. In these implementations, the combination of the MSC 107-109 and far memory 122 operates at a performance level which approximates, is equivalent to, or exceeds a system which uses only DRAM as system memory. As discussed in detail below, although shown as a "cache" in Figure 1, the near memory 121 may include modes in which it performs other roles, either in addition to, or in lieu of, performing the role of a cache.
Near memory 121 can be located on the processor die (as cache(s) 106) and/or located external to the processor die (as caches 107-109) (e.g., on a separate die located on the CPU package, located outside the CPU package with a high bandwidth link to the CPU package, for example, on a memory dual in-line memory module (DIMM), a riser/mezzanine, or a computer motherboard). The near memory 121 may be coupled in communication with the processor 100 using a single or multiple high bandwidth links, such as DDR or other transactional high bandwidth links (as described in detail below).
AN EXEMPLARY SYSTEM MEMORY ALLOCATION SCHEME
Figure 1 illustrates how various levels of caches 101-109 are configured with respect to a system physical address (SPA) space 116-119 in embodiments of the invention. As mentioned, this embodiment comprises a processor 100 having one or more cores 101-104, with each core having its own dedicated upper level cache (L0) 101a-104a and mid-level cache (MLC) (L1) 101b-104b. The processor 100 also includes a shared LLC 105. The operation of these various cache levels is well understood and will not be described in detail here.
The caches 107-109 illustrated in Figure 1 may be dedicated to a particular system memory address range or a set of non-contiguous address ranges. For example, cache 107 is dedicated to acting as an MSC for system memory address range #1 116 and caches 108 and 109 are dedicated to acting as MSCs for non-overlapping portions of system memory address ranges #2 117 and #3 118.
The latter implementation may be used for systems in which the SPA space used by the processor 100 is interleaved into an address space used by the caches 107-109 (e.g., when configured as MSCs). In some embodiments, this latter address space is referred to as a memory channel address (MCA) space. In one embodiment, the internal caches 101a-106 perform caching operations for the entire SPA space.
System memory as used herein is memory which is visible to and/or directly addressable by software executed on the processor 100; while the cache memories 101a-109 may operate transparently to the software in the sense that they do not form a directly-addressable portion of the system address space, the cores may also support execution of instructions to allow software to provide some control (configuration, policies, hints, etc.) to some or all of the cache(s). The subdivision of system memory into regions 116-119 may be performed manually as part of a system configuration process (e.g., by a system designer) and/or may be performed automatically by software.
In one embodiment, the system memory regions 116-119 are implemented using far memory (e.g., PCM) and, in some embodiments, near memory configured as system memory. System memory address range #4 represents an address range which is implemented using a higher speed memory such as DRAM which may be a near memory configured in a system memory mode (as opposed to a caching mode).
Figure 2 illustrates a memory/storage hierarchy 140 and different configurable modes of operation for near memory 144 and NVRAM according to embodiments of the invention. The memory/storage hierarchy 140 has multiple levels including (1) a cache level 150 which may include processor caches 150A (e.g., caches 101A-105 in Figure 1) and optionally near memory as cache for far memory 150B (in certain modes of operation as described herein), (2) a system memory level 151 which may include far memory 151B (e.g., NVRAM such as PCM) when near memory is present (or just NVRAM as system memory 174 when near memory is not present), and optionally near memory operating as system memory 151A (in certain modes of operation as described herein), (3) a mass storage level 152 which may include a flash/magnetic/optical mass storage 152B and/or NVRAM mass storage 152A (e.g., a portion of the NVRAM 142); and (4) a firmware memory level 153 that may include BIOS flash 170 and/or BIOS NVRAM 172 and optionally trusted platform module (TPM) NVRAM 173.
As indicated, near memory 144 may be implemented to operate in a variety of different modes including: a first mode in which it operates as a cache for far memory (near memory as cache for FM 150B); a second mode in which it operates as system memory 151A and occupies a portion of the SPA space (sometimes referred to as near memory "direct access" mode); and one or more additional modes of operation such as a scratchpad memory 192 or as a write buffer 193.
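One compact way to picture these configurable roles is as a mode flag that is set per near memory partition; the enumeration below is a hypothetical sketch and does not correspond to any actual register encoding in the disclosure.

```python
from enum import Enum

class NearMemoryMode(Enum):
    """Hypothetical labels for the near memory roles described above."""
    CACHE_FOR_FAR_MEMORY = 1   # near memory as cache for FM 150B
    SYSTEM_MEMORY = 2          # "direct access" mode within the SPA space
    SCRATCHPAD = 3             # scratchpad memory 192
    WRITE_BUFFER = 4           # write buffer 193
```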
In some embodiments of the invention, the near memory is partitionable, where each partition may concurrently operate in a different one of the supported modes; and different embodiments may support configuration of the partitions (e.g., sizes, modes) by hardware (e.g., fuses, pins), firmware, and/or software (e.g., through a set of programmable range registers within the MSC controller 124 within which, for example, may be stored different binary codes to identify each mode and partition).
System address space A 190 in Figure 2 is used to illustrate operation when near memory is configured as an MSC for far memory 150B. In this configuration, system address space A 190 represents the entire system address space (and system address space B 191 does not exist). Alternatively, system address space B 191 is used to show an implementation when all or a portion of near memory is assigned a portion of the system address space. In this embodiment, system address space B 191 represents the range of the system address space assigned to the near memory 151A and system address space A 190 represents the range of the system address space assigned to NVRAM 174.
In addition, when acting as a cache for far memory 150B, the near memory 144 may operate in various sub-modes under the control of the MSC controller 124. In each of these modes, the near memory address space (NMA) is transparent to software in the sense that the near memory does not form a directly-addressable portion of the system address space. These modes include but are not limited to the following:
(1) Write-Back Caching Mode: In this mode, all or portions of the near memory acting as a FM cache 150B are used as a cache for the NVRAM far memory (FM) 151B. While in write-back mode, every write operation is directed initially to the near memory as cache for FM 150B (assuming that the cache line to which the write is directed is present in the cache). A corresponding write operation is performed to update the NVRAM FM 151B only when the cache line within the near memory as cache for FM 150B is to be replaced by another cache line (in contrast to the write-through mode described below in which each write operation is immediately propagated to the NVRAM FM 151B).
(2) Near Memory Bypass Mode: In this mode all reads and writes bypass the NM acting as a FM cache 150B and go directly to the NVRAM FM 151B. Such a mode may be used, for example, when an application is not cache friendly or requires data to be committed to persistence at the granularity of a cache line. In one embodiment, the caching performed by the processor caches 150A and the NM acting as a FM cache 150B operate independently of one another. Consequently, data may be cached in the NM acting as a FM cache 150B which is not cached in the processor caches 150A (and which, in some cases, may not be permitted to be cached in the processor caches 150A) and vice versa. Thus, certain data which may be designated as "uncacheable" in the processor caches may be cached within the NM acting as a FM cache 150B.
(3) Near Memory Read-Cache Write Bypass Mode: This is a variation of the above mode where read caching of the persistent data from NVRAM FM 151B is allowed (i.e., the persistent data is cached in the near memory as cache for far memory 150B for read-only operations).
This is useful when most of the persistent data is "Read-Only" and the application usage is cache-friendly.
(4) Near Memory Read-Cache Write-Through Mode: This is a variation of the near memory read-cache write bypass mode, where in addition to read caching, write-hits are also cached. Every write to the near memory as cache for FM 150B causes a write to the FM 151B. Thus, due to the write-through nature of the cache, cache-line persistence is still guaranteed.
When acting in near memory direct access mode, all or portions of the near memory as system memory 151A are directly visible to software and form part of the SPA space. Such memory may be completely under software control. Such a scheme may create a non-uniform memory address (NUMA) memory domain for software where it gets higher performance from near memory 144 relative to NVRAM system memory 174. By way of example, and not limitation, such a usage may be employed for certain high performance computing (HPC) and graphics applications which require very fast access to certain data structures.
In an alternate embodiment, the near memory direct access mode is implemented by "pinning" certain cache lines in near memory (i.e., cache lines which have data that is also concurrently stored in NVRAM 142). Such pinning may be done effectively in larger, multi-way, set-associative caches.
Figure 2 also illustrates that a portion of the NVRAM 142 may be used as firmware memory. For example, the BIOS NVRAM 172 portion may be used to store BIOS images (instead of or in addition to storing the BIOS information in BIOS flash 170). The BIOS NVRAM portion 172 may be a portion of the SPA space and is directly addressable by software executed on the processor cores 101-104, whereas the BIOS flash 170 is addressable through the I/O subsystem 115. As another example, a trusted platform module (TPM) NVRAM 173 portion may be used to protect sensitive system information (e.g., encryption keys).
Thus, as indicated, the NVRAM 142 may be implemented to operate in a variety of different modes, including as far memory 151B (e.g., when near memory 144 is present/operating, whether the near memory is acting as a cache for the FM via an MSC control 124 or not (accessed directly after cache(s) 101A-105 and without MSC control 124)); just NVRAM system memory 174 (not as far memory because there is no near memory present/operating, and accessed without MSC control 124); NVRAM mass storage 152A; BIOS NVRAM 172; and TPM NVRAM 173. While different embodiments may specify the NVRAM modes in different ways, Figure 3 describes the use of a decode table 333.
Figure 3 illustrates an exemplary computer system 300 on which embodiments of the invention may be implemented. The computer system 300 includes a processor 310 and a memory/storage subsystem 380 with a NVRAM 142 used for system memory, mass storage, and optionally firmware memory. In one embodiment, the NVRAM 142 comprises the entire system memory and storage hierarchy used by computer system 300 for storing data, instructions, states, and other persistent and non-persistent information. As previously discussed, NVRAM 142 can be configured to implement the roles in a typical memory and storage hierarchy of system memory, mass storage, and firmware memory, TPM memory, and the like. In the embodiment of Figure 3, NVRAM 142 is partitioned into FM 151B, NVRAM mass storage 152A, BIOS NVRAM 172, and TPM NVRAM 173.
Storage hierarchies with different roles are also contemplated, and the application of NVRAM 142 is not limited to the roles described above.
By way of example, operation while the near memory as cache for FM 150B is in the write-back caching mode is described. In one embodiment, while the near memory as cache for FM 150B is in the write-back caching mode mentioned above, a read operation will first arrive at the MSC controller 124 which will perform a look-up to determine if the requested data is present in the near memory acting as a cache for FM 150B (e.g., utilizing a tag cache 342). If present, it will return the data to the requesting CPU, core 101-104 or I/O device through I/O subsystem 115. If the data is not present, the MSC controller 124 will send the request along with the system memory address to an NVRAM controller 332. The NVRAM controller 332 will use the decode table 333 to translate the system memory address to an NVRAM physical device address (PDA) and direct the read operation to this region of the far memory 151B. In one embodiment, the decode table 333 includes an address indirection table (AIT) component which the NVRAM controller 332 uses to translate between system memory addresses and NVRAM PDAs. In one embodiment, the AIT is updated as part of the wear leveling algorithm implemented to distribute memory access operations and thereby reduce wear on the NVRAM FM 151B. Alternatively, the AIT may be a separate table stored within the NVRAM controller 332.
Upon receiving the requested data from the NVRAM FM 151B, the NVRAM controller 332 will return the requested data to the MSC controller 124 which will store the data in the MSC near memory acting as an FM cache 150B and also send the data to the requesting processor core 101-104, or I/O device through I/O subsystem 115. Subsequent requests for this data may be serviced directly from the near memory acting as a FM cache 150B until it is replaced by some other NVRAM FM data.
As mentioned, in one embodiment, a memory write operation also first goes to the MSC controller 124 which writes it into the MSC near memory acting as a FM cache 150B. In write-back caching mode, the data may not be sent directly to the NVRAM FM 151B when a write operation is received. For example, the data may be sent to the NVRAM FM 151B only when the location in the MSC near memory acting as a FM cache 150B in which the data is stored must be re-used for storing data for a different system memory address. When this happens, the MSC controller 124 notices that the data is not current in NVRAM FM 151B and will thus retrieve it from the near memory acting as a FM cache 150B and send it to the NVRAM controller 332. The NVRAM controller 332 looks up the PDA for the system memory address and then writes the data to the NVRAM FM 151B.
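The read, fill, and eviction flow just described can be condensed into a simplified model. The sketch below assumes dictionary-backed stores and an AIT represented as a plain mapping; victim selection and the tag cache are simplified away, so this illustrates the flow rather than the MSC controller itself.

```python
# Simplified write-back flow; dict-backed stores, a plain-mapping AIT, and
# arbitrary victim selection stand in for the real MSC/NVRAM controllers.

class NVRAMController:
    def __init__(self, ait, device):
        self.ait = ait          # address indirection table: system addr -> PDA
        self.device = device    # far memory cells, keyed by PDA

    def read(self, addr):
        return self.device[self.ait[addr]]

    def write(self, addr, data):
        self.device[self.ait[addr]] = data

class WriteBackMSC:
    def __init__(self, capacity, nvram):
        self.capacity = capacity
        self.near = {}          # system addr -> (data, dirty)
        self.nvram = nvram

    def read(self, addr):
        if addr in self.near:                    # hit in near memory cache
            return self.near[addr][0]
        data = self.nvram.read(addr)             # miss: fetch via NVRAM controller
        self._fill(addr, data, dirty=False)      # fill for subsequent requests
        return data

    def write(self, addr, data):
        self._fill(addr, data, dirty=True)       # writes land in near memory first

    def _fill(self, addr, data, dirty):
        if addr not in self.near and len(self.near) >= self.capacity:
            victim, (vdata, vdirty) = self.near.popitem()  # arbitrary victim
            if vdirty:
                self.nvram.write(victim, vdata)  # write back only on eviction
        self.near[addr] = (data, dirty)
```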
In Figure 3, the NVRAM controller 332 is shown connected to the FM 151B, NVRAM mass storage 152A, and BIOS NVRAM 172 using three separate lines. This does not necessarily mean, however, that there are three separate physical buses or communication channels connecting the NVRAM controller 332 to these portions of the NVRAM 142. Rather, in some embodiments, a common memory bus or other type of bus (such as those described below with respect to Figures 4A-M) is used to communicatively couple the NVRAM controller 332 to the FM 151B, NVRAM mass storage 152A, and BIOS NVRAM 172. For example, in one embodiment, the three lines in Figure 3 represent a bus, such as a memory bus (e.g., a DDR3, DDR4, etc., bus), over which the NVRAM controller 332 implements a transactional protocol to communicate with the NVRAM 142. The NVRAM controller 332 may also communicate with the NVRAM 142 over a bus supporting a native transactional protocol such as a PCI express bus, desktop management interface (DMI) bus, or any other type of bus utilizing a transactional protocol and a small enough transaction payload size (e.g., cache line size such as 64 or 128 bytes).
In one embodiment, computer system 300 includes an integrated memory controller (IMC) 331 which performs the central memory access control for processor 310 and is coupled to: 1) a memory-side cache (MSC) controller 124 to control access to near memory (NM) acting as a far memory cache 150B; and 2) a NVRAM controller 332 to control access to NVRAM 142. Although illustrated as separate units in Figure 3, the MSC controller 124 and NVRAM controller 332 may logically form part of the IMC 331.
In the illustrated embodiment, the MSC controller 124 includes a set of range registers 336 which specify the mode of operation in use for the NM acting as a far memory cache 150B (e.g., write-back caching mode, near memory bypass mode, etc., described above). In the illustrated embodiment, DRAM 144 is used as the memory technology for the NM acting as cache for far memory 150B. In response to a memory access request, the MSC controller 124 may determine (depending on the mode of operation specified in the range registers 336) whether the request can be serviced from the NM acting as cache for FM 150B or whether the request must be sent to the NVRAM controller 332, which may then service the request from the far memory (FM) portion 151B of the NVRAM 142.
In an embodiment where NVRAM 142 is implemented with PCMS, NVRAM controller 332 is a PCMS controller that performs access with protocols consistent with the PCMS technology. As previously discussed, the PCMS memory is inherently capable of being accessed at the granularity of a byte. Nonetheless, the NVRAM controller 332 may access a PCMS-based far memory 151B at a lower level of granularity such as a cache line (e.g., a 64-byte or 128-byte cache line) or any other level of granularity consistent with the memory subsystem. The underlying principles of the invention are not limited to any particular level of granularity for accessing a PCMS-based far memory 151B. In general, however, when PCMS-based far memory 151B is used to form part of the system address space, the level of granularity will be higher than that traditionally used for other non-volatile storage technologies such as FLASH, which can only perform rewrite and erase operations at the level of a "block" (minimally 64 Kbyte in size for NOR FLASH and 16 Kbyte for NAND FLASH).
In the illustrated embodiment, NVRAM controller 332 can read configuration data to establish the previously described modes, sizes, etc. for the NVRAM 142 from decode table 333, or alternatively, can rely on the decoding results passed from IMC 331 and I/O subsystem 315. For example, at either manufacturing time or in the field, computer system 300 can program decode table 333 to mark different regions of NVRAM 142 as system memory, mass storage exposed via SATA interfaces, mass storage exposed via USB Bulk Only Transport (BOT) interfaces, encrypted storage that supports TPM storage, among others.
The means by which access is steered to different partitions of the NVRAM device 142 is via decode logic. For example, in one embodiment, the address range of each partition is defined in the decode table 333. In one embodiment, when IMC 331 receives an access request, the target address of the request is decoded to reveal whether the request is directed toward memory, NVRAM mass storage, or I/O. If it is a memory request, IMC 331 and/or the MSC controller 124 further determines from the target address whether the request is directed to NM as cache for FM 150B or to FM 151B. For FM 151B access, the request is forwarded to NVRAM controller 332. IMC 331 passes the request to the I/O subsystem 115 if this request is directed to I/O (e.g., non-storage and storage I/O devices). I/O subsystem 115 further decodes the address to determine whether the address points to NVRAM mass storage 152A, BIOS NVRAM 172, or other non-storage or storage I/O devices. If this address points to NVRAM mass storage 152A or BIOS NVRAM 172, I/O subsystem 115 forwards the request to NVRAM controller 332. If this address points to TPM NVRAM 173, I/O subsystem 115 passes the request to TPM 334 to perform secured access.
In one embodiment, each request forwarded to NVRAM controller 332 is accompanied by an attribute (also known as a "transaction type") to indicate the type of access. In one embodiment, NVRAM controller 332 may emulate the access protocol for the requested access type, such that the rest of the platform remains unaware of the multiple roles performed by NVRAM 142 in the memory and storage hierarchy. In alternative embodiments, NVRAM controller 332 may perform memory access to NVRAM 142 regardless of which transaction type it is. It is understood that the decode path can be different from what is described above. For example, IMC 331 may decode the target address of an access request and determine whether it is directed to NVRAM 142. If it is directed to NVRAM 142, IMC 331 generates an attribute according to decode table 333. Based on the attribute, IMC 331 then forwards the request to appropriate downstream logic (e.g., NVRAM controller 332 and I/O subsystem 315) to perform the requested data access. In yet another embodiment, NVRAM controller 332 may decode the target address if the corresponding attribute is not passed on from the upstream logic (e.g., IMC 331 and I/O subsystem 315). Other decode paths may also be implemented.
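The steering described above amounts to a cascade of address-range checks against the partitions defined in the decode table. The sketch below uses invented partition boundaries and string labels purely for illustration; in practice the ranges would come from decode table 333.

```python
# Hypothetical decode sketch; partition ranges are invented for illustration.

DECODE_TABLE = {
    "system_memory":    range(0x0000_0000, 0x4000_0000),
    "nvram_mass_store": range(0x4000_0000, 0x8000_0000),
    "bios_nvram":       range(0x8000_0000, 0x8010_0000),
    "tpm_nvram":        range(0x8010_0000, 0x8020_0000),
}

def steer(addr):
    """Return which unit should service an access to addr."""
    if addr in DECODE_TABLE["system_memory"]:
        return "IMC/MSC"           # further split: NM cache for FM, or FM 151B
    if addr in DECODE_TABLE["nvram_mass_store"] or addr in DECODE_TABLE["bios_nvram"]:
        return "NVRAM controller"  # forwarded via the I/O subsystem
    if addr in DECODE_TABLE["tpm_nvram"]:
        return "TPM"               # secured access
    return "I/O"                   # other storage / non-storage devices
```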
The presence of a new memory architecture such as described herein provides for a wealth of new possibilities. Although discussed at much greater length further below, some of these possibilities are quickly highlighted immediately below.
According to one possible implementation, NVRAM 142 acts as a total replacement or supplement for traditional DRAM technology in system memory. In one embodiment, NVRAM 142 represents the introduction of a second-level system memory (e.g., the system memory may be viewed as having a first level system memory comprising near memory as cache 150B (part of the DRAM device 340) and a second level system memory comprising far memory (FM) 151B (part of the NVRAM 142)).
According to some embodiments, NVRAM 142 acts as a total replacement or supplement for the flash/magnetic/optical mass storage 152B. As previously described, in some embodiments, even though the NVRAM mass storage 152A is capable of byte-level addressability, NVRAM controller 332 may still access NVRAM mass storage 152A in blocks of multiple bytes, depending on the implementation (e.g., 64 Kbytes, 128 Kbytes, etc.). The specific manner in which data is accessed from NVRAM mass storage 152A by NVRAM controller 332 may be transparent to software executed by the processor 310. For example, even though NVRAM mass storage 152A may be accessed differently from flash/magnetic/optical mass storage 152B, the operating system may still view NVRAM mass storage 152A as a standard mass storage device (e.g., a serial ATA hard drive or other standard form of mass storage device).
In an embodiment where NVRAM mass storage 152A acts as a total replacement for the flash/magnetic/optical mass storage 152B, it is not necessary to use storage drivers for block-addressable storage access. The removal of storage driver overhead from storage access can increase access speed and save power. In alternative embodiments where it is desired that NVRAM mass storage 152A appear to the OS and/or applications as block-accessible and indistinguishable from flash/magnetic/optical mass storage 152B, emulated storage drivers can be used to expose block-accessible interfaces (e.g., Universal Serial Bus (USB) Bulk-Only Transfer (BOT), 1.0; Serial Advanced Technology Attachment (SATA), 3.0; and the like) to the software for accessing NVRAM mass storage 152A.
In one embodiment, NVRAM 142 acts as a total replacement or supplement for firmware memory such as BIOS flash 362 and TPM flash 372 (illustrated with dotted lines in Figure 3 to indicate that they are optional). For example, the NVRAM 142 may include a BIOS NVRAM 172 portion to supplement or replace the BIOS flash 362 and may include a TPM NVRAM 173 portion to supplement or replace the TPM flash 372. Firmware memory can also store system persistent states used by a TPM 334 to protect sensitive system information (e.g., encryption keys). In one embodiment, the use of NVRAM 142 for firmware memory removes the need for third party flash parts to store code and data that are critical to the system operations.
Continuing then with a discussion of the system of Figure 3, in some embodiments, the architecture of computer system 300 may include multiple processors, although a single processor 310 is illustrated in Figure 3 for simplicity. Processor 310 may be any type of data processor including a general purpose or special purpose central processing unit (CPU), an application-specific integrated circuit (ASIC) or a digital signal processor (DSP). For example, processor 310 may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, or Itanium™ processor, all of which are available from Intel Corporation, of Santa Clara, Calif. Alternatively, processor 310 may be from another company, such as ARM Holdings, Ltd, of Sunnyvale, CA, MIPS Technologies of Sunnyvale, CA, etc. Processor 310 may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. Processor 310 may be implemented on one or more chips included within one or more packages. Processor 310 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.
In the embodiment shown in Figure 3, processor 310 has a system-on-a-chip (SOC) configuration.
In one embodiment, the processor 310 includes an integrated graphics unit 311 which includes logic for executing graphics commands such as 3D or 2D graphics commands. While the embodiments of the invention are not limited to any particular integrated graphics unit 311, in one embodiment, the graphics unit 311 is capable of executing industry standard graphics commands such as those specified by the Open GL and/or Direct X application programming interfaces (APIs) (e.g., OpenGL 4.1 and Direct X 11).
The processor 310 may also include one or more cores 101-104, although a single core is illustrated in Figure 3, again, for the sake of clarity. In many embodiments, the core(s) 101-104 include internal functional blocks such as one or more execution units, retirement units, a set of general purpose and specific registers, etc. If the core(s) are multi-threaded or hyper-threaded, then each hardware thread may be considered as a "logical" core as well. The cores 101-104 may be homogenous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores may be in-order while others are out-of-order. As another example, two or more of the cores may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.
The processor 310 may also include one or more caches, such as cache 313 which may be implemented as a SRAM and/or a DRAM. In many embodiments that are not shown, additional caches other than cache 313 are implemented so that multiple levels of cache exist between the execution units in the core(s) 101-104 and memory devices 150B and 151B. For example, the set of shared cache units may include an upper-level cache, such as a level 1 (L1) cache, mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or different combinations thereof. In different embodiments, cache 313 may be apportioned in different ways and may be one of many different sizes. For example, cache 313 may be an 8 megabyte (MB) cache, a 16 MB cache, etc. Additionally, in different embodiments the cache may be a direct mapped cache, a fully associative cache, a multi-way set-associative cache, or a cache with another type of mapping. In other embodiments that include multiple cores, cache 313 may include one large portion shared among all cores or may be divided into several separately functional slices (e.g., one slice for each core). Cache 313 may also include one portion shared among all cores and several other portions that are separate functional slices per core.
The processor 310 may also include a home agent 314 which includes those components coordinating and operating core(s) 101-104. The home agent unit 314 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the core(s) 101-104 and the integrated graphics unit 311. The display unit is for driving one or more externally connected displays.
As mentioned, in some embodiments, processor 310 includes an integrated memory controller (IMC) 331, near memory cache (MSC) controller 124, and NVRAM controller 332, all of which can be on the same chip as processor 310, or on a separate chip and/or package connected to processor 310.
DRAM device 144 may be on the same chip or a different chip as the IMC 331 and MSC controller 124; thus, one chip may have processor 310 and DRAM device 144; one chip may have the processor 310 and another the DRAM device 144 (these chips may be in the same or different packages); one chip may have the core(s) 101-104 and another the IMC 331, MSC controller 124 and DRAM 144 (these chips may be in the same or different packages); one chip may have the core(s) 101-104, another the IMC 331 and MSC controller 124, and another the DRAM 144 (these chips may be in the same or different packages); etc.
In some embodiments, processor 310 includes an I/O subsystem 115 coupled to IMC 331. I/O subsystem 115 enables communication between processor 310 and the following serial or parallel I/O devices: one or more networks 336 (such as a Local Area Network, Wide Area Network or the Internet), storage I/O devices (such as flash/magnetic/optical mass storage 152B, BIOS flash 362, TPM flash 372) and one or more non-storage I/O devices 337 (such as display, keyboard, speaker, and the like). I/O subsystem 115 may include a platform controller hub (PCH) (not shown) that further includes several I/O adapters 338 and other I/O circuitry to provide access to the storage and non-storage I/O devices and networks. To accomplish this, I/O subsystem 115 may have at least one integrated I/O adapter 338 for each I/O protocol utilized. I/O subsystem 115 can be on the same chip as processor 310, or on a separate chip and/or package connected to processor 310.
I/O adapters 338 translate a host communication protocol utilized within the processor 310 to a protocol compatible with particular I/O devices. For flash/magnetic/optical mass storage 152B, some of the protocols that I/O adapters 338 may translate include Peripheral Component Interconnect (PCI)-Express (PCI-E), 3.0; USB, 3.0; SATA, 3.0; Small Computer System Interface (SCSI), Ultra-640; and Institute of Electrical and Electronics Engineers (IEEE) 1394 "Firewire;" among others. For BIOS flash 362, some of the protocols that I/O adapters 338 may translate include Serial Peripheral Interface (SPI) and Microwire, among others. Additionally, there may be one or more wireless protocol I/O adapters. Examples of wireless protocols, among others, are used in personal area networks, such as IEEE 802.15 and Bluetooth, 4.0; wireless local area networks, such as IEEE 802.11-based wireless protocols; and cellular protocols.
In some embodiments, the I/O subsystem 115 is coupled to a TPM control 334 to control access to system persistent states, such as secure data, encryption keys, platform configuration information and the like. In one embodiment, these system persistent states are stored in a TPM NVRAM 173 and accessed via NVRAM controller 332.
In one embodiment, TPM 334 is a secure micro-controller with cryptographic functionalities. TPM 334 has a number of trust-related capabilities; e.g., a SEAL capability for ensuring that data protected by a TPM is only available for the same TPM. TPM 334 can protect data and keys (e.g., secrets) using its encryption capabilities. In one embodiment, TPM 334 has a unique and secret RSA key, which allows it to authenticate hardware devices and platforms. For example, TPM 334 can verify that a system seeking access to data stored in computer system 300 is the expected system. TPM 334 is also capable of reporting the integrity of the platform (e.g., computer system 300).
This allows an external resource (e.g., a server on a network) to determine the trustworthiness of the platform but does not prevent access to the platform by the user. In some embodiments, I/O subsystem 115 also includes a Management Engine (ME) 335, which is a microprocessor that allows a system administrator to monitor, maintain, update, upgrade, and repair computer system 300. In one embodiment, a system administrator can remotely configure computer system 300 by editing the contents of the decode table 333 through ME 335 via networks 336. For convenience of explanation, the remainder of the application sometimes refers to NVRAM 142 as a PCMS device. A PCMS device includes multi-layered (vertically stacked) PCM cell arrays that are non-volatile, have low power consumption, and are modifiable at the bit level. As such, the terms NVRAM device and PCMS device may be used interchangeably in the following discussion. However it should be realized, as discussed above, that different technologies besides PCMS may also be utilized for NVRAM 142. It should be understood that a computer system can utilize NVRAM 142 for system memory, mass storage, firmware memory and/or other memory and storage purposes even if the processor of that computer system does not have all of the above-described components of processor 310, or has more components than processor 310. In the particular embodiment shown in Figure 3, the MSC controller 124 and NVRAM controller 332 are located on the same die or package (referred to as the CPU package) as the processor 310. In other embodiments, the MSC controller 124 and/or NVRAM controller 332 may be located off-die or off-CPU package, coupled to the processor 310 or CPU package over a bus such as a memory bus (like a DDR bus, e.g., a DDR3 or DDR4 bus), a PCI Express bus, a desktop management interface (DMI) bus, or any other type of bus.

IMPLEMENTATION OF NEAR MEMORY AS CACHING LAYER FOR FAR MEMORY

As discussed above, in various configurations, near memory can be configured as a caching layer for far memory. Here, specific far memory storage devices (e.g., specific installed PCMS memory chips) may be reserved for specific (e.g., a specific range of) system memory addresses. As such, specific near memory storage devices (e.g., specific installed DRAM memory chips) may be designed to act as a caching layer for the specific far memory storage devices. Accordingly, these specific near memory storage devices should have the effect of reducing the access times of the most frequently accessed system memory addresses that the specific far memory storage devices are designed to provide storage for. According to a further approach, observed in Figure 4, the near memory devices are configured as a direct mapped cache for their far memory counterparts. As is well understood in the art, a direct mapped cache is designed such that each entry in the cache is reserved for a unique set of entries in the deeper storage. That is, in this case, the storage space of the far memory 401 can be viewed as being broken down into different storage sets 401_1, 401_2, ... 401_N, where each set is allocated an entry in the cache 402. As such, as observed in Figure 4, entry 402_1 is reserved for any of the system memory addresses associated with set 401_1; entry 402_2 is reserved for any of the system memory addresses associated with set 401_2, etc.
Generally, any of the structural "logic blocks" that appear in Figure 4, as well as any of Figures 7a, 8a and 11, may be largely, if not entirely, implemented with logic circuitry. Figure 4 also shows a portion of an exemplary system memory address that may be provided, for instance, from a CPU processing core for a read or write transaction to or from system memory. Essentially, a group of set bits 404 define which set the system memory address is associated with, and a group of tag bits 405 define which entry in the appropriate set (which may correspond to a cache line) the system memory address corresponds to. Lower ordered bits 403 identify a specific byte within a cache line. For example, according to one exemplary implementation, the cache line size is 64 bytes, cache 402 is implemented with approximately 1 gigabyte (GB) of DRAM storage and far memory storage 401 is implemented with approximately 16 gigabytes (GB) of PCMS storage. Address portions 405, 404 and 403 correspond to 34 bits of address space A[33:0]. Here, lower ordered bits 403 correspond to address bits A[5:0], set address bits 404 correspond to address bits A[29:6] and tag address bits 405 correspond to address bits A[33:30]. From this arrangement, note that the four tag bits 405 specify one of sixteen values, which corresponds to the 1:16 ratio of DRAM storage to PCMS storage. As such, each entry in cache 402 will map to (i.e., provide cacheable support across) sixteen different far memory 401 cache lines. This arrangement essentially defines the size of each set in far memory 401 (16 cache lines per set). The number of sets, which corresponds to the number of entries in cache 402, is defined by set bits 404. In this example, set bits 404 correspond to 24 bits of address space (address bits A[29:6]) which, in turn, corresponds to 16,777,216 cache entries/sets. With a 64 byte cache line, this corresponds to approximately 1 GB of storage within cache 402 (16,777,216 x 64 bytes = 1,073,741,824 bytes). If the size of the cache 402 were doubled to include 2 GB of DRAM, there would be eight cache lines per set (instead of sixteen) because the DRAM:PCMS ratio would double to 2:16 = 1:8. As such, the tag 405 would be expressed with three bits (A[33:31]) instead of four bits. The doubling of the DRAM space is further accounted for by providing an additional most significant bit to set bits 404 (i.e., address bits A[30:6] instead of A[29:6]), which essentially doubles the number of sets. The far memory storage 401 observed in Figure 4 may correspond to only a subset of the computer system's total far memory storage. For example, a complete system memory for a computing system may be realized by incorporating multiple instances of the near/far memory sub-system observed in Figure 4 (e.g., one instance for each unique subset of system memory addresses). Here, according to one approach, higher ordered bits 408 are used to indicate which specific instance amongst the multiple near/far memory subsystems applies for a given system memory access. For example, if each instance corresponds to a different memory channel that stems from a host side 409 (or, more generally, a host), higher ordered bits 408 would effectively specify the applicable memory channel. In an alternate approach, referred to as a "permuted" addressing approach, higher order bits 408 are not present. Rather, bits 405 represent the highest ordered bits and bits within lowest ordered bit space 403 are used to determine which memory channel is to be utilized for the address. A sketch of the exemplary address decomposition appears below.
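As an illustration, the exemplary A[33:0] breakdown above can be expressed directly in code. This is a minimal sketch under the stated example assumptions (64-byte lines, 24 set bits, 4 tag bits); the helper names are hypothetical and not part of the source.

```c
#include <stdint.h>
#include <stdio.h>

/* Exemplary field widths from the text: offset A[5:0], set A[29:6], tag A[33:30]. */
#define OFFSET_BITS 6u   /* 64-byte cache line               */
#define SET_BITS    24u  /* 16,777,216 sets / cache entries  */
#define TAG_BITS    4u   /* 1:16 DRAM-to-PCMS capacity ratio */

static inline uint32_t addr_offset(uint64_t a) { return (uint32_t)(a & ((1u << OFFSET_BITS) - 1)); }
static inline uint32_t addr_set(uint64_t a)    { return (uint32_t)((a >> OFFSET_BITS) & ((1u << SET_BITS) - 1)); }
static inline uint32_t addr_tag(uint64_t a)    { return (uint32_t)((a >> (OFFSET_BITS + SET_BITS)) & ((1u << TAG_BITS) - 1)); }

int main(void)
{
    uint64_t addr = 0x2ABCDEF40ull;  /* an arbitrary 34-bit system memory address */
    printf("tag=%u set=%u offset=%u\n", addr_tag(addr), addr_set(addr), addr_offset(addr));
    return 0;
}
```

Doubling the near memory cache to 2 GB, per the discussion above, corresponds to SET_BITS = 25 and TAG_BITS = 3.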
This permuted approach is thought to give better system performance by effectively introducing more randomization into the specific memory channels that are utilized over time. Address bits can be in any order. Figure 5 (write) and Figure 6 (read) depict possible operation schemes of the near/far memory subsystem of Figure 4. Referring to Figure 4 and Figure 5, for write operations, an integrated memory controller 431 receives a write transaction that includes the write address and the data to be written 501. The transaction may be stored in a buffer 415. Upon determining which near/far memory sub-system instance applies (e.g., from analysis of higher ordered bits 408), the hit/miss logic 414 of memory side control (MSC) logic 424 provides the set bits 404 to near memory cache interface logic 416 to cause the cached entry for the applicable set to be read 502 from the near memory cache 402. Here, near memory cache interface logic 416 is responsible for implementing a protocol, including the generation/reception of electrical signals, specific to the near memory (e.g., DRAM) on its memory channel. As observed in Figure 4, in an embodiment, each cache entry includes, along with its corresponding data 410, an embedded tag 411, a dirty bit 412 and ECC information 413. The embedded tag 411 identifies which cache line of the entry's applicable set in far memory 401 is cached in cache 402. The dirty bit 412 indicates whether the cached entry is the only valid copy of the cache line. ECC information 413, as is known in the art, is used to detect and possibly correct errors that occur when writing the entry to and/or reading it from cache 402. After the cached entry for the applicable set is read with the near memory cache interface logic 416, the MSC hit/miss logic 414 compares the embedded tag 411 of the just-read entry against the tag 405 of the address of the write transaction 503 (note that the entry read from the cache may be stored in a read buffer 417). If they match, the cached entry corresponds to the target of the transaction (cache hit). Accordingly, the hit/miss logic 414 causes the near memory cache interface logic to write over 504 the just-read cache entry in the cache 402 with the new data received for the transaction. The MSC control logic 424, in performing the write, keeps the value of the embedded tag 411 unchanged. The MSC control logic 424 also sets the dirty bit 412 to indicate that the newly written entry corresponds to the only valid version of the cache line, and calculates new ECC data for the cache line. The cache line read from the cache 402 into read buffer 417 is discarded. At this point, the process ends for a cache hit. If the embedded tag 411 of the cache line read from cache 402 does not match the tag 405 of the transaction address (cache miss), then, as with a cache hit, the hit/miss logic 414 causes the near memory cache interface logic 416 to write 505 the new data associated with the transaction into the cache 402 (with the set bits 404 specified as the address), effectively writing over the cache line that was just read from the cache 402. The embedded tag 411 is written as the tag bits 405 associated with the transaction. The dirty bit 412 is written to indicate that the cached entry is the only valid copy of this cache line. A compact sketch of this hit/miss handling, including the dirty-line write-back described next, follows.
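The following is a hypothetical sketch of the Figure 5 write handling just described (and of the dirty-line write-back to far memory described next); the type and function names are illustrative stand-ins, not from the source, and ECC handling is elided.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct cache_entry {
    uint8_t  data[64];  /* cached line (data 410) */
    uint32_t tag;       /* embedded tag (411)     */
    bool     dirty;     /* dirty bit (412)        */
    /* ECC information (413) elided for brevity   */
};

/* Illustrative interfaces to the near memory cache and to far memory. */
extern struct cache_entry nm_read_entry(uint32_t set);
extern void nm_write_entry(uint32_t set, const struct cache_entry *e);
extern void fm_write_line(uint32_t set, uint32_t tag, const uint8_t *line);

void msc_handle_write(uint32_t set, uint32_t tag, const uint8_t new_data[64])
{
    struct cache_entry old = nm_read_entry(set);   /* step 502: read cached entry   */
    bool hit = (old.tag == tag);                   /* step 503: compare tags        */

    struct cache_entry e;                          /* steps 504/505: new data is    */
    memcpy(e.data, new_data, sizeof e.data);       /* written over the old entry    */
    e.tag   = tag;                                 /* unchanged on a hit            */
    e.dirty = true;                                /* cache now holds the only copy */
    nm_write_entry(set, &e);

    if (!hit && old.dirty)                         /* steps 506/507: dirty evicted  */
        fm_write_line(set, old.tag, old.data);     /* line written back to far mem  */
}
```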
The memory controller's ECC logic 420 calculates ECC information 413 for the cache line received with the transaction, and the near memory cache interface logic 416 writes it into cache 402 along with the cache line. With respect to the cache line that was just read from the cache and is stored in the read buffer 417, the near memory hit/miss logic 414 checks its associated dirty bit 506, and, if the dirty bit indicates that the cache line in the read buffer 417 is the only valid version of the cache line (the dirty bit is "set"), the hit/miss logic 414 causes the NVRAM controller 432, through its far memory interface logic 418, to write 507 the cache line into its appropriate far memory location (using the set bits 404 of the transaction and the embedded tag bits 411 of the cache line that was just read as the address). Here, far memory interface logic 418 is responsible for implementing a protocol, including the generation/reception of electrical signals, specific to the far memory (e.g., PCMS) on its memory channel. If the dirty bit of the cache line in the read buffer 417 indicates that the cache line in the read buffer 417 is not the only valid version of the cache line, the cache line in the read buffer is discarded. Here, during moments when the interfaces 416, 418 to the near memory cache and far memory are not busy, the MSC control logic 424 may read cache line entries from the cache 402, and, for those cache line entries having the dirty bit set, the memory controller will rewrite the entry into far memory and "clear" its associated dirty bit to indicate that the cache line in cache 402 is no longer the only valid copy of the cache line. Moreover, it is pertinent to point out that the respective near memory cache and far memory interfaces 416, 418 can be completely isolated from one another, or have some overlap with respect to one another. Here, overlap corresponds to aspects of the respective near and far memory protocols and/or signaling that are the same (e.g., same clocking signals, same on-die termination signals, same addressing signals, etc.) and therefore may use the same circuitry for access to near memory cache and far memory. Non-overlapping regions correspond to aspects of the two protocols and/or signaling that are not the same and therefore have circuitry applicable to only one of near memory cache and far memory. The architecture described above can be used in implementations where the MSC control logic 424 is coupled to the near memory cache 402 over a different, isolated memory channel than the memory channel through which the NVRAM controller 432 and far memory 401 are coupled. Here, for any specific channel, one of interfaces 416, 418 is enabled while the other is disabled depending on whether near memory cache or far memory is coupled to the channel. Likewise, one of MSC control logic 424 and NVRAM controller 432 is enabled while the other is disabled. In an embodiment, a configuration register associated with the memory controller (not shown), which, for example, may be written to by BIOS, determines which configuration is to be enabled. The same architecture may also support another configuration in which near memory cache and far memory are coupled to the same channel 421. In this case, the integration of interfaces 416, 418 can be viewed as a single interface to the channel 421.
According to this configuration, both interfaces 416, 418 and both controllers 424, 432 are "enabled," but only one set (interface 416 and controller 424 for near memory; interface 418 and controller 432 for far memory) is able to use the channel at any particular instant of time. Here, the usage of the channel over time alternates between near memory signaling and far memory signaling. This configuration may be established with, for instance, a third setting in the aforementioned configuration register. It is to this setting that the discussion below mostly pertains. Here, by being able to use the same channel for both near memory accesses and far memory accesses, the near memory cache that is plugged into the channel can be used as the near memory cache for the far memory storage that is plugged into the same channel. Said another way, specific system memory addresses may be allocated to the one, single channel. The far memory devices that are plugged into the channel provide far memory storage for these specific system memory addresses, and the near memory storage that is plugged into the same channel provides the cache space for these far memory devices. As such, the above described transactions that invoke both near memory and far memory (e.g., because of a cache miss and/or a dirty bit that is set) can transpire over the same channel. According to one approach, the channel is designed to include mechanical receptacles/connectors into which individual planar board cards having integrated circuits disposed on them (e.g., DIMMs) can plug. Here, the cards have corresponding receptacles/connectors that mate with the channel's receptacles/connectors. One or more cards having only far memory storage can be plugged into a first set of connectors to effect the far memory storage for the channel. One or more cards having only near memory storage can be plugged into the same channel and act as near memory cache for the far memory cards. Here, where far memory storage is inherently denser than near memory storage but near memory storage is inherently faster than far memory storage, channels can be designed with a "speed vs. density" tradeoff in mind. That is, the more near memory cards plugged into the channel, the faster the channel will perform, but at the cost of less overall storage capacity supported by the channel. Contrariwise, the fewer near memory cards plugged into the channel, the slower the channel will perform, but with the added benefit of enhanced storage capacity supported by the channel. Extremes may include embodiments where only the faster memory storage technology (e.g., DRAM) is populated in the channel (in which case it may act as a cache for far memory on another channel, or not act as a cache but instead be allocated its own specific system memory address space), or only the slower memory storage technology (e.g., PCMS) is populated in the channel. In other embodiments, near memory and far memory are disposed on a same card, in which case the speed/density tradeoff is determined by the card even if a plurality of such cards are plugged into the same channel. Figure 6 depicts a read transaction. According to the methodology of Figure 6, the memory controller 431 receives a read transaction that includes the read address 611. The transaction may be stored in a buffer 415.
Upon determining which near/far memory sub-system (e.g., which memory channel) instance applies, the MSC controller's hit/miss logic 414 provides the set bits 404 to near memory cache interface logic 416 to cause the cached entry for the applicable set to be read 612 from the cache 402. After the cached entry for the applicable set is read with the cache interface logic 416, the hit/miss logic 414 compares the embedded tag 411 of the just-read entry against the tag 405 of the address of the read transaction 613. If they match, the cached entry corresponds to the target of the transaction (cache hit). Accordingly, the read process ends. If the embedded tag 411 of the cache line read from cache 402 does not match the tag 405 of the transaction address (cache miss), the hit/miss logic 414 causes the far memory interface logic 418 to read 614 the far memory storage at the address specified in the transaction (403, 404, 405). The cache line read from far memory is then written into the cache 615, and, if the dirty bit was set for the cache line that was read from near memory cache in step 612, the cache line that was read from near memory cache is written into far memory 616. Although the MSC controller 424 may perform ECC checking on the data that was read from far memory, as described in more detail below, according to various embodiments, ECC checking may be performed by logic circuitry 422 that resides local to the far memory device(s) (e.g., affixed to the same DIMM card that the PCMS device(s) are affixed to). This same logic circuitry 422 may also calculate the ECC information for a write transaction in the case of a cache miss where the dirty bit is "set". Moreover, in embodiments where the same memory channel 421 is used to communicate near memory signaling and far memory signaling, logic circuitry 422 can be utilized to "speed up" the core write and read processes described above. Some of these speed-ups are discussed immediately below.

READ AND WRITE TRANSACTIONS WITH NEAR MEMORY AND FAR MEMORY COUPLED TO A SAME MEMORY CHANNEL

A. Near Memory "In Front Of" Far Memory Control Logic

Figure 7a shows a "near memory in front of" approach while Figure 8a shows a "near memory behind" approach. The "near memory behind" approach will be discussed in more detail further below. For each of the models below, as well as their ensuing discussions, the term "memory controller" or "host" or "host side" is used to refer (mainly) to circuitry and/or acts performed by an MSC controller or an NVRAM controller. Which circuitry applies in a particular situation is straightforward to understand in that, when near memory cache is being accessed on the channel, the MSC controller is involved, whereas, when far memory is being accessed on the channel, the NVRAM controller is involved. Moreover, the discussions below also refer to "far memory control logic" or a "far memory controller" that is remote from the host side and is located proximate to far memory "out on the channel". Here, the far memory control logic can be viewed as a component of the NVRAM controller, with another component of the NVRAM controller resident on the host to perform appropriate far memory accesses (consistent with the embodiments below) from the host side. Referring to Figure 7a, note that the near memory storage devices 702_1, 702_2 ... 702_N (such as a plurality of DRAM chips) are coupled to a channel 721 independently of the coupling of far memory logic circuitry 722 (and its associated far memory storage devices 701_1, 701_2, ...
701_M (such as a plurality of PCMS chips)) to the same channel 721. Said another way, a near memory platform 730 and a far memory platform 732 are separately connected to the same channel 721 independently of one another. This approach can be realized, for example, with different DIMMs having different respective memory storage technologies plugged into a same memory channel (e.g., near memory platform 730 corresponds to a DRAM DIMM and far memory platform 732 corresponds to a PCMS DIMM). This approach can also be realized, for example, with a same DIMM that incorporates different respective memory storage technologies (e.g., near memory platform 730 corresponds to one side of a DIMM and far memory platform 732 corresponds to the other side of the DIMM). Figure 7b shows a read transaction that includes a cache miss where the far memory control logic 722 automatically detects the cache miss and automatically reads far memory in response. Referring to Figures 7a and 7b, the host side MSC control logic 424a receives a read request 761 and reads the cache line entry 762 for the applicable set from the cache 702. As part of the transaction on the channel 721 that accesses the cache 702, the host side MSC control logic 424a "sneaks" the tag bits 705 of the original read request onto the channel 721. In a further embodiment, the host side MSC control logic 424a can also sneak information 780 indicating that the original transaction request received by the memory controller is a read request (rather than a write request). According to one approach, explained in more detail below, the tag bits 705 and read/write information 780 are "snuck" on unused row or column addresses of the near memory address bus. In a further embodiment, more column address bits are used for this purpose than row address bits. According to an even further approach, the sneaked information 705, 780 is provided over a command bus component of channel 721 which is used for communicating addressing information to the near memory storage devices (and potentially the far memory devices as well). Because remote control logic circuitry 722 is connected to the channel 721, it can "snarf": 1) the tag bits 705 from the original request (and the indication 780 of a read transaction) when they are snuck on the channel 721; 2) the read address applied to the near memory cache 702; and 3) the cache line and its associated embedded tag bits 711, dirty bit 712 and ECC information 713 when read from the near memory cache 702. Here, the snarfing 763 is understood to include storing any/all of these items of information locally (e.g., in register space 750 embedded on logic circuitry 722). As such, far memory control logic circuitry 722, which also includes its own hit/miss logic 723, can determine 764 whether there is a cache hit or cache miss concurrently with the memory controller's hit/miss logic 714. In the case of a cache hit, the far memory control logic circuitry 722 takes no further action, and the memory controller 731 performs the ECC calculation on the data read from cache and compares it with the embedded ECC information 713 to determine whether or not the cache read data is valid. However, in the case of a cache miss, and with knowledge that the overall transaction is a read transaction (e.g., from snuck information 780), the logic circuitry 722 will recognize that a read of its constituent far memory storage 701 will be needed to ultimately service the original read request. One illustrative encoding of the snuck information appears below.
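As one illustration of the side-band "sneaking" described above, the tag bits and the read/write indication could be packed into otherwise unused address bits. This is a sketch under assumed bit positions; the widths, positions, and helper names are hypothetical, not from the source.

```c
#include <stdint.h>

/* Assumed layout: tag bits 705 in the four lowest unused column address
 * bits, read/write indication 780 in the next bit. Positions and widths
 * are illustrative only. */
#define SNEAK_TAG_MASK 0xFu
#define SNEAK_RW_BIT   (1u << 4)   /* 1 = overall transaction is a read */

static inline uint32_t sneak_encode(uint32_t tag705, int is_read780)
{
    return (tag705 & SNEAK_TAG_MASK) | (is_read780 ? SNEAK_RW_BIT : 0u);
}

static inline uint32_t sneak_tag(uint32_t unused_addr_bits)
{
    return unused_addr_bits & SNEAK_TAG_MASK;
}

static inline int sneak_is_read(uint32_t unused_addr_bits)
{
    return (unused_addr_bits & SNEAK_RW_BIT) != 0;
}
```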
Given this recognition, according to one embodiment, logic circuitry 722 can automatically read 765 its associated far memory resources 732 to retrieve the desired read information, perform an ECC calculation on the cache line read from far memory (which also has embedded ECC information) and, if there is no corruption in the data, provide the desired far memory read information. In order to perform this kind of "automatic read", as alluded to just above, logic circuitry 722 should be informed by the memory controller 731 in some manner that the overall transaction is a read operation as opposed to a write operation (if the above described transaction were a write transaction, logic circuitry 722 would not need to perform a read of far memory). According to one embodiment, as already mentioned above, read/write information 780 that indicates whether a write transaction or a read transaction is at play is "snuck" to logic circuitry 722 (e.g., along with the tag information 705 of the original transaction request). Concurrently with the far memory control logic 722 automatically reading far memory 732, the memory controller 731 can schedule and issue a read request 786 on the channel 721 to the far memory control logic 722. As described in more detail below, in an embodiment, the memory controller 731 is configured to communicate two different protocols over channel 721: i) a first protocol that is specific to the near memory devices 730 (e.g., an industry standard DDR DRAM protocol); and ii) a second protocol that is specific to the far memory devices 732 (e.g., a protocol that is specific to PCMS devices). Here, the near memory cache read request 762 is implemented with the first protocol and, by contrast, the read request to far memory 786 is implemented with the second protocol. In a further embodiment, as described in more detail further below, because the time needed by the far memory devices 732 to respond to the read request 786 cannot be predicted with certainty, an identifier 790 of the overall read transaction ("transaction id") is sent to the far memory control logic 722 along with the far memory read request 786 sent by the memory controller. When the data is finally read from far memory 732, it is eventually sent 787 to the memory controller 731. In an embodiment, the transaction identifier 790 is returned to the memory controller 731 as part of the transaction on the channel 721 that sends the read data to the memory controller 731. Here, the inclusion of the transaction identifier 790 serves to notify the memory controller 731 of the transaction to which the read data pertains. This may be especially important where, as described in more detail below, the far memory control logic 722 maintains a buffer to store multiple read requests from the memory controller 731 and the uncertainty of the read response time of the far memory leads to "out-of-order" (OOO) read responses from far memory (a subsequent read request may be responded to before a preceding read request). A sketch of such transaction-id matching appears below. In a further embodiment, a distinctive feature of the two protocols used on the channel 721 is that the near memory protocol treats devices 730 as slave devices that do not formally request use of the channel 721 (because their timing is well understood and under the control of the memory controller). By contrast, the far memory protocol permits far memory control logic 722 to issue a request to the memory controller 731 for the sending of read data to the memory controller 731.
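As an illustration of this transaction-id mechanism, a host-side completion matcher might look like the following minimal sketch; the table depth, structure, and function names are assumptions, not from the source.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_PENDING 16   /* assumed depth of the far memory request buffer */

struct pending_read {
    uint16_t txn_id;     /* transaction id 790 sent with read request 786 */
    uint64_t sys_addr;   /* original system memory address                */
    int      valid;
};

static struct pending_read pending[MAX_PENDING];

/* Called when read data arrives on the channel carrying a transaction id.
 * Completions may arrive in any order relative to their requests, so the
 * id, not the arrival order, identifies the originating transaction. */
struct pending_read *match_completion(uint16_t txn_id)
{
    for (size_t i = 0; i < MAX_PENDING; i++) {
        if (pending[i].valid && pending[i].txn_id == txn_id) {
            pending[i].valid = 0;   /* retire the entry */
            return &pending[i];
        }
    }
    return NULL;  /* no matching request: protocol error */
}
```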
A further point of distinction is that the tag 705 and r/w information 780 "snuck" onto the channel during the near memory cache read is "snuck" in the sense that this information is being transported to the far memory control logic circuitry and is pertinent to a potential far memory access even though, technically, the near memory protocol is in play. Alternatively to the "automatic" read discussed above with respect to Figure 7b, the far memory control logic circuitry 722 can be designed to refrain from automatically reading the needed data and instead wait for a read request and corresponding address from the memory controller in the case of a cache miss. In this case, logic circuitry 722 need not snarf the address when the near memory cache is read, nor does any information concerning whether the overall transaction is a read transaction or a write transaction need to be snuck to logic circuitry 722. The sending of a transaction ID 790 with the read request to the far memory control logic 722 may still be needed if far memory control logic 722 can service read requests out of order. Regardless of whether or not the logic circuitry 722 automatically performs a needed far memory read on a cache miss, as observed in Figure 7c, in the case of a cache miss detected by the far memory control logic circuitry 722, the hit/miss logic circuitry 723 of far memory control logic circuitry 722 can be designed to check 766 whether the dirty bit 712 is set in the snarfed cache line. If so, the snarfed cache line will need to be written to far memory 732. As such, logic circuitry 722 can then automatically store 767 the snarfed cache line into its constituent far memory storage resources 732 without a formal request from the memory controller (including the recalculation of the ECC information before it is stored, to ensure the data is not corrupted). Here, depending on implementation, for the write operation to the far memory platform, logic circuitry 722 can construct the appropriate write address by snarfing the earlier read address of the near memory cache read as described above and combining it with the embedded tag information of the cache line that was read from the near memory cache. Alternatively, if logic circuitry 722 does not snarf the cache read address, it can construct the appropriate write address by combining the tag information embedded in the snarfed cache line with a read address provided by the memory controller when it requests the read of the correct information from far memory. Specifically, logic circuitry 722 can combine the set and lower ordered bit portions 404, 403 of the read request with the embedded tag 711 of the snarfed cache line to fully construct the correct address (a sketch appears below). Automatically performing the write to the far memory platform 732 as described above not only eliminates the need for the memory controller 731 to request the write to the far memory platform, but also completely frees the channel 721 of any activity related to the write to the far memory platform.
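Using the exemplary A[33:0] layout introduced with Figure 4, this write-back address construction might be sketched as follows; the helper name is hypothetical.

```c
#include <stdint.h>

/* Combine the set bits of the transaction (A[29:6]) with the tag embedded
 * in the evicted cache line (A[33:30]) to form the far memory write-back
 * address; the byte offset A[5:0] is zero for a full-line write. */
static inline uint64_t fm_writeback_addr(uint32_t set_bits404, uint32_t embedded_tag711)
{
    return ((uint64_t)(embedded_tag711 & 0xFu) << 30) |
           ((uint64_t)(set_bits404 & 0xFFFFFFu) << 6);
}
```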
Freeing the channel of this write traffic, as noted above, may correspond to a noticeable improvement in the speed of the channel. It is pertinent to point out that the pair of speed-ups described just above, the automatic read of far memory (Figure 7b) and the automatic write to far memory (Figure 7c), can be implemented in any combination (both, or just one) depending on designer choice. As a matter of contrast, a basic read transaction without any speed-up offered by the presence of the far memory controller 722 nominally includes six atomic operations for a read transaction that suffers a cache miss when the dirty bit is set. These are: cache read request, cache read response, far memory read request, far memory read response, near memory write request (cache update) and far memory write request (load the cache line read from cache into far memory because the dirty bit is set). By contrast, with both of the speed-ups of Figure 7b (automatic read of far memory) and Figure 7c (automatic write to far memory) being implemented, the overall transaction can be completed with only four atomic operations on the channel. That is, the far memory read request and far memory write request can be eliminated. The above discussion concerned read transaction processes when the near memory is "in front of" the far memory control logic. In the case of a write transaction process, referring to Figure 7d, in response to the receipt of a write transaction 751, the memory controller initiates a near memory cache read and sneaks tag information 705 and information 780 indicating that the overall transaction is a write and not a read, as described above 752. After the read of near memory is complete, the memory controller 731 writes the new data over the old data in cache 753. In an embodiment, the memory controller checks to see if there is a cache hit 754 and/or if the dirty bit is set 755 to understand what action the far memory control logic circuitry will take (e.g., for channel scheduling), but otherwise takes no further action on the channel. Far memory control logic circuitry 722 snarfs the address used to access the cache, the sneaked information 705, 780 and the cache line read from cache with its associated information 756, and detects the cache miss on its own accord 757 as described above. If there is a cache hit, the far memory control logic takes no further action. If there is a cache miss, depending on design implementation, similar to the processes described above, logic circuitry 722 can also detect 758 whether the dirty bit is set and write 759 the snarfed cache line into far memory automatically (without a request from the memory controller). In an alternate approach, the memory controller 731, after detecting a cache miss and that the dirty bit is set 754, 755, sends a request to the far memory control logic 722 (including the write address) to write the cache line read from the cache into far memory. The memory controller can also send the cache line read from cache to the far memory control logic over the channel 721.

B. Near Memory "Behind" Far Memory Control Logic

Referring to Figure 8a, which depicts a "near memory behind" architecture, note that the near memory storage devices 802_1, 802_2 ... 802_N (such as a plurality of DRAM chips) are coupled to at least a portion of the channel 821 through the far memory control logic circuitry 822.
Here, whereas the far memory control logic for a "near memory in front of" approach includes distinct interfaces for the channel and far memory, the far memory control logic for the "near memory behind" approach includes distinct interfaces for the channel, far memory and near memory. According to one embodiment, the channel 821 can be viewed as having three principal sub-components: 1) a command bus 841 (over which read and write requests and their corresponding addresses are sent); 2) a data bus 842 (over which read and write data is sent); and 3) control signals 843 (e.g., select signal(s), clock enable signal(s), on-die termination signal(s)). As depicted in the particular approach of Figure 8a, the data bus 890 of the near memory storage platform 830 may be independently coupled 891 to the data bus 842 but is coupled to the command bus 841 and control signal 843 components through logic circuitry 822. The far memory storage platform 831 is coupled to all three sub-components 841, 842, 843 through logic circuitry 822. In an alternate embodiment, the data bus 890 of the near memory storage platform 830, like that of the far memory storage platform, is coupled to the channel's data bus component 842 through logic circuitry 822. The "near memory behind" architecture may be realized, for example, with the logic circuitry 822, near memory storage devices 830 and far memory storage devices 831 all being implemented on a same physical platform (e.g., a same DIMM card that plugs into the channel, where multiple such DIMM cards can be plugged into the channel). Figure 8b shows a read process for a "near memory behind" architecture in the case of a cache miss. Referring to Figures 8a and 8b, if the memory controller 831 receives a read request 861, it sends, over command bus 841, a read request 862 (e.g., in packetized form) to far memory control logic circuitry 822 containing the set bits 804 and lower ordered bits 803 of the original request's address. Moreover, as part of the read request sequence, the tag bits 805 of the original read request (e.g., from the CPU) are "snuck" 862 onto the channel 821. According to one approach, explained in more detail below, the tag bits 805 are "snuck" on the command bus component 841 of the channel 821 (which is used for communicating addressing information to the far memory control logic 822 for both near and far memory accesses). Here, unlike the near memory "in front of" approach, for reasons explained further below, additional information that indicates whether the original transaction is a read or a write need not be snuck on the channel. Here, the far memory control logic 822 can "key" off of the read request from the memory controller to determine that the overall transaction is a read transaction and not a write transaction. Logic circuitry 822, in response to the received read request, presents the associated address on the local near memory address bus 870 to effect a cache read operation to the near memory platform.
The appropriate cache line from the near memory platform 830 is subsequently presented 804 on the data bus 842, either directly by the near memory platform 830, in which case the memory controller performs the ECC calculation, or through the far memory control logic 822, in which case both logic 822 and memory controller 831 may perform ECC calculations. Because far memory control logic circuitry 822 is connected to the channel 821, it can "snarf" or otherwise locally store 863 (e.g., in its own register space 850) any of: 1) the tag bits 805 that were snuck on the channel 821; 2) the address information used to address the near memory cache 830; and 3) the cache line from near memory 830 and its associated embedded tag bits 811, dirty bit 812 and ECC information 813 when provided by the near memory platform 830. In response, the hit/miss logic 823 of logic circuitry 822 can determine whether there is a cache hit or cache miss concurrently with the memory controller's hit/miss logic 814. In the case of a cache hit, the information read from near memory is provided to the memory controller 831 and logic circuitry 822 takes no further action. In an embodiment where the near memory cache platform is connected to the data bus without going through logic circuitry 822, the memory controller 831 performs the ECC calculation on the cache line read from near memory cache. In another embodiment where the near memory cache platform connects to the data bus through logic circuitry 822, the ECC calculation on the cache line read from near memory cache is performed by both logic circuitry 822 and the memory controller 831. In the case of a cache miss detected by the logic circuitry 822, the hit/miss logic circuitry 823 will recognize that a read of the far memory storage platform 831 will be needed to ultimately service the original read request. As such, according to one embodiment, the logic circuitry 822 can automatically read from the far memory platform 831 to retrieve the desired read information 864 and perform an ECC calculation. Concurrently with the far memory control logic 822 automatically reading far memory 831, and recalling that the memory controller 831 has already been provided with the cache line read from near memory, the memory controller 831 can likewise detect the cache miss and, in response, schedule and issue a read request 886 on the channel 821 to the far memory control logic 822. As alluded to above and as described in more detail below, in an embodiment, the memory controller 831 is able to communicate two different protocols over channel 821: i) a first protocol that is specific to the near memory devices 830 (e.g., an industry standard DDR DRAM protocol); and ii) a second protocol that is specific to the far memory devices 831 (e.g., a protocol that is specific to PCMS devices). Here, the near memory cache read 862 is implemented with the first protocol over channel 821 and, by contrast, the read request to far memory 886 is implemented with the second protocol. In a further embodiment, as alluded to above and as described in more detail further below, because the time needed by the far memory devices 831 to respond to the read request 886 cannot be predicted with certainty, an identifier 890 of the overall read transaction ("transaction id") is sent to the far memory control logic 822 along with the far memory read request 886 sent by the memory controller. When the data is finally read from far memory 831, it is eventually sent 887 to the memory controller 831.
In an embodiment, the transaction identifier 890 is returned to the memory controller 831 as part of the transaction on the channel 821 that sends the read data to the memory controller 831. Here, the inclusion of the transaction identifier 890 serves to notify the memory controller 831 of the transaction to which the read data pertains. This may be especially important where, as described in more detail below, the far memory control logic 822 maintains a buffer to store multiple read requests from the memory controller 831 and the uncertainty of the read response time of the far memory leads to "out-of-order" (OOO) read responses from far memory (a subsequent read request may be responded to before a preceding read request). In a further embodiment, where two different protocols are used on the channel, a distinctive feature of the two protocols is that the near memory protocol treats devices 830 as slave devices that do not formally request use of the channel 821 (because the timing of the near memory devices is well understood and under the control of the memory controller). By contrast, the far memory protocol permits far memory control logic 822 to issue a request to the memory controller 831 for the sending of read data to the memory controller 831. As an additional point of distinction, the tag 805 information that is "snuck" onto the channel during the near memory cache read is "snuck" in the sense that this information is being transported to the far memory control logic circuitry 822 for a potential far memory read even though, technically, the near memory protocol is in play. Alternatively to automatically performing the far memory read, the far memory control logic circuitry 822 can be designed to refrain from automatically reading the needed data in far memory and instead wait for a read request and corresponding address from the memory controller 831. In this case, logic circuitry 822 does not need to keep the address when the near memory cache is read, nor does it need any sneaked information 880 from the memory controller 831 concerning whether the overall transaction is a read transaction or a write transaction. Regardless of whether or not the logic circuitry 822 automatically performs a far memory read in the case of a cache miss, as observed in the process of Figure 8c, the hit/miss logic circuitry 823 of logic circuitry 822 can be designed to write the cache line that was read from near memory cache into far memory when a cache miss occurs and the dirty bit is set. In this case, at a high level, the process is substantially the same as that observed in Figure 7c, except that the write to near memory 830 is at least partially hidden 867 from the channel 821 in the sense that the near memory platform 830 is not addressed over the channel. If the data bus 895 of the near memory platform 830 is not directly coupled to the data bus 842 of the channel, but is instead coupled to the data bus 842 of the channel through the far memory control logic 822, the entire far memory write can be hidden from the channel 821. Automatically performing the write to the far memory platform 831 in this manner not only eliminates the need for the memory controller 831 to request the write, but also completely frees the channel 821 of any activity related to the write to the far memory platform 831.
This should correspond to a noticeable improvement in the speed of the channel. Additional efficiency may be realized if the far memory control logic circuitry 822 is further designed to update the near memory cache platform 830 with the results of a far memory read operation, in the case of a cache miss, in order to effect the cache update step. Here, as the results of the far memory read operation 869 correspond to the most recent access to the applicable set, these results also need to be written into the cache entry for the set in order to complete the transaction. By updating the cache with the far memory read response, a separate write step over the channel 821 to near memory to update the cache is avoided. Here, some mechanism (e.g., additional protocol steps) may need to be implemented into the channel so that the far memory control logic can access the near memory (e.g., if the usage of the near memory is supposed to be scheduled under the control of the memory controller 831). It is pertinent to point out that the speed-ups described just above, the automatic read of far memory (Figure 8b), the automatic write to far memory (Figure 8c), and the cache update concurrent with the read response, may be implemented in any combination (all, any two, or just one) depending on designer choice. In the case of a write transaction process, according to one approach where the near memory data bus 880 is directly coupled to the channel data bus 842, the process described above with respect to Figure 7d can be performed. Another approach, presented in Figure 8d, may be used where the near memory data bus 880 is coupled to the channel data bus 842 through the far memory control logic 822. According to the process of Figure 8d, in response to the receipt of a write transaction 851, the memory controller sends a write command 852 to the far memory control logic 822 (including the corresponding address and data) and sneaks the write transaction's tag information over the channel. In response, the far memory control logic 822 performs a read 853 of the near memory cache platform 830 and determines from the embedded tag information 811 and the sneaked tag information 805 whether a cache miss or cache hit has occurred 854. In the case of a cache hit, or a cache miss when the dirty bit is not set 855, the new write data received with the write command is written 856 to near memory cache 830. In the case of a cache miss when the dirty bit is set, the far memory control logic circuitry writes the new write data received with the write command into near memory cache and writes the evicted cache line just read from near memory 830 into far memory 831. Recall from the discussion of the read transaction of Figure 8b that information indicative of whether the overall transaction is a read or a write does not need to be snuck to the far memory control logic in a "near memory behind" approach. This can be seen from Figures 8b and 8d, which show the memory controller initially communicating a near memory read request in the case of an overall read transaction (Figure 8b), or initially communicating a near memory write command in the case of an overall write transaction (Figure 8d). A sketch of the Figure 8d handling appears below.
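A hypothetical sketch of the Figure 8d handling, as performed by the far memory control logic, follows; the type and function names are illustrative stand-ins (not from the source) and ECC handling is elided.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct nm_entry {
    uint8_t  data[64];
    uint32_t tag;    /* embedded tag 811 */
    bool     dirty;  /* dirty bit 812    */
};

/* Illustrative local interfaces behind the far memory control logic. */
extern struct nm_entry nm_read(uint32_t set);
extern void nm_write(uint32_t set, const struct nm_entry *e);
extern void fm_write(uint32_t set, uint32_t tag, const uint8_t *line);

void fmc_handle_write(uint32_t set, uint32_t sneaked_tag, const uint8_t new_data[64])
{
    struct nm_entry old = nm_read(set);        /* step 853: read near memory cache */
    bool hit = (old.tag == sneaked_tag);       /* step 854: hit/miss determination */

    struct nm_entry e;                         /* steps 855/856: new data always   */
    memcpy(e.data, new_data, sizeof e.data);   /* lands in near memory cache       */
    e.tag = sneaked_tag;
    e.dirty = true;
    nm_write(set, &e);

    if (!hit && old.dirty)                     /* miss with dirty bit set:         */
        fm_write(set, old.tag, old.data);      /* evicted line goes to far memory  */
}
```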
ATOMIC CHANNEL TRANSACTIONS AND PHYSICAL CHANNEL INTEGRATION

As observed in Figures 7a and 8a, communications between the memory controller and near memory devices may be carried over the same channel as communications between the memory controller and far memory devices. Further, as mentioned above, near memory and far memory may be accessed by the memory controller with different protocols (a first protocol for accessing near memory and a second protocol for accessing far memory). As such, two different protocols may be implemented, for example, on a same memory channel. Various aspects of these protocols are discussed immediately below.

a. Near Memory Cache Access (first protocol)

Two basic approaches for accessing near memory were presented in the sections above: a first where the near memory storage devices reside "in front of" the far memory control logic, and a second where the near memory storage devices reside "behind" the far memory control logic.

i. near memory in front

At least in the case where the near memory devices are located "in front of" the far memory control logic, it may be beneficial to preserve or otherwise use an existing/known protocol for communicating with system memory. For example, in the case where near memory cache is implemented with DRAM devices affixed to a DIMM card, it may be beneficial to use a memory access protocol that is well established/accepted for communicating with DRAM devices affixed to a DIMM card (e.g., either a presently well established/accepted protocol, or a future well established/accepted protocol). By using a well established/accepted protocol for communicating with DRAM, economies of scale may be achieved in the sense that DIMM cards with DRAM devices that were not necessarily designed for integration into a computing system having near and far memory levels may nevertheless be "plugged into" the memory channel of such a system and utilized as near memory. Moreover, even in cases where the near memory is located "behind" the far memory control logic, when attempting to access near memory, the memory controller may nevertheless be designed to communicate with the far memory control logic using a well established/known DRAM memory access protocol so that the system as a whole may offer a number of different system configuration options to a user of the system. For example, a user can choose between using: 1) "DRAM only" DIMM cards for near memory; or 2) DIMM cards having both DRAM and PCMS devices integrated thereon (with the DRAM acting as the near memory for the PCMS devices located on the same DIMM). Implementation of a well established/known DRAM protocol also permits a third user option in which a two level memory scheme (near memory and far memory) is not adopted (e.g., no PCMS devices are used to implement system memory) and, instead, only DRAM DIMMs are installed to effect traditional "DRAM only" system memory. In this case, the memory controller's configuration would be set so that it behaves as a traditional memory controller (that does not utilize any of the features described herein to effect near and far memory levels). As such, logic circuitry that causes the memory controller to behave like a standard memory controller would be enabled, whereas logic circuitry that causes the memory controller to behave in a manner that contemplates near and far memory levels would be disabled. A fourth user option may be the reverse, where system memory is implemented only in an alternative system memory technology (e.g., only PCMS DIMM cards are plugged in). These four options are summarized in the sketch below.
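Summarizing, the four user options described above might be captured by a configuration selector like the following; the enumerator names are illustrative, not from the source, and the actual selection mechanism described in the text is a configuration register written, for example, by BIOS.

```c
/* Hypothetical configuration selector mirroring the four user options
 * described above. */
enum channel_mode {
    MODE_TWO_LEVEL_SEPARATE_DIMMS, /* DRAM-only DIMMs act as near memory cache   */
    MODE_TWO_LEVEL_COMBINED_DIMM,  /* DRAM and PCMS integrated on the same DIMM  */
    MODE_DRAM_ONLY,                /* traditional DRAM-only system memory        */
    MODE_FAR_ONLY                  /* e.g., only PCMS DIMM cards are installed   */
};
```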
In this fourth case, logic may be enabled that causes the memory controller to execute basic read and write transactions only with a different protocol that is consistent with the alternative system memory technology (e.g., PCMS-specific signaling). Figure 9a shows an exemplary depiction of a memory channel 921 that is adapted to support a well established/known DRAM access protocol (such as Double Data Rate ("DDR"), which effects read and write accesses on rising and falling edges of a same signal). The channel 921 can be viewed as having three principal sub-components: 1) a command bus 941 (over which read and write requests and their corresponding addresses are sent); 2) a data bus 942 (over which read and write data is sent); and 3) control signals 943 (select signal(s) 943_1, clock enable signal(s) 943_2, on-die termination signal(s) 943_3). In an embodiment, as described above, the memory controller 909 presents traditional DDR signals on the channel when it is accessing near memory cache, regardless of whether it is "talking to" actual DRAM devices on one or more DIMM cards and/or one or more far memory control logic chips on one or more same or additional DIMM cards. According to one embodiment of the operation of channel 921, for near memory accesses: 1) the command bus 941 carries packets in the direction from the memory controller 909 toward the near memory storage devices, where each packet includes a read or write request and an associated address; and 2) the data bus 942 carries write data to targeted near memory devices and carries read data from targeted near memory devices. As observed in Figure 9a, the data bus 942 is composed of additional lines beyond the actual read/write data lines 942_1. Specifically, the data bus 942 also includes a plurality of ECC lines 942_2 and strobe lines 942_3. As is well known, ECC bits are stored along with a cache line's data so that data corruption errors associated with the reading/writing of the cache line can be detected. For example, a 64 byte (64B) cache line may additionally include 8 bytes (8B) of ECC information such that the actual data width of the information being stored is 72 bytes (72B). Strobe lines 942_3 are typically assigned on a per data line basis (e.g., a strobe line pair is assigned for every 8 or 4 bits of data/ECC). In a double data rate approach, information can be written or read on both rising and falling edges of the strobes 942_3. With respect to the control lines 943, in an embodiment, these include select signals 943_1, clock enable lines 943_2, and on-die termination lines 943_3. As is well known, multiple DIMM cards can be plugged into a same memory channel. Traditionally, when a memory controller reads or writes data at a specific address, it reads or writes the data from/to a specific DIMM card (e.g., an entire DIMM card or possibly a side of a DIMM card or other portion of a DIMM card). The select signals 943_1 are used to activate the particular DIMM card (or portion of a DIMM card) that is the target of the operation, and deactivate the DIMM cards that are not the target of the operation. Here, the select signals 943_1 may be determined from the bits of the original read or write transaction (e.g., from the CPU) which effectively specify which memory channel, of multiple memory channels stemming from the memory controller, is the target of the transaction, and, further, which DIMM card of multiple DIMM cards plugged into the identified channel is the target of the transaction.
Select signals 943_1 could conceivably be configured such that each DIMM card (or portion of a DIMM) plugged into a same memory channel receives its own unique select signal. Here, the particular select signal sent to the active DIMM card (or portion of a DIMM card) for the transaction is activated, while the select signals sent to the other DIMM cards are deactivated. Alternatively, the select signals are routed as a bus to each DIMM card (or portion of a DIMM card). The DIMM card (or portion of a DIMM card) that is selected is determined by the state of the bus. The clock enable lines 943_2 and on-die termination lines 943_3 are power saving features that are activated before read/write data is presented on the channel's data bus 942 and deactivated after read/write data is presented on the channel's data bus 942_1. In various embodiments, such as near memory cache constructed from DRAM, the timing of near memory transactions is precisely understood in terms of the number of clock cycles needed to perform each step of a transaction. That is, for near memory transactions, the number of clock cycles needed to complete a read or write request is known, and the number of clock cycles needed to satisfy a read or write request is known. Figure 10 shows an atomic operation sequence for read and write operations of a near memory access protocol as applied to near memory (e.g., over a memory channel as just described above). According to the methodology of Figure 10, a targeted DIMM card (or portion of a DIMM card) amongst multiple DIMM cards that are plugged into a same memory channel is selected through activation of the appropriate select lines 1001. Clock enable lines and on-die termination lines are then activated 1002 (conceivably there may be some overlap of the activation of the select lines and the clock enable and on-die termination lines). A read or write command with the applicable address is then sent (e.g., over the command bus) 1003. Only the selected/activated DIMM card (or portion of a DIMM card) can receive and process the command. In the case of a write, write data is written into the activated devices (e.g., from a memory channel data bus) 1004. In the case of a read, read data from the activated devices is presented (e.g., on a memory channel data bus) 1004. Note that the process of Figure 10, although depicting atomic operations to near memory in a future memory protocol, can also be construed consistently with existing DDR protocol atomic operations. Moreover, future systems that include near memory and far memory may access near memory with an already existing DDR protocol, or with a future DRAM protocol with which future systems having only DRAM system memory technology access that DRAM system memory. Specifically, in an implementation where the DRAM near memory cache is "in front of" the far memory control logic, and where the far memory control logic circuitry does not update the DRAM near memory cache on a read transaction having a cache miss, the memory controller will drive signals on the channel in performing steps 1001, 1002, 1003 and provide the write data on the data bus for a write transaction in step 1004. In this case, the memory controller may behave much the same as existing memory controllers or memory controllers of future systems that only have DRAM system memory. A sketch of the Figure 10 sequence appears below.
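As an illustration, the Figure 10 sequence can be summarized in the following minimal sketch; the function names are hypothetical stand-ins for the channel signaling steps described above.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative channel signaling primitives (hypothetical names). */
extern void assert_select(int dimm);           /* step 1001: activate target DIMM (or portion) */
extern void assert_clock_enable_odt(int dimm); /* step 1002: clock enable + on-die termination */
extern void send_command(bool is_write, uint64_t addr);     /* step 1003: command bus packet    */
extern void transfer_data(bool is_write, uint8_t line[64]); /* step 1004: data bus transfer     */

void near_memory_access(int dimm, bool is_write, uint64_t addr, uint8_t line[64])
{
    assert_select(dimm);            /* activate the target, deactivate the others   */
    assert_clock_enable_odt(dimm);  /* may overlap with activation of select lines  */
    send_command(is_write, addr);   /* only the selected DIMM processes the command */
    transfer_data(is_write, line);  /* write data in, or read data out              */
}
```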
Specifically, in an implementation where the DRAM near memory cache is "in front of" the far memory control logic, and where the far memory control logic circuitry does not update the DRAM near memory cache on a read transaction having a cache miss, the memory controller will drive signals on the channel in performing steps 1001, 1002, 1003 and provide the write data on the data bus for a write transaction in step 1004. In this case, the memory controller may behave much the same as existing memory controllers or memory controllers of future systems that only have DRAM system memory. The same may be said for the manner in which the memory controller behaves with respect to when: i) cache is first read for either a read or a write transaction; and ii) cache is written after a cache hit for either a read or a write transaction.

ii. near memory behind

Further still, in implementations where the DRAM near memory cache is "behind" the far memory control logic, for either a read or write of near memory cache, near memory may still be accessed with a protocol that is specific to the near memory devices. For example, the near memory devices may be accessed with a well established (current or future) DRAM DDR protocol. Moreover, even if the near memory devices themselves are specifically signaled by the far memory control logic with signals that differ in some way from a well established DRAM protocol, the memory controller may nevertheless, in ultimately controlling the near memory accesses, apply a well established DRAM protocol on the channel 921 in communicating with the far memory control logic to effect the near memory accesses.

Here, the far memory control logic may perform the local equivalent (i.e., "behind" the far memory control logic rather than on the channel) of any/all of steps 1001, 1002, 1003, or aspects thereof, in various combinations. In addition, the memory controller may also perform each of these steps in various combinations with the far memory control logic, including circumstances where the far memory control logic circuitry is also performing these same steps. For example, the far memory control logic may be designed to act as a "forwarding" device that simply accepts signals from the channel originally provided by the memory controller and re-drives them to its constituent near memory platform.

Alternatively, the far memory control logic may originally create at least some of the signals needed to perform at least some of steps 1001, 1002, 1003 or aspects thereof, while the memory controller originally creates the signals needed to perform others of the steps. For instance, according to one approach, in performing a cache read, the memory controller may initially drive the select signals on the channel in performing step 1001. In response to the receipt of the select signals 1001, the far memory control logic may simply re-drive these signals to its constituent near memory platform, or may process and comprehend their meaning and enable/disable the near memory platform (or a portion thereof) according to a different selection signaling scheme than that explicitly presented on the channel by the memory controller. The select signals may also be provided directly to the near memory platform from the channel and also routed to the far memory control logic so the far memory control logic can at least recognize when its constituent near memory platform (or portion thereof) is targeted for the transaction.
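The two select-signal behaviors just described, plain re-driving versus comprehending and re-mapping, can be contrasted in a short sketch. The C below is illustrative only; the select_lines_t encoding and the local re-mapping scheme are invented assumptions, not details from the document.

#include <stdint.h>

typedef uint8_t select_lines_t;   /* one bit per near memory rank/portion */

/* (a) "forwarding" device: pass the channel select lines through as-is. */
static select_lines_t redrive_select(select_lines_t channel_select)
{
    return channel_select;
}

/* (b) comprehend and re-map: decode which portion is targeted, then
 * drive a different, purely local selection scheme toward near memory. */
static select_lines_t remap_select(select_lines_t channel_select)
{
    for (int portion = 0; portion < 8; portion++)
        if (channel_select & (1u << portion))
            return (select_lines_t)(0x80u | (unsigned)portion); /* assumed
                                           local encoding: MSB = enable */
    return 0;  /* nothing selected */
}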
In response to recognizing that at least a portion of its constituent near memory devices is targeted for the transaction, the far memory control logic may originally and locally create any/all of the clock enable signals and/or on-die termination signals in step 1002 behind the far memory control logic, between the control logic and the near memory storage devices. These signals may be crafted by the far memory control logic from a clock signal or other signal provided on the channel by the memory controller. Any clock enable signals or on-die termination signals not created by the far memory control logic may be provided on the channel by the memory controller and driven to the near memory platform directly, or re-driven by the far memory control logic.

For near memory cache read operations, the memory controller may perform step 1003 by providing a suitable request and address on the command bus of the channel. The far memory control logic may receive the command from the channel (and locally store its pertinent address information). It may also re-drive or otherwise present the read command and address to the near memory platform. With respect to step 1004, the memory controller will also receive the cache read data. The read data may be presented on the channel's data bus by the far memory control logic circuitry (in re-driving the read data provided by the near memory platform), or the read data may be driven on the channel's data bus by the near memory platform directly.

With respect to near memory channel operations that occur after a cache read, such as a write to cache after a cache hit for a write transaction, the far memory control logic circuitry or the memory controller may perform any of steps 1001, 1002, 1003 in various combinations consistent with the principles described just above. At one extreme, the far memory control logic circuitry performs each of steps 1001, 1002 and 1003 independently of the memory controller. At another extreme, the memory controller performs each of steps 1001, 1002 and 1003, and the far memory control logic circuitry re-drives all or some of them to the near memory platform, or receives and comprehends them and then applies its own signals to the near memory platform in response. In between these extremes, the far memory control logic may perform some of steps 1001, 1002, and 1003 or aspects thereof while the memory controller performs others of these steps or aspects thereof.

The atomic operations described just above may be integrated as appropriate with the embodiments disclosed above in the preceding sections.

b. Far Memory Access

Recall that where near memory cache is constructed from DRAM, for example, the timing of near memory transactions is precisely understood in terms of the number of clock cycles needed to perform each step of a transaction. That is, for near memory transactions, the number of clock cycles needed to complete a read or write request is known, and the number of clock cycles needed to satisfy a read or write request is known. As such, near memory accesses may be entirely under the control of the memory controller, or, at least, the memory controller can precisely know the time spent for each near memory access (e.g., for scheduling purposes).

By contrast, for far memory transactions, although the number of clock cycles needed to complete a read or write request over the command bus may be known (because the memory controller is communicating with the far memory control logic circuitry), the number of clock cycles needed to satisfy any such read or write request to the far memory devices themselves is unknown. As will be more apparent in the immediately following discussion, this may lead to the use of an entirely different protocol on the channel for far memory accesses than that used for near memory accesses.

Figure 11 shows a more detailed view of an embodiment of the far memory control logic circuitry 1120 and the associated interface circuitry 1135 that directly interfaces with the far memory devices.
Here, for example, the various storage cells of the far memory devices may have different "wear-out" rates depending on how frequently they are accessed (more frequently accessed cells wear out faster than less frequently accessed cells).

In an attempt to keep the reliability of the various storage cells approximately equal, logic circuitry 1120 and/or interface circuitry 1135 may include wear-out leveling algorithm circuitry 1136 that, at appropriate moments, moves the data content of more frequently accessed storage cells to less frequently accessed storage cells (and, likewise, moves the data content of less frequently accessed storage cells to more frequently accessed storage cells). When the far memory control logic has a read or write command ready to issue to the far memory platform, a wear-out leveling procedure may or may not be in operation, or, if in operation, the procedure may have only just started, may be near completion, or may be anywhere in between.

These uncertainties, as well as other possible timing uncertainties stemming from the underlying storage technology (such as different access times applied to individual cells as a function of their specific past usage rates), lead to the presence of certain architectural features. Specifically, with respect to the far memory control logic, a far memory write buffer 1137 exists to hold write requests to far memory, and a far memory read buffer 1138 exists to hold far memory read requests. Here, the presence of the far memory read and write buffers 1137, 1138 permits the queuing, or temporary holding, of read and write requests.

If a read or write request is ready to issue to the far memory devices, but the far memory devices are not in a position to receive any such request (e.g., because a wear leveling procedure is currently in operation), the requests are held in their respective buffers 1137, 1138 until the far memory devices are ready to accept and process them. Here, the read and write requests may build up in the buffers from continued transmissions of such requests from the memory controller and/or far memory control logic (e.g., in implementations where the far memory control logic is designed to automatically access near memory as described above) until the far memory devices are ready to start receiving them.

A second architectural feature is the ability of the memory controller to interleave different portions of read and write transactions (e.g., from the CPU) on the channel 1121 to enhance system throughput. For example, consider a first read transaction that endures a cache miss, which forces a read from far memory. Because the memory controller does not know when the read request to far memory will be serviced, rather than potentially idle the channel waiting for a response, the memory controller is instead free to issue a request that triggers a cache read for a next (read or write) transaction. The process is free to continue until some hard limit is reached.

For example, the memory controller is free to initiate a request for a next read transaction until it recognizes that either the far memory control logic's read buffer 1138 is full (because a cache miss would create a need for a far memory read request) or the far memory control logic's write buffer is full (because a set dirty bit on a cache miss will create a need for a far memory write request).
Similarly, the memory controller is free to initiate a request for a next write transaction until it recognizes that the far memory control logic's write buffer is full (because a set dirty bit on a cache miss will create a need for a far memory write request).

In an embodiment, the memory controller maintains a count of credits for each of the write buffer 1137 and the read buffer 1138. Each time the write buffer 1137 or read buffer 1138 accepts a new request, its corresponding credit count is decremented. When the credit count falls below or meets a threshold (such as zero) for either of the buffers 1137, 1138, the memory controller refrains from issuing on the channel any requests for a next transaction. As described in more detail below, the memory controller can comprehend the correct credit count for the read buffer by: 1) decrementing the read buffer credit count whenever a read request is understood to be presented to the read buffer 1138 (either by being sent by the memory controller over the channel directly, or by being understood to have been created and entered automatically by the far memory control logic); and 2) incrementing the read buffer credit count whenever a read response is presented on the channel 1121 for the memory controller.

Moreover, again as described in more detail below, the memory controller can comprehend the correct credit count for the write buffer by: 1) decrementing the write buffer credit count whenever a write request is understood to be presented to the write buffer 1137 (e.g., by being sent by the memory controller over the channel directly, or by being understood to have occurred automatically by the far memory control logic); and 2) incrementing the write buffer credit count whenever a write request is serviced from the write buffer 1137. In an embodiment, again as described in more detail below, the far memory control logic 1120 informs the memory controller of the issuance of write requests from the write buffer 1137 to the far memory storage device platform 1131 by "piggybacking" such information with a far memory read request response. Here, a read of far memory is returned over the channel 1121 to the memory controller. As such, each time the far memory control logic 1120 performs a read of far memory and communicates a response to the memory controller, as part of that communication, the far memory control logic also informs the memory controller of the number of write requests that have issued from the write buffer 1137 since the immediately prior far memory read response.
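The credit mechanism described above amounts to simple bookkeeping on the memory controller side. The following hedged C sketch models it; the buffer depths and function names are assumptions for illustration, while the piggybacked write buffer issue count is applied as described, returning write credits when a read response arrives.

#include <stdbool.h>

#define RD_BUF_DEPTH 8   /* assumed depth of read buffer 1138  */
#define WR_BUF_DEPTH 8   /* assumed depth of write buffer 1137 */

static int rd_credits = RD_BUF_DEPTH;
static int wr_credits = WR_BUF_DEPTH;

bool can_issue_read(void)  { return rd_credits > 0; }
bool can_issue_write(void) { return wr_credits > 0; }

/* A request entered a buffer: either sent over the channel directly,
 * or understood to have been created automatically by the far memory
 * control logic. Either way, a credit is consumed.                   */
void read_request_entered(void)  { rd_credits--; }
void write_request_entered(void) { wr_credits--; }

/* A read response frees its own read buffer slot and piggybacks the
 * number of write requests serviced from the write buffer since the
 * previous response (the "write buffer issue count").                */
void read_response_received(int wb_issue_count)
{
    rd_credits++;
    wr_credits += wb_issue_count;
}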
An additional complication is that, in an embodiment, read requests may be serviced "out of order". For example, according to one design approach for the far memory control logic circuitry, write requests in the write buffer 1137 are screened against read requests in the read buffer 1138. If any of the target addresses between the two buffers match, a read request having one or more matching counterparts in the write buffer is serviced with the new write data associated with the most recent pending write request. If the read request is located anywhere other than at the front of the read buffer queue 1138, the servicing of the read request will have the effect of servicing the request "out-of-order" with respect to the order in which read requests were entered in the queue 1138.

In various embodiments the far memory control logic may also be designed to service requests "out-of-order" because of the underlying far memory technology (which may, at certain times, permit some address space to be available for a read but not all address space). In order for the memory controller to understand which read request response corresponds to which read request transaction, in an embodiment, when the memory controller sends a read request to the far memory control logic, the memory controller also provides an identifier of the transaction ("TX_ID") to the far memory control logic. When the far memory control logic finally services the request, it includes the transaction identifier with the response.

Recall that Figure 9a and its discussion pertained to an embodiment of a memory channel and its use by a memory controller for accessing near memory cache with a first (near memory) access protocol. Notably, Figure 9a is further enhanced to show information that can be "snuck" onto the channel by the memory controller as part of the first (near memory) access protocol but that is nevertheless used by the far memory controller to potentially trigger a far memory access. Figure 9b shows the same channel and its use by the memory controller for accessing far memory with a second (far memory) access protocol.

Because in various embodiments the tag information of a cache line's full address is stored along with the data of the cache line in near memory cache (e.g., embedded tag information 411, 711, 811), note that Figure 9a indicates that, when the channel is used to access near memory cache (read or write), some portion of the bit lines 942_2 that are nominally reserved for ECC are instead used for the embedded tag information 411, 711. "Stealing" ECC lines to incorporate the embedded tag information rather than extending the size of the data bus permits, for example, DIMM cards manufactured for use in a traditional computer system to be used in a system having both near and far levels of storage. That is, for example, if a DRAM only DIMM were installed in a channel without any far memory (and thus does not act like a cache for the far memory), the full width of the ECC bits would be used for ECC information. By contrast, if a DIMM having DRAM were installed in a channel with far memory (and the DRAM therefore acts like a cache for the far memory), when the DRAM is accessed, some portion of the ECC bits 942_2 would actually be used to store the tag bits of the address of the associated cache line on the data bus. The embedded tag information 411, 711, 811 is present on the ECC lines during step 1004 of Figure 10 when the data of a near memory cache line is being written into near memory or being read from near memory.
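One way to picture the "stolen ECC lines" is as a fixed layout within each 72B transfer. The C sketch below is purely illustrative: the document does not specify how many ECC bytes remain for ECC or where the tag and dirty bits sit, so the 6-byte ECC / 15-bit tag / 1-bit dirty split here is an invented assumption.

#include <stdint.h>

struct beat72 {                 /* one 64B data + 8B "ECC" transfer       */
    uint8_t data[64];           /* cache line data (942_1)                */
    uint8_t ecc[6];             /* bits still used for actual ECC         */
    uint8_t tag_low;            /* stolen: low bits of the embedded tag   */
    uint8_t tag_high_and_dirty; /* stolen: tag high bits, dirty in bit 7  */
};

static int dirty_bit(const struct beat72 *b)
{
    return (b->tag_high_and_dirty >> 7) & 1;
}

static unsigned embedded_tag(const struct beat72 *b)
{
    return ((unsigned)(b->tag_high_and_dirty & 0x7f) << 8) | b->tag_low;
}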
Also recall from above that in certain embodiments the far memory control logic may perform certain acts "automatically" with the assistance of the additional information that is "snuck" to the far memory controller on the memory channel as part of a near memory request. These automatic acts may include: 1) automatically detecting a cache hit or miss; 2) an automatic read of far memory upon recognition of a cache miss and recognition that a read transaction is at play; and 3) an automatic write to far memory upon recognition of a cache miss coupled with recognition that the dirty bit is set.

As discussed in preceding sections, in order to perform 1), 2) and 3) above, the cache hit or miss is detected by sneaking the transaction's tag information 405, 705, 805 to the far memory control logic as part of the request that triggers the near memory cache access, and comparing it to the embedded tag information 411, 711, 811 that is stored with the cache line and that is read from near memory.

In an embodiment, referring to Figure 9a and Figure 10, the transaction's tag information 405, 705, 805 is snuck to the far memory control logic over the command bus in step 1003 (command phase) in locations that would otherwise be reproduced as unused column and/or row bits on the near memory address bus (e.g., more so column than row). The snarf of the embedded tag information 411, 711, 811 by the far memory control logic can be made in step 1004 of Figure 10 when the cache line is read from near memory (by snarfing the "stolen ECC bits" as described above). The two tags can then be compared.

Moreover, in order to perform 2) or 3) above, the far memory control logic should be able to detect the type of transaction at play (read or write). In the case where near memory is in front of the far memory control logic, again referring to Figure 9a and Figure 10, the type of transaction at play can also be snuck to the far memory control logic over the command bus in a manner like that described for 1) just above for a transaction's tag information (e.g., on the command bus during command phase 1003). In the case where the near memory is behind the far memory control logic, it is possible for the far memory control logic to detect whether the overall transaction is a read or write simply by keying off of the transaction's original request from the memory controller (e.g., compare Figures 8b and 8d). Otherwise, the same operation as for the near-memory-in-front approach can be effected.

Additionally, in order to perform 3) above, referring to Figure 9a and Figure 10, the far memory control logic should be able to detect whether the dirty bit is set. Here, since the dirty bit is information that is embedded with the data of a cache line in near memory, another ECC bit is "stolen" as described just above with respect to the embedded tag information 411, 711, 811. As such, the memory controller writes the dirty bit by presenting the appropriate value in one of the ECC bit locations 942_2 of the channel during step 1004 of a near memory write access. Similarly, the far memory control logic can detect the dirty bit by snarfing this same ECC location during a near memory read access.

Referring to Figure 9b and Figure 10, in order to address "out-of-order" issues, a transaction identifier can be sent to the far memory control logic circuit as part of a far memory read request. This can also be accomplished by presenting the transaction identifier on the command bus during the command phase 1003 of the far memory read request.
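Pulling these automatic checks together, a compact hedged C sketch of the far memory control logic's decision follows. The tag width, structure, and action flags are invented for illustration; only the decision logic itself (tag comparison for hit/miss, read-transaction detection, dirty bit) mirrors the description above.

#include <stdint.h>
#include <stdbool.h>

struct snarfed_line {
    uint64_t embedded_tag;  /* snarfed from the stolen ECC lines, step 1004 */
    bool     dirty;         /* dirty bit, also carried on a stolen ECC bit  */
};

enum { FM_AUTO_READ = 1, FM_AUTO_WRITEBACK = 2 };   /* acts 2) and 3) */

/* request_tag and the transaction type arrive snuck over the command
 * bus in step 1003; returns a bitmask of automatic far memory acts.   */
static int on_cache_access(uint64_t request_tag, bool is_read_transaction,
                           const struct snarfed_line *line)
{
    int actions = 0;
    if (request_tag == line->embedded_tag)
        return actions;               /* 1) cache hit: no far memory access */
    if (is_read_transaction)
        actions |= FM_AUTO_READ;      /* 2) miss on a read transaction      */
    if (line->dirty)
        actions |= FM_AUTO_WRITEBACK; /* 3) miss with the dirty bit set     */
    return actions;
}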
Figure 12a shows an atomic process for a read access of far memory made over the channel by the memory controller. The process of Figure 12a may be accomplished, for instance, in cases where the far memory control logic does not automatically perform a read into far memory upon detection of a cache miss for a read transaction and needs to be explicitly requested by the memory controller to perform the far memory read. Moreover, recall that in embodiments described above, the memory controller can issue a read request to the far memory control logic in the case of a cache miss even if the far memory control logic automatically initiates the far memory read (see, e.g., Figures 7b and 8b).

Referring to Figures 9b, 11 and 12a, a read request having a far memory read address is issued 1201 by the memory controller over the command bus 941. The read request issued over the command bus also includes a transaction identifier that is kept (e.g., in a register) by the far memory control logic 1120. The request is placed 1202 in a read buffer 1138. Write requests held in a write buffer 1137 are analyzed to see if any have a matching target address 1203. If any do, the data for the read request response is taken from the most recently created write request 1204. If none do, eventually, the read request is serviced from the read buffer 1138, read data is read from the far memory platform 1131, and ECC information for the read data is calculated and compared with the ECC information stored with the read data 1205. If the ECC check fails, an error is raised by the far memory control logic 1206. Here, referring to Figure 9b, the error may be signaled over one of the select 943_1, clock enable 943_2 or ODT 943_3 lines.

If the read response was taken from the write buffer 1137 or the ECC check was clean, the far memory control logic 1120 informs the memory controller that it has a read response ready for transmission 1207. In an embodiment, as observed in Figure 9b, this indication 990 is made over one of a select signal line 943_1, clock enable signal line 943_2 or an on-die termination line 943_3 of the channel that is usurped for this purpose. When the memory controller (which in various embodiments has a scheduler to schedule transactions on the channel) decides it can receive the read response, it sends an indication 991 to the far memory control logic that it should begin to send the read response 1208. In an embodiment, as observed in Figure 9b, this indication 991 is also made over one of a select line 943_1, clock enable signal line 943_2 or an on-die termination line 943_3 of the channel that is usurped for this purpose.

The far memory control logic 1120 then determines how many write requests have issued from the write buffer 1137 since the last read response was sent (the "write buffer issue count"). The read data is then returned over the channel along with the transaction identifier and the write buffer issue count 1209. In an embodiment, since the ECC calculation was made by the far memory control logic, the data bus lines that are nominally used for ECC are essentially "free". As such, as observed in Figure 9b, the transaction identifier 992 and write buffer issue count 993 are sent along the ECC lines 942_2 of the channel from the far memory controller to the memory controller. Here, the write buffer issue count 993 is used by the memory controller to calculate a new credit count so as to permit the sending of new write requests to the far memory control logic 1210. The memory controller can self regulate its sending of read requests by keeping track of the number of read requests that have been entered into the read buffer 1138 and the number of read responses that have been returned.
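The screening of the write buffer against an incoming read request (steps 1203-1205) can be sketched as follows. The fixed-size array standing in for the hardware write buffer and all names are assumptions; the newest-first search implements the "most recent pending write wins" rule described above.

#include <stdint.h>
#include <string.h>

#define WB_DEPTH 8

struct wr_req { uint64_t addr; uint8_t data[64]; };

static struct wr_req write_buffer[WB_DEPTH];
static int wb_count;   /* pending writes, oldest at index 0 */

/* Returns 1 and copies the freshest matching write data (step 1204),
 * or returns 0 so the read is serviced from far memory instead and
 * ECC-checked (step 1205). A hit anywhere other than the front of the
 * read queue completes the read "out-of-order".                      */
static int service_read_from_write_buffer(uint64_t addr, uint8_t out[64])
{
    for (int i = wb_count - 1; i >= 0; i--) {     /* newest first */
        if (write_buffer[i].addr == addr) {
            memcpy(out, write_buffer[i].data, 64);
            return 1;
        }
    }
    return 0;
}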
Figure 12b shows a basic atomic process for a write access of far memory over the channel by the memory controller. The process of Figure 12b may be accomplished, for instance, in cases where the far memory control logic does not automatically perform a write into far memory (e.g., on a cache miss with the dirty bit set for either a read transaction or a write transaction) and needs to be explicitly requested by the memory controller to do so. The write process of Figure 12b may also be utilized in channels that do not have any resident near memory (e.g., a PCMS only channel).

According to the process of Figure 12b, the memory controller receives a write transaction 1221. The memory controller checks its write buffer credit count to see if enough credits exist to send a write request 1222. If so, the memory controller sends a write request 1223 to the far memory control logic over the command bus. In response, the far memory control logic places the request in its write buffer 1224. Eventually, the write request is serviced from the write buffer, and ECC information is calculated for the data to be written into far memory and stored along with the data into far memory 1224.

Enhanced write processes were discussed previously with respect to Figure 7d (near memory in front) and Figure 8d (near memory behind). Here, the operation of the far memory control logic and embodiments of specific components of the channel for effecting these write processes have already been discussed above. Notably, however, in addition, with respect to the enhanced write process of Figure 7d, the memory controller can determine from the cache read information whether a write to far memory is needed in the case of a cache miss where the dirty bit is set. In response, the memory controller can decrement its write buffer credit count, as it understands that the far memory control logic will automatically perform the write into far memory but will also automatically enter a request into the write buffer 1224 in order to do so. With respect to the enhanced write process of Figure 8d, the memory controller can also receive the cache read information and operate as described just above.

Of course, the far memory atomic operations described above can be utilized, as appropriate, over a channel that has only far memory technology (e.g., a DDR channel having only DIMMs plugged into it whose storage technology is PCMS based).
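On the memory controller side, the credit check of step 1222 gates the issue of a write request. The hedged C sketch below reuses the credit helpers sketched earlier in this section; it is illustrative only, and the buffering and ECC work of step 1224 is left as a comment.

#include <stdbool.h>
#include <stdint.h>

bool can_issue_write(void);       /* from the credit sketch above */
void write_request_entered(void);

static bool handle_write_transaction(uint64_t addr)   /* step 1221 */
{
    if (!can_issue_write())       /* 1222: no credit, hold the request */
        return false;
    write_request_entered();      /* one write buffer slot consumed    */
    /* 1223: send the request over the command bus; 1224: the far
     * memory control logic buffers it, later computes ECC and commits
     * data plus ECC to far memory (not modeled here).                 */
    (void)addr;
    return true;
}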
The far memory control logic as described above can be implemented on one or more semiconductor chips. Likewise, the logic circuitry for the memory controller can be implemented on one or more semiconductor chips.

Although much of the above discussion was directed to near memory system memory and far memory system memory devices that were located external to the CPU die and CPU package (e.g., on DIMM cards that plug into a channel that emanates from the CPU package), architecturally, the above embodiments and processes could nevertheless also be implemented within a same CPU package (e.g., where a channel is implemented with conductive traces on a substrate to which DRAM and PCMS devices are mounted along with the CPU die in a same CPU package, with the far memory control logic designed into the CPU die or another die mounted to the substrate), or even on the CPU die itself (e.g., where, besides logic circuitry to, e.g., implement the CPU and memory controller, the CPU die also has integrated thereon DRAM system memory and PCMS system memory, and the "channel" is implemented with (e.g., multi-level) on-die interconnect wiring).

TRAINING

Training is an embedded configuration scheme by which communicatively coupled semiconductor devices can "figure out" what the appropriate signaling characteristics between them should be. In the case where only DRAM devices are coupled to a same memory channel, the memory controller is trained to the read data provided by each rank of DRAM. The memory controller is also trained to provide properly timed write data to each rank. Training occurs on an 8 bit basis for x8 DRAMs and on a 4 bit basis for x4 DRAMs. Differences in trace lengths between 4 or 8 bit groups require this training resolution (within the 4 or 8 bit group, the traces are required to be matched). The host should do the adjustments because the DRAMs do not have adjustment capability. This saves both cost and power on the DRAMs.

When snarfing is to be done because PCMS and DRAM are coupled to a same channel, the far memory controller must be trained also. For reads from near memory, the far memory controller must be trained to accept the read data. If read data is to be snarfed by the DRAMs from the far memory controller, the far memory controller must be trained to properly time data to the DRAMs (which are not adjustable), followed by the host being trained to receive the resulting data. In the case of the far memory controller snarfing write data, a similar two step procedure would be used.

Additional exemplary embodiments

Example 1. A method performed by logic circuitry disposed on a card having a connector to plug into a memory channel that supports near memory cache accesses and far memory accesses, comprising: receiving from said memory channel a first tag component of a target address of a read request transaction being processed by a host that is coupled to said memory channel; receiving a second tag component of an address of a cache line read from a near memory cache in response to said read request transaction; and comparing said first and second tag components to determine if said cache line corresponds to a cache hit or a cache miss.

Example 2. The method of example 1 further comprising performing at least one of the following in response to detecting that a cache miss has occurred: automatically reading a desired cache line from far memory; detecting that a dirty bit of said cache line read from near memory is set and automatically writing said cache line read from said near memory into far memory.

Example 3.
The method of example 1 wherein after said reading of said desired cache line from far memory said logic circuitry further performs an ECC calculation on data of said desired cache line.

Example 4. The method of example 1 wherein said near memory cache is implemented with DRAM technology and said far memory is implemented with PCM technology.

Example 5. The method of example 1 wherein said near memory cache resides on said card.

Example 6. The method of example 1 further comprising performing the following in response to detecting that a cache miss has occurred: receiving from said host an identifier of said read request transaction and presenting said identifier of said read request transaction on said channel as part of a communication on said channel that transports data of said cache line read from far memory to said host.

Example 7. The method of example 1 wherein said first tag component is received with a first read request presented on said channel by said host according to a first channel protocol used for accessing said near memory.

Example 8. The method of example 7 wherein said second tag component is received with a second read request presented on said channel by said host according to a second channel protocol used for accessing said far memory.

Example 9. A semiconductor chip, comprising: an interface to a memory channel; a read buffer to hold a far memory read request received from said memory channel; logic circuitry to detect a cache miss of a cache line read from a near memory in response to a near memory read request issued on said memory channel, said near memory a cache for said far memory, said logic circuitry to additionally perform at least one of the following in response thereto: initiate a read of a desired cache line from said far memory, said desired cache line containing data sought by a transaction that caused said near memory read request to be issued on said memory channel; detect that a dirty bit of said cache line read from near memory is set and automatically write said cache line read from said near memory into far memory.

Example 10. The semiconductor chip of example 9 wherein said logic circuitry receives from said first interface both tag information of an address of said cache line read from near memory and tag information of said transaction's address.

Example 11. The semiconductor chip of example 9 wherein said logic circuitry includes a second interface distinct from said first interface to couple to said far memory, and wherein said semiconductor chip receives through said first interface tag information of said transaction's address.

Example 12. The semiconductor chip of example 9 further comprising ECC logic to calculate ECC information for said cache line read from said near memory and/or said cache line written into said far memory.

Example 13. The semiconductor chip of example 9 further comprising first register space to store a first tag component of said transaction's address, and second register space to store a second tag component of an address of said cache line read from said near memory, said second tag component embedded with said cache line read from said near memory.

Example 14. The semiconductor chip of example 9 wherein said near memory is implemented with DRAM and said far memory component is implemented with PCM.

Example 15. The semiconductor chip of example 14 wherein said semiconductor chip further comprises wear out leveling algorithm logic circuitry for said PCM far memory.

Example 16.
The semiconductor chip of example 14 wherein said semiconductor chip further comprises a write request buffer to hold write requests to said far memory, and a read request buffer to hold read requests to said far memory.

Example 17. A semiconductor chip comprising: memory controller circuitry having interface circuitry to couple to a memory channel, the memory controller circuitry including: first logic circuitry to implement a first memory channel protocol on the memory channel through the interface circuitry, said first memory channel protocol specific to a first volatile system memory technology; second logic circuitry to implement a second memory channel protocol on the memory channel through the interface circuitry, said second memory channel protocol specific to a second non volatile system memory technology, said second memory channel protocol being a transactional protocol.

Example 18. The semiconductor chip of example 17 wherein said first type of memory storage technology is DRAM and said second type of memory storage technology is PCM.

Example 19. The semiconductor chip of example 17 wherein said memory channel includes a command bus, a data bus and control lines.

Example 20. The semiconductor chip of example 17 wherein said first logic circuitry does not receive a read request from said first interface as part of implementing said first memory channel protocol and wherein said second logic circuitry receives a read request from said interface as part of implementing said second memory channel protocol.

Example 21. A computer system, comprising: memory controller circuitry having interface circuitry to couple to a memory channel, the memory controller circuitry including: first logic circuitry to implement a first memory channel protocol on the memory channel through the interface circuitry, said first memory channel protocol specific to a first volatile system memory technology; second logic circuitry to implement a second memory channel protocol on the memory channel through the interface circuitry, said second memory channel protocol specific to a second non volatile system memory technology, said second memory channel protocol being a transactional protocol; a first memory device composed of the first volatile system memory technology coupled to the memory channel; and a second memory device composed of the second non volatile system memory technology coupled to the memory channel.

Example 22. The computer system of example 21 wherein said first type of memory storage technology is DRAM and said second type of memory storage technology is PCM.

Example 23. The computer system of example 21 wherein said memory channel includes a command bus, a data bus and control lines.

Example 24. The computer system of example 23 further comprising, along said command bus, far memory control logic circuitry between the channel and the second memory device.

Example 25. The computer system of example 24 wherein said far memory control circuitry is also between said channel and said first memory device along said command bus.
A universal asynchronous receiver/transmitter (UART) module is disclosed. The UART module may include a receiver unit clocked by a programmable receiver clock and configured to sample an incoming data signal, the receiver unit comprising a counter clocked by said receiver clock, wherein the counter is reset to start counting with every falling edge of the data signal and triggers a BRK detection signal if the counter reaches a programmable threshold value.
1. A universal asynchronous receiver/transmitter (UART) module comprising: a receiver unit that is clocked by a programmable receiver clock configured to sample an incoming data signal and includes a counter clocked by the receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal, and a BRK detection signal is triggered if the counter reaches a programmable threshold.

2. The UART of claim 1, wherein the counter stops counting on a rising edge of the data signal.

3. The UART according to claim 1 or 2, wherein the threshold may be programmed to 11.

4. The UART according to any one of the preceding claims, wherein the receiver unit comprises a state machine to control the counter.

5. The UART of claim 4, wherein the state machine is programmable to operate in different operating modes.

6. The UART of claim 4 or 5, further comprising a first-in-first-out buffer memory that receives a plurality of sampled data.

7. The UART of any of the preceding claims, wherein the programmable receiver clock is coupled to a baud rate generator.

8. A microprocessor comprising: a universal asynchronous receiver/transmitter (UART) module including a receiver unit that is clocked by a programmable receiver clock configured to sample an incoming data signal and includes a counter clocked by the receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal, and a BRK detection signal is triggered if the counter reaches a programmable threshold.

9. The microprocessor of claim 8, wherein the counter stops counting at a rising edge of the data signal.

10. The microprocessor according to claim 8 or 9, wherein the threshold can be programmed to 11.

11. A microprocessor according to claim 8, 9 or 10, wherein said receiver unit includes a state machine to control said counter.

12. The microprocessor of claim 11, wherein the state machine is programmable to operate in different operating modes.

13. The microprocessor of claim 11 or 12, further comprising a first-in-first-out buffer memory that receives a plurality of sampled data.

14. A microprocessor according to any one of the preceding claims 8 to 13, wherein the programmable receiver clock is coupled to a baud rate generator.

15. A method for controlling a universal asynchronous receiver/transmitter (UART) module, the method comprising: clocking a receiver unit by a programmable receiver clock configured to sample an incoming data signal; resetting a counter clocked by the programmable receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal; and triggering a BRK detection signal if the counter reaches a programmable threshold.

16. The method of claim 15, wherein the counter stops counting on a rising edge of the data signal.

17. The method of claim 15 or 16, wherein the threshold may be programmed to 11.

18. A method according to any one of the preceding claims 15 to 17, wherein the receiver unit comprises a state machine to control the counter.

19. The method of claim 18, wherein the state machine is programmable to operate in different operating modes.

20. The method of claim 18 or 19, further comprising transmitting a plurality of sampled data to a first-in-first-out buffer memory.
Stand-alone UART BRK detection

Cross-reference to related applications

This application claims the benefit of U.S. Provisional Patent Application No. 62/183,006, filed on June 22, 2015, which is hereby incorporated by reference for all purposes.

Technical field

The present invention relates to a serial interface, and in particular to a Universal Asynchronous Receiver/Transmitter (UART) interface with BRK detection.

Background

UARTs are well known and commonly used in microcontrollers to provide communication channels. The UART interface translates parallel data into serial transmissions. There are various types of protocols used for UART communication, as defined by various communication standards such as EIA, RS-232, RS-422, or RS-485. Other protocols (such as the LIN protocol) use the same interface configuration as the RS-232 interface.

Summary of the Invention

There is a need to provide a UART that allows automatic detection of a BRK whenever a BRK is received. A universal asynchronous receiver/transmitter (UART) module is disclosed. The UART module may include a receiver unit that is clocked by a programmable receiver clock configured to sample an incoming data signal and includes a counter clocked by the receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal and triggers the BRK detection signal if the counter reaches a programmable threshold.

In various embodiments, a universal asynchronous receiver/transmitter (UART) module is disclosed. The module may include a receiver unit that is clocked by a programmable receiver clock configured to sample an incoming data signal and includes a counter clocked by the receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal and triggers the BRK detection signal if the counter reaches a programmable threshold.

In some embodiments, the programmable receiver clock may be coupled to a baud rate generator. In some embodiments, the counter stops counting on the rising edge of the data signal. In the same or alternative embodiments, the threshold may be programmed to 11.

In some embodiments, the receiver unit may include a state machine to control the counter. In such embodiments, the state machine may be programmed to operate in different operating modes. In such embodiments, the interface may also include a first-in-first-out buffer memory that receives multiple sampled data.

In various embodiments, a microprocessor is disclosed. The microprocessor may include a universal asynchronous receiver/transmitter (UART) module that includes a receiver unit that is clocked by a programmable receiver clock configured to sample an incoming data signal and includes a counter clocked by the receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal and triggers a BRK detection signal if the counter reaches a programmable threshold.

In some embodiments, the programmable receiver clock may be coupled to a baud rate generator. In some embodiments, the counter stops counting at a rising edge of the data signal. In the same or alternative embodiments, the threshold may be programmed to 11.

In some embodiments, the receiver unit may include a state machine to control the counter. In such embodiments, the state machine may be programmed to operate in different operating modes.
In such embodiments, the interface may also include a first-in-first-out buffer memory that receives multiple sampled data.

In various embodiments, a method for controlling a universal asynchronous receiver/transmitter (UART) module is disclosed. The method can include: clocking a receiver unit by a programmable receiver clock configured to sample an incoming data signal; resetting a counter clocked by the programmable receiver clock, wherein the counter is reset to start counting at each falling edge of the data signal; and triggering the BRK detection signal if the counter reaches a programmable threshold.

Description of the drawings

FIG. 1 illustrates a BRK received by a UART at the beginning of a byte according to some embodiments of the present invention;

FIG. 2 illustrates a BRK received by a UART in the middle of a byte according to some embodiments of the present invention;

FIG. 3 illustrates an example known transmitter module of a known universal asynchronous receiver transmitter implemented in a known microcontroller;

FIG. 4 illustrates an example known receiver module of a known universal asynchronous receiver transmitter implemented in a known microcontroller; and

FIG. 5 illustrates a receiver unit for a UART or any other similar serial interface unit operable to provide an automatic BRK detector according to some embodiments of the present invention.

Detailed description

Some legacy UARTs used by many microcontrollers do not have special logic to detect a break ("BRK") character. In some embodiments, a BRK is an 8-bit zero with a framing error. FIG. 1 illustrates the BRK 102 received by the UART at the beginning of byte 104 in accordance with some embodiments of the present invention. The receive line (e.g., "RXS") remaining low for 11 clock cycles after the falling edge of the start bit indicates a BRK. In general, the receiver will begin decoding after the start bit, and the stop bit clock following the eight data bit clocks will then cause a framing error (e.g., FERIF_qclk). Conventional receivers can only detect this error. In contrast, an enhanced system according to various embodiments can automatically detect the BRK itself. Due to the fact that a BRK has a predetermined length (e.g., 11 clocks), a BRK detector counter (described in more detail below with reference to FIG. 5) may detect this BRK signal and generate a corresponding detection signal. The counter can start at the falling edge of the receive line and stop at the next rising edge. If the counter reaches a predetermined BRK count, a BRK is detected.

In some known systems, the UART may not recognize a BRK if the UART receives the BRK in the middle of a byte. FIG. 2 illustrates the BRK 202 received by the UART in the middle of byte 204 according to some embodiments of the present invention. This may not be the ideal operation for a protocol such as a local interconnect network ("LIN"). According to various embodiments, the UART module may include a hardware counter in its receiver unit to detect a BRK whenever one is signaled. According to various embodiments, a hardware counter that counts low cycles is provided in the interface. As soon as the receive ("RX") line goes low, the counter starts counting. Depending on the serial data transmitted, the BRK detector counter can be set and reset many times until the BRK signal starts. A brief low period caused by the serial data will not trigger any detection. However, a BRK signal in the middle of a byte transmission can be easily detected by the counter and a corresponding detection signal can be generated.
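The counter behavior just described (start on a falling edge, stop on a rising edge, detect after 11 low clocks) is small enough to sketch completely. The following self-contained C model is an illustration under assumed names; it is not taken from the patent's hardware description, but it reproduces the described detection rule, including immunity to brief low periods within normal serial data.

#include <stdio.h>
#include <stdbool.h>

struct brk_detector {
    int  count;       /* low clock cycles counted so far          */
    int  threshold;   /* programmable, e.g. 11                    */
    bool prev_rx;     /* previous sampled RX level                */
    bool brk;         /* detection signal                         */
};

/* Call once per receiver clock with the sampled RX level. */
static void brk_detector_clock(struct brk_detector *d, bool rx)
{
    if (d->prev_rx && !rx)                  /* falling edge: reset, start */
        d->count = 0;
    if (!rx && ++d->count >= d->threshold)  /* low long enough: BRK       */
        d->brk = true;
    if (rx)                                 /* rising edge/high: stop     */
        d->count = 0;
    d->prev_rx = rx;
}

int main(void)
{
    struct brk_detector d = { 0, 11, true, false };
    for (int i = 0; i < 12; i++)            /* hold the line low */
        brk_detector_clock(&d, false);
    printf("BRK detected: %d\n", d.brk);    /* prints 1 */
    return 0;
}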
FIG. 3 illustrates an example known transmitter module of a known universal asynchronous receiver transmitter implemented in a known microcontroller. FIG. 4 illustrates an example known receiver module of a known universal asynchronous receiver transmitter implemented in a known microcontroller. The UART module is a serial I/O communication peripheral device. The UART module contains all the clock generators, shift registers, and data buffers needed to perform input or output serial data transfers independently of device program execution. The UART (also called a Serial Communication Interface (SCI)) can be configured as a full-duplex asynchronous system. Full-duplex mode is suitable for communication with peripheral systems such as CRT terminals and personal computers.

In various embodiments, the UART modules illustrated in FIGS. 3 through 4 may include the following capabilities, among others: full-duplex asynchronous transmission and reception; two-character input buffering; single-character output buffering; programmable 8-bit or 9-bit character length; 9-bit mode address detection; input buffer overflow error detection; framing error detection of received characters; and sleep operation.

In various embodiments, the UART module implements the following additional features, making it more suitable for use with a local interconnect network ("LIN") bus system: automatic detection and calibration of baud rates; wake-up on break reception; and 13-bit break character transmission. During sleep mode, all clocks to the UART are suspended. Because of this, the baud rate generator is inactive and correct character reception cannot be performed. The auto-wakeup feature allows the controller to wake up due to activity on the receive/data transfer ("RX/DT") line. In some embodiments, this feature may only be available in asynchronous mode. The auto-wakeup feature can be enabled by setting a specific memory section of the UART. For example, the auto-wakeup feature can be enabled by setting the wake-up enable ("WUE") bit of the BAUDCON register. Once set, the normal receive sequence on RX/DT can be disabled and the Enhanced Universal Synchronous Asynchronous Receiver Transmitter ("EUSART") can be left idle to monitor wake-up events independent of the CPU mode. The wake-up event may consist of a high-to-low transition on, for example, the RX/DT line. (This is consistent with the start of the sync break or wake-up signal character for the LIN protocol.) The EUSART module can generate a receive interrupt flag (e.g., an RCIF interrupt) consistent with the wake-up event. In the normal CPU mode of operation, the interrupt can be generated synchronously with the Q clock, and if the device is in sleep mode, the interrupt is generated asynchronously. The interrupt condition can be cleared by reading another memory section of the UART (for example, the RCREG register). At the end of the break, the WUE bit can be automatically cleared by a low-to-high transition on the RX line. This signals the end of the break event. At this point, the EUSART module can be in idle mode waiting to receive the next character.

The UART can use the standard non-return-to-zero (NRZ) format to transmit and receive data. NRZ is implemented with two levels: a high voltage output ("VOH") mark state that represents a "1" data bit; and a low voltage output ("VOL") space state that represents a "0" data bit.
NRZ refers to the fact that successively transmitted data bits having the same value remain at the bit output level without returning to an intermediate level between each bit transmission. The NRZ transmit port idles in the mark state. Each character transmission consists of a start bit, followed by 8 or 9 data bits, and is always terminated by one or more stop bits. The start bit is always a space and the stop bits are always marks. The most common data format is 8 bits. Each transmitted bit lasts for a period of 1/(baud rate). A dedicated on-chip 8-bit/16-bit baud rate generator is used to derive the standard baud rate frequencies from the system oscillator. The UART can transmit and receive the least significant bit first. The transmitter and receiver of the UART are functionally independent but share the same data format and baud rate. According to some embodiments, parity may not be supported in hardware, but parity may be implemented in software and stored as a ninth data bit.

Asynchronous mode is typically used in some embodiments implementing the RS-232 standard. Referring again to FIG. 4, in some embodiments, data may be received on the RX/DT pin 402, which drives the data recovery block 404. In some embodiments, data recovery block 404 may be a high speed shifter operating at a higher rate than the baud rate (e.g., 16 times the baud rate). In some embodiments, receiver 400 may also include a serial receive shift register ("RSR") 406. RSR 406 may be a shifter operating at (or approximately at) the bit rate. When all eight or nine bits of a character have been shifted in, they are immediately passed to a first-in-first-out ("FIFO") memory 408. In some embodiments, memory 408 may be a two-character FIFO. In some embodiments, the FIFO buffer allows receiving two full characters and starting a third character before software must begin servicing the UART receiver. According to some embodiments, the FIFO and RSR register cannot be directly accessed through software. Access to the received data may be provided through a memory section of the UART (e.g., the RCREG register).

FIG. 5 illustrates a receiver unit 500 for a UART or any other similar serial interface unit operable to provide an automatic BRK detector according to some embodiments of the present invention. In some embodiments, the receiver unit 500 may be clocked by a programmable receiver clock 504. In some embodiments, programmable receiver clock 504 may be clocked by a baud rate generator 506. The programmable receiver clock 504 is operable to sample incoming data signals (e.g., incoming data at the receiver pin 508).

In some embodiments, receiver unit 500 may include a counter that is clocked by programmable receiver clock 504. The counter may be reset to start counting at each falling edge of the data signal and trigger the BRK detection signal if the counter reaches a programmable threshold. For example, as described in greater detail above with reference to FIGS. 1-4, a BRK may span 11 clock cycles. Therefore, if the counter reaches 11, it can trigger the BRK detection signal.

In some embodiments, the receiver unit may include a configurable state machine 502 coupled to the BRK detector 504. In some embodiments, the configuration of the state machine 502 may be controlled by a configuration register signal (e.g., MODE[3:0]). For example, as illustrated in FIG. 5, the configuration register signal (MODE[3:0]) has 4 bits, and various settings may be allowed. Other registers can be used. In some embodiments, state machine 502 may be coupled to BRK detector 504.
In various embodiments, BRK detector 504 may be a counter that starts and stops at the falling and rising edges, respectively, of the incoming received signal.

In some embodiments, the counter may be further coupled to a memory buffer 508. For example, the counter may be coupled to a first-in-first-out memory buffer, such as the example buffer illustrated above.

According to various embodiments, the UART described above allows automatic detection of a BRK whenever a BRK is received.
Examples provide a baseband processing device, a radio frequency device, a baseband processing apparatus, a radio frequency apparatus, a modem, a radio transceiver, a mobile terminal, methods and computer programs for baseband processing and for radio frequency processing. A baseband processing device (10) is configured for a mobile communication modem (100) and to process a baseband representation of a radio frequency signal. The baseband processing device (10) includes an interface (12) configured to exchange payload data and control data with a radio frequency device (50). The interface (12) includes a plurality of parallel communication links to the radio frequency device (50). A clock link (12a) is configured to communicate a clock signal common to the baseband processing device (10) and the radio frequency device (50). A control link (12b) is configured to communicate control data. One or more payload links (12c) are configured to communicate payload data between the baseband processing device (10) and the radio frequency device (50).
A baseband processing apparatus (10) for a mobile communication modem (100), the baseband processing apparatus (10) being configured to process a baseband representation of a radio frequency signal, the baseband processing apparatus (10) comprising means for exchanging (12) payload data and control data with a radio frequency apparatus (50), wherein the means for exchanging (12) is configured to generate a plurality of parallel communication links to the radio frequency apparatus (50), wherein a clock link (12a) is configured to communicate a clock signal common to the baseband processing apparatus (10) and the radio frequency apparatus (50), wherein a control link (12b) is configured to communicate control data, and wherein one or more payload links (12c) are configured to communicate payload data between the baseband processing apparatus (10) and the radio frequency apparatus (50), wherein the baseband processing apparatus (10) is further configured to adapt a power used for communicating on the one or more payload links (12c) based on a payload data transmission rate used on the one or more payload links (12c).

The baseband processing apparatus (10) of claim 1, wherein the one or more payload links (12c) comprise a payload uplink, which is configured to transmit payload data from the baseband processing apparatus (10) to the radio frequency apparatus (50), and wherein the one or more payload links (12c) comprise a payload downlink, which is configured to receive payload data from the radio frequency apparatus (50) at the baseband processing apparatus (10).

The baseband processing apparatus (10) of one of the claims 1 or 2, wherein the means for exchanging (12) is configured to transfer link management information for the plurality of parallel communication links on the control link (12b), and wherein the means for exchanging (12) is configured to transfer payload data on the one or more payload links (12c) without header information.

The baseband processing apparatus (10) of one of the claims 1 to 3, further configured to adapt a power used for communicating on the one or more payload links (12c) based on an allowable error ratio on the one or more payload links (12c).

The baseband processing apparatus (10) of one of the claims 1 to 4, further configured to adapt an error rate on the one or more payload links (12c) based on an error ratio for radio frequency transmission of the payload data, and further configured to adapt an error rate on the one or more payload links (12c) based on a payload data transmission rate used on the one or more payload links (12c).

A radio frequency apparatus (50) for a mobile communication modem (100), the radio frequency apparatus (50) being configured to process radio frequency signals of a mobile communication system, the radio frequency apparatus (50) comprising means for exchanging (52) payload data and control data with a baseband processing apparatus (10), wherein the means for exchanging (52) is configured to generate a plurality of parallel communication links to the baseband processing apparatus (10), wherein a clock link (52a) is configured to communicate a clock signal common to the baseband processing apparatus (10) and the radio frequency apparatus (50), wherein a control link (52b) is configured to communicate control data, and wherein one or more payload links (52c) are configured to communicate payload data between the baseband processing apparatus (10) and the radio frequency apparatus (50), wherein the radio frequency apparatus (50) is further
configured to adapt a power used for communicating on the one or more payload links (52c) based on a payload data transmission rate used on the one or more payload links (52c).The radio frequency apparatus (50) of claim 6, wherein the one or more payload links (52c) comprise a payload uplink, which is configured to transmit payload data from the baseband processing apparatus and to the radio frequency apparatus (50), and wherein the one or more payload links (52c) comprise a payload downlink link, which is configured to receive payload data from the radio frequency apparatus (50) at the baseband processing apparatus (10).The radio frequency apparatus (50) of one of the claims 6 or 7, wherein means for exchanging (52) is configured to transfer link management information for the plurality of parallel communication links on the control link (52b), and wherein the means for exchanging (52) is configured to transfer payload data on the one or more payload links (52c) without header information.The radio frequency apparatus (50) of one of the claims 6 to 8, further configured to adapt a power used for communicating on the one or more payload links (52c) based on an allowable error ratio on the one or more payload links (52c).The radio frequency apparatus (50) of one of the claims 6 to 9 further configured to adapt an error rate on the one or more payload links (52c) based on an error ratio for radio frequency transmission of the payload data, and further configured to adapt an error rate on the one or more payload links (52c) based on a payload data transmission rate used on the one or more payload links (52c).A modem (100) for a mobile communication system comprising:• the baseband processing apparatus (10) of one of the claims 1 to 5; and• the radio frequency apparatus (50) of one of the claims 6 to 10.A radio transceiver comprising the modem (100) of claim 11.A mobile terminal comprising the modem (100) of claim 11.A method for exchanging data between a baseband apparatus (10) and a radio frequency apparatus (50) of a mobile communication modem (100), the baseband processing apparatus (10) being configured to process a baseband representation of a radio frequency signal, the method comprises exchanging (32) payload data and control data between the baseband processing apparatus (10) and the radio frequency apparatus (50), the method comprisinggenerating (34) a plurality of parallel communication links between the baseband processing apparatus (10) and the radio frequency apparatus (50),communicating (36) a clock signal common to the baseband processing apparatus (10) and the radio frequency apparatus (50) on a clock link (12a; 52a),communicating (38) control data on a control link (12b; 52b); andcommunicating (40) payload data on one or more payload links (12c; 52c)adapting a power used for communicating on the one or more payload links (52c) based on a payload data transmission rate used on the one or more payload links (52c).A computer program having a program code for performing the method of claim 14, when the computer program is executed on a computer, a processor, or a programmable hardware component.
Field
Examples relate to a baseband processing device, a radio frequency device, a baseband processing apparatus, a radio frequency apparatus, a modem, a radio transceiver, a mobile terminal, methods and computer programs for baseband processing and for radio frequency processing, and in particular, but not exclusively, to an interface concept between a baseband processing part and a radio frequency processing part of a radio transceiver.
Background
Modems of radio communication devices may comprise components for baseband processing and components for radio frequency processing. For example, chipsets of a cellular modem may comprise (at least) a baseband chip and (at least) a Radio Frequency (RF) transceiver chip. Accordingly, data is exchanged between the two chips. This data can be divided into control plane data (carrying control information, such as protocol information, link control information, etc.) and data plane data (essentially carrying a digital representation of the antenna signals, e.g. payload data). Due to the nature of at least some digital interface standards, a portion of the overall data traffic consists of protocol or signaling overhead for link management.
Document US 2004/0228395 A1 discloses a concept for using digital unidirectional differential links between a baseband component and a radio-frequency component. Document US 2004/0204096 A1 discloses a digital interface for a wireless communication system with a reduced number of connectors. Document US 2012/0203943 A1 describes a radio communication device enabling a serial interface to restart transmission in a short time when an interface setting is changed. Document US 2006/0013324 A1 discloses an interface between a baseband component and a radio-frequency component, which is realized by an exclusively digital interface for a reception and a transmission path. Document EP 0 720 304 A2 describes a portable radio apparatus, which comprises an apparatus body and a removable radio system unit, which is detachably connected to the apparatus body. Document US 2007/0054629 A1 discloses components of a radio-frequency (RF) apparatus including transceiver circuitry and frequency modification circuitry of a crystal oscillator circuit that generates a reference signal with adjustable frequency. The components may be partitioned in a variety of ways, for example, as one or more separate integrated circuits.
Brief description of the Figures
Some examples of apparatuses, methods and/or computer programs will be described in the following by way of example only, and with reference to the accompanying figures, in which
Fig. 1 illustrates examples of a baseband processing device or apparatus, a radio frequency device or apparatus, and a modem;
Fig. 2 shows a timing diagram of a communication in an example;
Fig. 3 depicts an eye-diagram or eye-pattern in an example;
Fig. 4 shows a block diagram of an example mobile communication system with a base station transceiver and a mobile transceiver comprising example modems; and
Fig. 5 shows a block diagram of an example method for exchanging data between a baseband apparatus and a radio frequency apparatus of a mobile communication modem.
Detailed Description
The invention is defined by the appended claims. Various examples will now be described more fully with reference to the accompanying drawings in which some examples are illustrated.
In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity.
Accordingly, while further examples are capable of various modifications and alternative forms, some particular examples thereof are shown in the figures and will subsequently be described in detail. However, this detailed description does not limit further examples to the particular forms described. Further examples may cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Like numbers refer to like or similar elements throughout the description of the figures, which may be implemented identically or in modified form when compared to one another while providing for the same or a similar functionality.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, the elements may be directly connected or coupled, or connected or coupled via one or more intervening elements. If two elements A and B are combined using an "or", this is to be understood to disclose all possible combinations, i.e. only A, only B, as well as A and B. An alternative wording for the same combinations is "at least one of A and B". The same applies for combinations of more than two elements.
The terminology used herein for the purpose of describing particular examples is not intended to be limiting for further examples. Whenever a singular form such as "a", "an" and "the" is used and using only a single element is neither explicitly nor implicitly defined as being mandatory, further examples may also use plural elements to implement the same functionality. Likewise, when a functionality is subsequently described as being implemented using multiple elements, further examples may implement the same functionality using a single element or processing entity. It will be further understood that the terms "comprises", "comprising", "includes" and/or "including", when used, specify the presence of the stated features, integers, steps, operations, processes, acts, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, processes, acts, elements, components and/or any group thereof.
Unless otherwise defined, all terms (including technical and scientific terms) are used herein in their ordinary meaning of the art to which the examples belong.
Common digital interfaces carrying sampled RF data have certain requirements on Bit Error Rate (BER), which are orders of magnitude stricter than the quality actually required for the RF data to be exchanged. It is a finding that transmitting payload data separately on such an interface would allow for a relaxed BER. It is a further finding that payload data, or in general data having the same BER requirement, could be combined in groups of data with the same or a similar requirement. Examples can therefore be based on a clear split of control plane and data plane. Payload data transmitted in the data plane is subject to transmission on the air interface (RF transmission) and therefore subject to transmission errors evoked by the effects of the radio channel. Some examples of these effects are fast fading, slow fading, interference, multipath propagation, Doppler shift, etc. BER requirements for air interface payload transmissions may typically lie in the range of 10⁻² to 10⁻⁴. BER requirements for control plane data transmitted between baseband and RF components in a modem may be significantly stricter, e.g. 10⁻¹⁰ to 10⁻¹⁵.
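To put these orders of magnitude in relation, consider a short worked example with assumed figures (illustrative only, not taken from any particular system): if the air interface already operates at a raw bit error probability of about 10⁻³ and the chip-to-chip payload link contributes an independent error probability of about 10⁻⁵, the combined error probability is

```latex
% Combined error probability of two independent error sources
% (figures assumed for illustration only):
\[
  p_{\mathrm{total}} = 1 - (1 - p_{\mathrm{air}})(1 - p_{\mathrm{link}})
                 \approx p_{\mathrm{air}} + p_{\mathrm{link}}
                 = 10^{-3} + 10^{-5} = 1.01 \times 10^{-3}.
\]
```

The payload link would thus worsen the end-to-end error ratio by only about one percent, which illustrates why its BER may be relaxed far above the 10⁻¹⁰ to 10⁻¹⁵ regime that control plane data may call for.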
Examples therefore consider a split of control and payload data on the respective interface and may adapt transmission settings for control and payload data separately. This may allow better compliance with the different BER requirements and may enable reduced energy or power consumption for operating said interface.
Some examples may provide an interface that does not require any additional protocol or headers on the data lanes, in order to make substantial use of the power savings available by degrading the link quality. The split of control and payload data may enable power savings. "Control" may refer to the protocol stack and RF driver related control plane. For example, some of the control information may refer to synchronization of operating modes and use cases between baseband and RF components of a modem. Such information may relate to timing information on when transmissions start and end, data rate settings, steering information between transmitter and receiver to coordinate data transfer, etc. Other control information may relate to link management, e.g. packet size or format information indicating how large a data packet is (how many bits or bytes it comprises), sequence numbers, etc.
In some of the examples discussed in the following, the digital or analogue interfaces may be decoupled from the real-time domain, because the sampling of the carried RF data might not be synchronized to an underlying reference clock. Data packets are transported over the interface. Examples may completely avoid any protocol or header information on the data lanes. The essence of this protocol or header information is moved to a separate control link between the two peers in the form of e.g. time-stamps referring to a shared interface clock, lane-data-mapping, data rates, etc. (a minimal sketch of such a control message follows below). In some implementations or examples a separate control link may already exist, may carry a protocol stack and RF driver related control plane, and may, according to an example, also carry control information that used to be exchanged on data lines.
In the following, examples of baseband processing devices and radio frequency processing devices will be described, as well as corresponding apparatuses, methods and computer programs. An interface between these two components will be described, which allows separating payload and control data on different links or lanes, respectively. A baseband processing device is assumed to be configured to process any kind of baseband data, which has been received or which is to be transmitted in an RF transmission band. Such baseband processing may include one or more elements of the group of baseband filtering, signal detection, coding/decoding, etc. The RF device converts a baseband signal into the RF transmission band signal and vice versa. Typical processing steps carried out at the RF device may therefore be one or more elements of the group of up- and/or down-conversion, analog-to-digital conversion, digital-to-analog conversion, mixing, filtering, amplifying (low-noise and/or power amplifying), diplexing, duplexing, etc.
Typical components that could be comprised in the baseband device are one or more processors, one or more Digital-Signal-Processors (DSPs), one or more filters, converters, etc.
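Returning to the control message mentioned above: purely as an illustrative sketch (all field names, widths and the layout are assumptions made for this description, not a prescribed format), the essence of the header information moved onto the control link could be captured as follows:

```c
/* Illustrative sketch of a control-link announcement that carries the
 * "essence" of per-packet header information (timestamp, lane mapping,
 * data rate), so that the payload lanes themselves can stay header-free.
 * All names and field widths are assumptions for illustration only. */
#include <stdint.h>

enum lane_direction { LANE_UPLINK, LANE_DOWNLINK };

struct lane_announcement {
    uint32_t start_cycle;  /* shared interface clock cycle at which
                              payload data appears on the lane */
    uint32_t stop_cycle;   /* clock cycle at which the transmission ends */
    uint8_t  lane_id;      /* which payload lane carries the data */
    uint8_t  direction;    /* enum lane_direction value */
    uint16_t rate_divider; /* payload rate as a fraction of the clock
                              rate, e.g. 4 for a quarter of the clock */
};
```

With such an announcement sent once on the control link, e.g. lane_id = 1 and start_cycle = 16, the receiving side knows when and where to sample payload data, much like in the Fig. 2 walkthrough further below, and no header bits need to accompany the payload itself.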
Typical components that could be comprised in the RF device are one or more processors, one or more Digital-Signal-Processors (DSPs), one or more filters, one or more mixers, one or more amplifiers (Low Noise Amplifiers (LNA) and/or Power Amplifiers (PA)), one or more antennas, etc.
Fig. 1 illustrates examples of a baseband processing device or apparatus 10 (left hand side), a radio frequency device or apparatus 50 (right hand side), and a modem 100 comprising the baseband processing device or apparatus 10 and the radio frequency device or apparatus 50.
The baseband processing device 10 is configured for the mobile communication modem 100. The modem (modulator-demodulator) is configured to generate radio frequency signals for a mobile communication system. The mobile communication system may correspond, for example, to one of the Third Generation Partnership Project (3GPP)-standardized mobile communication networks, where the term mobile communication system is used synonymously with mobile communication network. The mobile or wireless communication system may correspond to a mobile communication system of the 5th Generation (5G) and may use mm-Wave technology. The mobile communication system may correspond to or comprise, for example, a Long-Term Evolution (LTE), an LTE-Advanced (LTE-A), High Speed Packet Access (HSPA), a Universal Mobile Telecommunication System (UMTS) or a UMTS Terrestrial Radio Access Network (UTRAN), an evolved-UTRAN (e-UTRAN), a Global System for Mobile communication (GSM) or Enhanced Data rates for GSM Evolution (EDGE) network, a GSM/EDGE Radio Access Network (GERAN), or mobile communication networks with different standards, for example, a Worldwide Interoperability for Microwave Access (WIMAX) network IEEE 802.16 or Wireless Local Area Network (WLAN) IEEE 802.11, generally an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Time Division Multiple Access (TDMA) network, a Code Division Multiple Access (CDMA) network, a Wideband-CDMA (WCDMA) network, a Frequency Division Multiple Access (FDMA) network, a Spatial Division Multiple Access (SDMA) network, etc.
A base station or base station transceiver may comprise a modem 100 and can be operable to communicate with one or more active mobile transceivers, which may also comprise such a modem 100. A base station transceiver can be located in or adjacent to a coverage area of another base station transceiver, e.g. a macro cell base station transceiver or small cell base station transceiver. Hence, examples may provide a mobile communication system comprising one or more mobile transceivers and one or more base station transceivers, wherein the base station transceivers may establish macro cells or small cells, e.g. pico-, metro-, or femto-cells. A mobile transceiver or terminal may correspond to a smartphone, a cell phone, User Equipment (UE), a laptop, a notebook, a personal computer, a Personal Digital Assistant (PDA), a Universal Serial Bus (USB) stick, a car, etc. A mobile transceiver may also be referred to as UE or mobile in line with the 3GPP terminology.
A base station transceiver can be located in the fixed or stationary part of the network or system. A base station transceiver may correspond to a remote radio head, a transmission point, an access point, a macro cell, a small cell, a micro cell, a femto cell, a metro cell, etc. A base station transceiver can be a wireless interface of a wired network, which enables transmission of radio signals to a UE or mobile transceiver.
Such a radio signal may comply with radio signals as, for example, standardized by 3GPP or, generally, in line with one or more of the above listed systems. Thus, a base station transceiver may correspond to a NodeB, an eNodeB, a Base Transceiver Station (BTS), an access point, a remote radio head, a transmission point, etc., which may be further subdivided into a remote unit and a central unit.
In the example illustrated in Fig. 1 the baseband processing device 10 is configured to process a baseband representation of a radio frequency signal. The baseband processing device 10 comprises an interface 12, which is configured to exchange payload data and control data with a radio frequency device 50. The interface 12 comprises a plurality of parallel communication links to the radio frequency device 50. A clock link 12a is configured to communicate a clock signal common to the baseband processing device 10 and the radio frequency device 50. A control link 12b is configured to communicate control data, and one or more payload links 12c are configured to communicate payload data between the baseband processing device 10 and the radio frequency device 50.
The interface 12 may correspond to one or more inputs and/or outputs for receiving and/or transmitting information or signals, i.e. any means for exchanging data, which may be digital (bit) values or signals according to a specified code, or analogue signals. The interface 12 may hence follow any protocol or specification and may correspond to any wireline input or output of the baseband device 10. The interface 12 may correspond to any means for interfacing, one or more interface modules, one or more interface units, one or more interface devices, any means for obtaining, providing or exchanging data, an obtainer or provider, one or more provision units, one or more provision modules, one or more provision devices, one or more obtaining modules, one or more obtaining units, one or more obtaining devices, etc.
Fig. 1 also illustrates an example of a radio frequency device 50 for a mobile communication modem 100. The radio frequency processing device 50 is configured to process radio frequency signals of a mobile communication system. The radio frequency device 50 comprises an interface 52 configured to exchange payload data and control data with a baseband processing device 10. The interface 52 can be implemented in line with the above interface 12 at the baseband processing device 10. The interface 52 comprises a plurality of parallel communication links to the baseband processing device 10. A clock link 52a is configured to communicate a clock signal common to the baseband processing device 10 and the radio frequency device 50. A control link 52b is configured to communicate control data, and one or more payload links 52c are configured to communicate payload data between the baseband processing device 10 and the radio frequency device 50.
A connection between the baseband processing device 10 and the RF device 50 may hence comprise at least one clock link or lane 12a, 52a, at least one control link or lane 12b, 52b, and at least one payload link or lane 12c, 52c. Multiple payload links or lanes 12c, 52c in one connection may be coupled with one or more control links 12b, 52b and one or more clock links 12a, 52a. The terms "link(s)" and "lane(s)" are used synonymously herein. One clock link 12a, 52a may provide a clock signal as reference for multiple payload links 12c, 52c and multiple control links 12b, 52b, respectively.
One control link 12b, 52b may use a clock signal as reference for multiple payload links 12c, 52c.
As shown in Fig. 1, the interfaces 12 and 52 may be coupled to each other via the plurality of parallel communication links. For example, the baseband device 10 and the RF device 50 may be implemented together in the modem 100. The plurality of parallel communication links may be implemented as parallel wires, conductors, conducting paths, strip lines, etc. In some examples the baseband device 10, the RF device 50 and the plurality of parallel communication links can be implemented on the same Printed Circuit Board (PCB).
Fig. 1 also illustrates an example of a baseband processing apparatus 10 for a mobile communication modem 100. The baseband processing apparatus 10 is configured to process a baseband representation of a radio frequency signal. The baseband processing apparatus 10 comprises means for exchanging 12 payload data and control data with a radio frequency apparatus 50. The means for exchanging 12 is configured to generate a plurality of parallel communication links to the radio frequency apparatus 50. A clock link 12a is configured to communicate a clock signal common to the baseband processing apparatus 10 and the radio frequency apparatus 50. A control link 12b is configured to communicate control data. One or more payload links 12c are configured to communicate payload data between the baseband processing apparatus 10 and the radio frequency apparatus 50.
Fig. 1 also illustrates an example of a radio frequency processing apparatus 50 for a mobile communication modem 100. The radio frequency processing apparatus 50 is configured to process radio frequency signals of a mobile communication system. The radio frequency apparatus 50 comprises means for exchanging 52 payload data and control data with a baseband processing apparatus 10. The means for exchanging 52 is configured to generate a plurality of parallel communication links to the baseband processing apparatus 10. A clock link 52a is configured to communicate a clock signal common to the baseband processing apparatus 10 and the radio frequency apparatus 50. A control link 52b is configured to communicate control data. One or more payload links 52c are configured to communicate payload data between the baseband processing apparatus 10 and the radio frequency apparatus 50.
In the following, further examples will be described. The described features or functionalities of the interfaces 12, 52 apply correspondingly to the respective means for exchanging 12, 52. Repetitive description is avoided. Moreover, some symmetric features or functionality are described which can be applied in the same manner on both sides of the communication, i.e. at the baseband device or apparatus 10 and at the RF device or apparatus 50.
In further examples the one or more payload links 12c, 52c may comprise a payload uplink, which is configured to transmit payload data from the baseband processing device 10 to the radio frequency device 50. The one or more payload links 12c, 52c may also comprise a payload downlink, which is configured to receive payload data from the radio frequency device 50 at the baseband processing device 10. Note that these relations are considered from a UE perspective. Examples also provide a base station comprising a baseband processing device 10 and a radio frequency device 50.
From the base station perspective, the one or more payload links 12c, 52c may comprise a payload downlink, which is configured to transmit payload data from the baseband processing device 10 to the radio frequency device 50. The one or more payload links 12c, 52c may also comprise a payload uplink, which is configured to receive payload data from the radio frequency device 50 at the baseband processing device 10. At least in some examples the payload data is sampled radio frequency data.
Some examples may constrain the exchanged payload data to pure RF data, so that a considerably worse BER is allowed on the payload links 12c, 52c of the interface 12, 52. For example, an equivalent noise floor 10 to 20 dB below the actual impairments on the analog parts of the RF subsystem and on the air interface may be tolerable. A degradation in link performance (e.g. BER ≈ 10⁻⁵ as an order-of-magnitude example) could be fully exploited for power saving. As a reference example, in some modems a digital RF interface with a BER on the order of 10⁻¹⁴ may be used.
Fig. 2 shows a timing diagram of a communication in an example. Fig. 2 depicts signals on a clock link 12a, 52a at the top, signals on five parallel payload links 12c, 52c (LANE 1 - LANE 5) in the middle, and signals on a control link 12b, 52b at the bottom. The signal on the clock link 12a, 52a determines the clock for both sides. In the example depicted in Fig. 2 it is assumed that transmissions can be up- and/or downlink. The control link 12b, 52b can be used by both sides (the baseband processing device 10 and the radio frequency device 50) to announce transmissions. At least in some examples a payload link 12c, 52c or lane can be assigned to one transmission direction, i.e. to uplink or downlink.
In Fig. 2 the clock cycles are numbered from left to right 1, 2, 4, 8, 16, 32, 64, 96, etc. For a better overview the clock cycles are hatched in groups of eight cycles. On the payload links or lanes 12c, 52c and the control link 12b, 52b, transmissions are hatched. In the example depicted in Fig. 2 the start of transmission on LANE 1 is announced on the control link, e.g. in clock cycle 6. The announcement refers to the start of a transmission of a first signal in clock cycle 16. As indicated in Fig. 2, announcements for transmissions may indicate start and stop using the same control message or signal. In other examples separate messages or signals may be used to indicate start and stop of transmissions. For example, the transmission on LANE 1 starting during cycle 16 is a downlink transmission at a first, higher data rate. On LANE 2 an uplink transmission is announced in clock cycle 19 and starts in cycle 32. As indicated by the cycle boundaries, the transmission on LANE 2 uses a lower data rate; for example, a quarter of the clock rate is used. On LANE 3 the example of Fig. 2 indicates that a transmission is ongoing until cycle 24. At cycle 34 a speed change (change of payload transmission rate) on LANE 3, taking effect at cycle 48, is announced on the control link 12b, 52b, which is indicated by the denser cycle boundaries. A transmission on LANE 3 (which could be uplink or downlink) is announced at cycle 46 on the control link 12b, 52b, starts at cycle 60 and ends at cycle 96. Another transmission on LANE 1 is announced on the control link 12b, 52b in cycle 86 and starts in cycle 97.
The main principle of the interface 12, 52 of at least some examples can be described in three parts.
First, a common clock (time) base on both sides (baseband device 10 and RF device 50) of the interfaces 12, 52 is used, which allows assigning time stamps to dedicated events. This clock may be used in a use-case-dependent manner. It might not be set to an unnecessarily high frequency all the time but can be scaled for certain use cases. For example, it may run at a significantly lower speed for a second generation (2G) mobile communication system use case compared to an LTE carrier aggregation use case.
Second, there may be pure data lanes 12c, 52c, where digital information is inserted onto a pre-defined grid (given by the clock as described above). Depending on the individual speeds of the lanes, the effective data rates may differ, as further illustrated in Fig. 2, e.g. LANE 1 vs. LANE 2. Importantly, these lanes may carry sampled RF data without any protocol or header. The interface 12, 52 may be configured to transfer payload data on the one or more payload links 12c, 52c without header information.
Third, a control link 12b, 52b is implemented that carries the information on which data is inserted onto which lane at what timestamp, how long the individual streams last, etc. The interface 12, 52 may be configured to transfer link management information for the plurality of parallel communication links on the control link 12b, 52b. The interfaces 12, 52 may be configured to insert payload data packets on the one or more payload links 12c, 52c based on one or more timestamps. The interface 12, 52 may be configured to transfer time stamp information on the control link 12b.
Some examples assume re-use or sharing of an already existing control link that carries the protocol stack or RF driver based control plane. The clock line 12a, 52a may be implemented once to provide the common base between the two sides of the interface 12, 52. Speed changes on this clock lane may be triggered by the state of the modem on a highly abstract level, e.g. according to whether it is in a paging mode, a 2G call, a 3G call or LTE Carrier Aggregation (CA). The short-term dynamic load assignments across the lanes might not require or trigger a speed change of the clock. The data lines 12c, 52c may be unidirectional, so there may be a set of lanes in one direction and a set of further lanes in the opposite direction. The control interface or link 12b, 52b may be bi-directional. In some examples two control links, one for each direction, are conceivable.
Examples may enable a power saving by degrading the BER on the payload links 12c, 52c of the interface 12, 52. By separately grouping payload data and control plane data and using separate links for their transmission, the links and the BER on the links can be adapted to the common BER of the data on the respective link. The power consumption on a physical interface 12, 52 may mainly be given by the static biasing current for Low Dropout Regulators (LDOs; power supply ripple rejection ratio, PSRR), etc. The power consumption may also be influenced by internal driver stages having to dynamically re-/charge the gates of the output stages. Another influencing factor may be the output stages having to dynamically re-/charge the parasitic capacitance of the (mostly differential) lines. For example, parallel lines/lanes are implemented on a PCB and have a certain length or extent. Capacitances are introduced between the parallel lines/lanes. Further capacitances exist between the lines/lanes and a reference potential, e.g. ground, which may combine in series.
These capacitances, which may depend on implementation specifics, are re-/charged during state changes of the signals on the lines/lanes. This last contribution may be proportional to the voltage swing, which may directly impact the eye opening and thus the link quality.
In general, smaller common mode and differential voltages allow operating the interface 12, 52 from a lower DC/DC rail, which may save effective battery current through the conversion ratio of the DC/DC converter. In some examples the baseband device 10 and/or the RF device 50 may be configured to adapt a power used for communicating on the one or more payload links 12c, 52c based on a payload data transmission rate used on the one or more payload links 12c, 52c. The baseband device 10 and/or the RF device 50 may be configured to adapt a power used for communicating on the one or more payload links 12c, 52c based on an allowable error ratio on the one or more payload links 12c, 52c. The allowable error ratio may be determined by a respective Signal-to-Noise Ratio (SNR). The baseband device 10 and/or the RF device 50 may be configured to adapt a power used for communicating on the one or more payload links 12c, 52c based on an allowable SNR on the one or more payload links 12c, 52c. The baseband device 10 and/or the RF device 50 may be configured to adapt a power used for communicating on the one or more payload links 12c, 52c based on any quality criterion determining a transmission quality on the one or more payload links 12c, 52c.
Some examples may be configured to adapt an error rate on the one or more payload links 12c, 52c based on an error ratio for radio frequency transmission of the payload data. In some examples, a quality criterion such as BER or Block Error Rate (BLER), e.g. of the most error-prone processing or transmission step, may determine the overall quality. Examples may hence adapt a transmission quality on the payload links 12c, 52c to subsequent/preceding requirements. For example, the transmission quality may be adapted to a BER requirement on the air interface. Additionally or alternatively, the baseband device 10 and/or the RF device 50 may be configured to adapt an error rate on the one or more payload links 12c, 52c based on a payload data transmission rate used on the one or more payload links 12c.
Some examples may further reduce power consumption by parallel utilization of existing resources. Additionally or alternatively, the baseband device 10 and/or the RF device 50 may be configured to adapt a number of parallel payload links 12c, 52c based on a power consumption of the baseband processing device 10 and based on a transferred payload data rate. For example, it may be fair to assume that in most cases not all existing lanes will be running at the maximum speed for which the hardware has to be designed (such as LTE 5-CA or 2-CA 4x4 Multiple-Input-Multiple-Output (MIMO), etc.). So in reduced-throughput use cases there may be various possibilities to distribute the data across the existing lanes. The baseband device 10 and/or the RF device 50 may be configured to adapt a clock rate on the clock link 12a based on a payload data transmission rate used on the one or more payload links 12c. For example, total power consumption may be made up of a base load (to operate the interface, data channeling, etc.) and a dynamic contribution from driving the physical PCB lines.
The analysis below focuses on the (accumulated) dynamic current.
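As a first-order sketch of that dynamic contribution (an assumed, standard switching-energy model; real drivers add bias and termination terms, and depending on supply and termination the swing may enter linearly rather than quadratically):

```latex
% First-order dynamic power of one lane (assumed model):
% alpha = activity factor, C = effective line capacitance,
% V = voltage swing, f = toggle rate of the lane.
\[
  P_{\mathrm{dyn}} \approx \alpha \, C \, V_{\mathrm{swing}}^{2} \, f
\]
```

Under this model, halving the swing would quarter the dynamic power of a lane, even if the same data then has to be spread over more, slower lanes.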
The saving potential of examples may lie in the analog nature of the transmission lines: apart from the swing, the eye opening may be affected by the step response of the rising/falling slope (reflections on the PCB line), so the quality at the point of sampling may be a mixture of vertical scaling and how quickly the step response settles. Having a longer eye period may relax the settling effect, so that a lower swing can be afforded at the same link quality.
Fig. 3 depicts an eye-diagram or eye-pattern in an example. Fig. 3 shows a time axis running from left to right. The signal depicted along the time axis corresponds to a differential signal as transmitted on a payload link 12c, 52c in an example. The hatched sections enveloping the differential signal indicate noise variations. In the graph depicted it is assumed that the signal changes between two levels at t0 and, in case of a first, lower data rate, at t4; at a second, higher data rate, state changes would occur earlier, for example at t1 if the second data rate were four times the first data rate. In other words, Fig. 3 illustrates exemplary speeds (1x at t4 and 4x at t1) on the same physical line. Since transmitting the very same data comes with the same number of 1/0 recharging operations, the dynamic power consumption may be lower if the data is spread onto parallel (slower) lanes with a lower voltage swing. In some examples the baseband device 10 and/or the RF device 50 may be configured to adapt a voltage swing on the one or more payload links 12c, 52c based on a data rate on the one or more payload links 12c, 52c. Additionally or alternatively, the baseband device 10 and/or the RF device 50 may be configured to adapt a voltage swing on the one or more payload links 12c, 52c based on an allowable error ratio on the one or more payload links 12c, 52c. Power savings may be achieved by lowering the voltage swing, as for payload data the error ratio may be relaxed compared to mixed header and payload transmission on one link. Utilization of parallel payload data links 12c, 52c at lower data rates may allow using a lower voltage swing on the parallel payload links 12c, 52c. The number of low-rate payload links used to achieve a particular data rate may be higher than the number of high-rate data links achieving the same data rate. However, the overall power consumption may be lower for the higher number of low-rate payload data links 12c, 52c due to the lower voltage swing. Such a relation may depend on the implementation of the respective baseband and RF devices 10, 50, e.g. their driver stages, clock generator, PCB implementation, etc.
So for medium data rates there may be a tradeoff in some examples between running fewer lanes at a higher speed and more lanes at a lower speed; a simplified sketch of such a tradeoff evaluation is given below. To some extent digital pre-/post-processing may offset the savings when splitting/recombining the data on either side. Implementation details may hence determine the overall power efficiency. Moreover, in some examples the benefits of using a high number of parallel low-rate lanes rather than a lower number of high-rate lanes may also depend on the PCB placement and length of lines/lanes. For short distances (with a lower parasitic capacitance, shorter reflections) the effect may be negligible.
Some examples using parallel utilization of existing lanes and resources may further benefit from power saving due to relaxed electrical requirements.
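Purely as an illustration of that tradeoff (the capacitance, rate and swing figures, the linear swing-vs.-rate model and the clamp are all assumptions, not measured values), a small estimator might compare lane configurations as follows:

```c
/* Illustrative tradeoff sketch: estimate the dynamic interface power
 * for n_lanes lanes sharing a total payload rate. Assumes
 * (hypothetically) that a slower lane tolerates a lower voltage swing
 * and that dynamic power per lane follows P ~ C * V^2 * f. */
#include <stdio.h>

#define LINE_CAPACITANCE_F 5e-12 /* assumed 5 pF per PCB line pair */
#define MAX_RATE_HZ        1e9   /* assumed 1 Gbit/s maximum per lane */
#define MAX_SWING_V        0.4   /* assumed swing needed at maximum rate */

static double estimate_power_w(int n_lanes, double total_rate_hz)
{
    double rate_per_lane = total_rate_hz / n_lanes;
    /* Assumed model: the required swing shrinks with the per-lane
     * rate, but is clamped to a quarter of the maximum swing. */
    double swing = MAX_SWING_V * rate_per_lane / MAX_RATE_HZ;
    if (swing < MAX_SWING_V / 4.0)
        swing = MAX_SWING_V / 4.0;
    /* Dynamic power summed over all lanes: n * C * V^2 * f. */
    return n_lanes * LINE_CAPACITANCE_F * swing * swing * rate_per_lane;
}

int main(void)
{
    /* Compare few fast lanes against many slow lanes for 1 Gbit/s. */
    for (int n = 1; n <= 5; n++)
        printf("%d lane(s): %.0f uW dynamic\n",
               n, estimate_power_w(n, 1e9) * 1e6);
    return 0;
}
```

In this toy model the many-slow-lanes configuration wins until the swing clamp is reached, mirroring the observation above that the benefit saturates and may even reverse once fixed costs and pre-/post-processing are accounted for.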
This benefit may as well apply to protocol/header-based interfaces, lines or lanes, which could also benefit from parallelization.
As already indicated in Fig. 1, examples also provide a modem 100 for a mobile communication system comprising an example of the baseband processing device 10 and an example of the radio frequency device 50. Examples also provide a radio transceiver comprising an example of a modem 100. Fig. 4 shows a block diagram of a mobile communication system 400, which may correspond to any mobile communication system, for example, any one of those listed above. The mobile communication system 400 comprises at least a base station transceiver 300 and a mobile terminal 200. The base station transceiver 300 comprises an example 100a of the modem 100. The mobile terminal or transceiver 200 also comprises an example 100b of the modem 100. Examples provide a base station transceiver 300 comprising an example of the modem 100. Examples may also provide a mobile transceiver comprising an example of the modem 100.
Fig. 5 shows a block diagram of a method for exchanging data between a baseband apparatus 10 and a radio frequency apparatus 50 of a mobile communication modem 100. The baseband processing apparatus 10 is configured to process a baseband representation of a radio frequency signal. The method comprises exchanging 32 payload data and control data between the baseband processing apparatus 10 and the radio frequency apparatus 50. The method comprises generating 34 a plurality of parallel communication links between the baseband processing apparatus 10 and the radio frequency apparatus 50. The method further comprises communicating 36 a clock signal common to the baseband processing apparatus 10 and the radio frequency apparatus 50 on a clock link 12a, 52a. The method further comprises communicating 38 control data on a control link 12b, 52b and communicating 40 payload data on one or more payload links 12c, 52c.
The aspects and features mentioned and described together with one or more of the previously detailed examples and figures may as well be combined with one or more of the other examples in order to replace a like feature of the other example or in order to additionally introduce the feature to the other example.
Examples may further be or relate to a computer program having a program code for performing one or more of the above methods, when the computer program is executed on a computer or processor. Steps, operations or processes of various above-described methods may be performed by programmed computers or processors. Examples may also cover program storage devices such as digital data storage media, which are machine-, processor- or computer-readable and encode machine-executable, processor-executable or computer-executable programs of instructions. The instructions perform or cause performing some or all of the acts of the above-described methods. The program storage devices may comprise or be, for instance, digital memories, magnetic storage media such as magnetic disks and magnetic tapes, hard drives, or optically readable digital data storage media.
Further examples may also cover computers, processors or control units programmed to perform the acts of the above-described methods, or (field) programmable logic arrays ((F)PLAs) or (field) programmable gate arrays ((F)PGAs) programmed to perform the acts of the above-described methods.
The examples as described herein may be summarized as follows:
A first example is a baseband processing device 10 for a mobile communication modem 100, the baseband processing device 10 being configured to process a baseband representation of a radio frequency signal, the baseband processing device 10 comprising an interface 12 configured to exchange payload data and control data with a radio frequency device 50. The interface 12 comprises a plurality of parallel communication links to the radio frequency device 50. A clock link 12a is configured to communicate a clock signal common to the baseband processing device 10 and the radio frequency device 50. A control link 12b is configured to communicate control data. One or more payload links 12c are configured to communicate payload data between the baseband processing device 10 and the radio frequency device 50.
Example 2 is the baseband processing device 10 of example 1, wherein the one or more payload links 12c comprise a payload uplink, which is configured to transmit payload data from the baseband processing device 10 to the radio frequency device 50, and wherein the one or more payload links 12c comprise a payload downlink, which is configured to receive payload data from the radio frequency device 50 at the baseband processing device 10.
Example 3 is the baseband processing device 10 of one of the examples 1 or 2, wherein the interface 12 is configured to transfer link management information for the plurality of parallel communication links on the control link 12b.
Example 4 is the baseband processing device 10 of one of the examples 1 to 3, wherein the interface 12 is configured to transfer payload data on the one or more payload links 12c without header information.
Example 5 is the baseband processing device 10 of one of the examples 1 to 4, further configured to adapt a power used for communicating on the one or more payload links 12c based on a payload data transmission rate used on the one or more payload links 12c.
Example 6 is the baseband processing device 10 of one of the examples 1 to 5, further configured to adapt a power used for communicating on the one or more payload links 12c based on an allowable error ratio on the one or more payload links 12c.
Example 7 is the baseband processing device 10 of one of the examples 1 to 6, further configured to adapt an error rate on the one or more payload links 12c based on an error ratio for radio frequency transmission of the payload data.
Example 8 is the baseband processing device 10 of one of the examples 1 to 7, further configured to adapt an error rate on the one or more payload links 12c based on a payload data transmission rate used on the one or more payload links 12c.
Example 9 is the baseband processing device 10 of one of the examples 1 to 8, further configured to adapt a clock rate on the clock link 12a based on a payload data transmission rate used on the one or more payload links 12c.
Example 10 is the baseband processing device 10 of one of the examples 1 to 9, wherein the payload data is sampled radio frequency data.
Example 11 is the baseband processing device 10 of one of the examples 1 to 10, wherein the interface 12 is configured to insert payload data packets on the one or more
payload links 12c based on one or more timestamps, wherein the interface 12 is configured to transfer time stamp information on the control link 12b.
Example 12 is the baseband processing device 10 of one of the examples 1 to 11, wherein the control link 12b is bi-directional.
Example 13 is the baseband processing device 10 of one of the examples 1 to 12, further being configured to adapt a voltage swing on the one or more payload links 12c based on a data rate on the one or more payload links 12c.
Example 14 is the baseband processing device 10 of one of the examples 1 to 13, further being configured to adapt a voltage swing on the one or more payload links 12c based on an allowable error ratio on the one or more payload links 12c.
Example 15 is the baseband processing device 10 of one of the examples 1 to 14, further being configured to adapt a number of parallel payload links 12c based on a power consumption of the baseband processing device 10 and based on a transferred payload data rate.
Example 16 is a radio frequency processing device 50 for a mobile communication modem 100. The radio frequency processing device 50 is configured to process radio frequency signals of a mobile communication system, the radio frequency device 50 comprising an interface 52 configured to exchange payload data and control data with a baseband processing device 10. The interface 52 comprises a plurality of parallel communication links to the baseband processing device 10. A clock link 52a is configured to communicate a clock signal common to the baseband processing device 10 and the radio frequency device 50. A control link 52b is configured to communicate control data. One or more payload links 52c are configured to communicate payload data between the baseband processing device 10 and the radio frequency device 50.
Example 17 is the radio frequency device 50 of example 16, wherein the one or more payload links 52c comprise a payload uplink, which is configured to transmit payload data from the baseband processing device 10 to the radio frequency device 50, and wherein the one or more payload links 52c comprise a payload downlink, which is configured to receive payload data from the radio frequency device 50 at the baseband processing device 10.
Example 18 is the radio frequency device 50 of one of the examples 16 or 17, wherein the interface 52 is configured to transfer link management information for the plurality of parallel communication links on the control link 52b.
Example 19 is the radio frequency device 50 of one of the examples 16 to 18, wherein the interface 52 is configured to transfer payload data on the one or more payload links 52c without header information.
Example 20 is the radio frequency device 50 of one of the examples 16 to 19, further configured to adapt a power used for communicating on the one or more payload links 52c based on a payload data transmission rate used on the one or more payload links 52c.
Example 21 is the radio frequency device 50 of one of the examples 16 to 20, further configured to adapt a power used for communicating on the one or more payload links 52c based on an allowable error ratio on the one or more payload links 52c.
Example 22 is the radio frequency device 50 of one of the examples 16 to 21, further configured to adapt an error rate on the one or more payload links 52c based on an error ratio for radio frequency transmission of the payload data.
Example 23 is the radio frequency device 50 of one of the examples 16 to 22, further configured to adapt an error rate
on the one or more payload links 52c based on a payload data transmission rate used on the one or more payload links 52c.
Example 24 is the radio frequency device 50 of one of the examples 16 to 23, further configured to adapt a clock rate on the clock link 52a based on a payload data transmission rate used on the one or more payload links 52c.
Example 25 is the radio frequency device 50 of one of the examples 16 to 24, wherein the payload data is sampled radio frequency data.
Example 26 is the radio frequency device 50 of one of the examples 16 to 25, wherein the interface 52 is configured to insert payload data packets on the one or more payload links 52c based on one or more timestamps, wherein the interface 52 is configured to transfer time stamp information on the control link 52b.
Example 27 is the radio frequency device 50 of one of the examples 16 to 26, wherein the control link 52b is bi-directional.
Example 28 is the radio frequency device 50 of one of the examples 16 to 27, further being configured to adapt a voltage swing on the one or more payload links 52c based on a data rate on the one or more payload links 52c.
Example 29 is the radio frequency device 50 of one of the examples 16 to 28, further being configured to adapt a voltage swing on the one or more payload links 52c based on an allowable error ratio on the one or more payload links 52c.
Example 30 is the radio frequency device 50 of one of the examples 16 to 29, further being configured to adapt a number of parallel payload links 52c based on a power consumption of the radio frequency device 50 and based on a transferred payload data rate.
Example 31 is a baseband processing apparatus 10 for a mobile communication modem 100. The baseband processing apparatus 10 is configured to process a baseband representation of a radio frequency signal. The baseband processing apparatus 10 comprises means for exchanging 12 payload data and control data with a radio frequency apparatus 50. The means for exchanging 12 is configured to generate a plurality of parallel communication links to the radio frequency apparatus 50. A clock link 12a is configured to communicate a clock signal common to the baseband processing apparatus 10 and the radio frequency apparatus 50. A control link 12b is configured to communicate control data.
One or more payload links 12c are configured to communicate payload data between the baseband processing apparatus 10 and the radio frequency apparatus 50.
Example 32 is the baseband processing apparatus 10 of example 31, wherein the one or more payload links 12c comprise a payload uplink, which is configured to transmit payload data from the baseband processing apparatus 10 to the radio frequency apparatus 50, and wherein the one or more payload links 12c comprise a payload downlink, which is configured to receive payload data from the radio frequency apparatus 50 at the baseband processing apparatus 10.
Example 33 is the baseband processing apparatus 10 of one of the examples 31 or 32, wherein the means for exchanging 12 is configured to transfer link management information for the plurality of parallel communication links on the control link 12b.
Example 34 is the baseband processing apparatus 10 of one of the examples 31 to 33, wherein the means for exchanging 12 is configured to transfer payload data on the one or more payload links 12c without header information.
Example 35 is the baseband processing apparatus 10 of one of the examples 31 to 34, further configured to adapt a power used for communicating on the one or more payload links 12c based on a payload data transmission rate used on the one or more payload links 12c.
Example 36 is the baseband processing apparatus 10 of one of the examples 31 to 35, further configured to adapt a power used for communicating on the one or more payload links 12c based on an allowable error ratio on the one or more payload links 12c.
Example 37 is the baseband processing apparatus 10 of one of the examples 31 to 36, further configured to adapt an error rate on the one or more payload links 12c based on an error ratio for radio frequency transmission of the payload data.
Example 38 is the baseband processing apparatus 10 of one of the examples 31 to 37, further configured to adapt an error rate on the one or more payload links 12c based on a payload data transmission rate used on the one or more payload links 12c.
Example 39 is the baseband processing apparatus 10 of one of the examples 31 to 38, further configured to adapt a clock rate on the clock link 12a based on a payload data transmission rate used on the one or more payload links 12c.
Example 40 is the baseband processing apparatus 10 of one of the examples 31 to 39, wherein the payload data is sampled radio frequency data.
Example 41 is the baseband processing apparatus 10 of one of the examples 31 to 40, wherein the means for exchanging 12 is configured to insert payload data packets on the one or more payload links 12c based on one or more timestamps, wherein the means for exchanging 12 is configured to transfer time stamp information on the control link 12b.
Example 42 is the baseband processing apparatus 10 of one of the examples 31 to 41, wherein the control link 12b is bi-directional.
Example 43 is the baseband processing apparatus 10 of one of the examples 31 to 42, further being configured to adapt a voltage swing on the one or more payload links 12c based on a data rate on the one or more payload links 12c.
Example 44 is the baseband processing apparatus 10 of one of the examples 31 to 43, further being configured to adapt a voltage swing on the one or more payload links 12c based on an allowable error ratio on the one or more payload links 12c.
Example 45 is the baseband processing apparatus 10 of one of the examples 31 to 44, further being configured to adapt a number of parallel payload links 12c
based on a power consumption of the baseband processing apparatus 10 and based on a transferred payload data rate.
Example 46 is a radio frequency processing apparatus 50 for a mobile communication modem 100. The radio frequency processing apparatus 50 is configured to process radio frequency signals of a mobile communication system. The radio frequency apparatus 50 comprises means for exchanging 52 payload data and control data with a baseband processing apparatus 10. The means for exchanging 52 is configured to generate a plurality of parallel communication links to the baseband processing apparatus 10. A clock link 52a is configured to communicate a clock signal common to the baseband processing apparatus 10 and the radio frequency apparatus 50. A control link 52b is configured to communicate control data. One or more payload links 52c are configured to communicate payload data between the baseband processing apparatus 10 and the radio frequency apparatus 50.
Example 47 is the radio frequency apparatus 50 of example 46, wherein the one or more payload links 52c comprise a payload uplink, which is configured to transmit payload data from the baseband processing apparatus 10 to the radio frequency apparatus 50, and wherein the one or more payload links 52c comprise a payload downlink, which is configured to receive payload data from the radio frequency apparatus 50 at the baseband processing apparatus 10.
Example 48 is the radio frequency apparatus 50 of one of the examples 46 or 47, wherein the means for exchanging 52 is configured to transfer link management information for the plurality of parallel communication links on the control link 52b.
Example 49 is the radio frequency apparatus 50 of one of the examples 46 to 48, wherein the means for exchanging 52 is configured to transfer payload data on the one or more payload links 52c without header information.
Example 50 is the radio frequency apparatus 50 of one of the examples 46 to 49, further configured to adapt a power used for communicating on the one or more payload links 52c based on a payload data transmission rate used on the one or more payload links 52c.
Example 51 is the radio frequency apparatus 50 of one of the examples 46 to 50, further configured to adapt a power used for communicating on the one or more payload links 52c based on an allowable error ratio on the one or more payload links 52c.
Example 52 is the radio frequency apparatus 50 of one of the examples 46 to 51, further configured to adapt an error rate on the one or more payload links 52c based on an error ratio for radio frequency transmission of the payload data.
Example 53 is the radio frequency apparatus 50 of one of the examples 46 to 52, further configured to adapt an error rate on the one or more payload links 52c based on a payload data transmission rate used on the one or more payload links 52c.
Example 54 is the radio frequency apparatus 50 of one of the examples 46 to 53, further configured to adapt a clock rate on the clock link 52a based on a payload data transmission rate used on the one or more payload links 52c.
Example 55 is the radio frequency apparatus 50 of one of the examples 46 to 54, wherein the payload data is sampled radio frequency data.
Example 56 is the radio frequency apparatus 50 of one of the examples 46 to 55, wherein the means for exchanging 52 is configured to insert payload data packets on the one or more payload links 52c based on one or more timestamps, wherein the means for exchanging 52 is configured to transfer time stamp information on
the control link 52b.Example 57 is the radio frequency apparatus 50 of one of the examples 46 to 56, wherein the control link 52b is bi-directional.Example 58 is the radio frequency apparatus 50 of one of the examples 46 to 57, further being configured to adapt a voltage swing on the one or more payload links 52c based on a data rate on the one or more payload links 52c.Example 59 is the radio frequency apparatus 50 of one of the examples 46 to 58, further being configured to adapt a voltage swing on the one or more payload links 52c based on an allowable error ratio on the one or more payload links 52c.Example 60 is the radio frequency apparatus 50 of one of the examples 46 to 59, further being configured to adapt a number of parallel payload links 52c based on a power consumption of the radio frequency apparatus 50 and based on a transferred payload data rate.Example 61 is a method for exchanging data between a baseband processing apparatus 10 and a radio frequency apparatus 50 of a mobile communication modem 100. The baseband processing apparatus 10 is configured to process a baseband representation of a radio frequency signal. The method comprises exchanging 32 payload data and control data between the baseband processing apparatus 10 and the radio frequency apparatus 50. The method comprises generating 34 a plurality of parallel communication links between the baseband processing apparatus 10 and the radio frequency apparatus 50. The method comprises communicating 36 a clock signal common to the baseband processing apparatus 10 and the radio frequency apparatus 50 on a clock link 12a; 52a. The method comprises communicating 38 control data on a control link 12b; 52b. The method comprises communicating 40 payload data on one or more payload links 12c; 52c.Example 62 is the method of example 61, wherein the one or more payload links 12c comprise a payload uplink, which is configured to transmit payload data from the baseband processing apparatus 10 to the radio frequency apparatus 50, and wherein the one or more payload links 12c comprise a payload downlink, which is configured to receive payload data from the radio frequency apparatus 50 at the baseband processing apparatus 10.Example 63 is the method of one of the examples 61 or 62, comprising transferring link management information for the plurality of parallel communication links on the control link 12b.Example 64 is the method of one of the examples 61 to 63, comprising transferring payload data on the one or more payload links 12c without header information.Example 65 is the method of one of the examples 61 to 64, comprising adapting a power used for communicating on the one or more payload links 12c based on a payload data transmission rate used on the one or more payload links 12c.Example 66 is the method of one of the examples 61 to 65, comprising adapting a power used for communicating on the one or more payload links 12c based on an allowable error ratio on the one or more payload links 12c.Example 67 is the method of one of the examples 61 to 66, comprising adapting an error rate on the one or more payload links 12c based on an error ratio for radio frequency transmission of the payload data.Example 68 is the method of one of the examples 61 to 67, comprising adapting an error rate on the one or more payload links 12c based on a payload data transmission rate used on the one or more payload links 12c.Example 69 is the method of one of the examples 61 to 68, comprising adapting a clock rate on the clock link 12a based on a payload 
data transmission rate used on the one or more payload links 12c.Example 70 is the method of one of the examples 61 to 69, wherein the payload data is sampled radio frequency data.Example 71 is the method of one of the examples 61 to 70, comprising inserting payload data packets on the one or more payload links 12c based on one or more timestamps, and transferring timestamp information on the control link 12b.Example 72 is the method of one of the examples 61 to 71, wherein the control link 12b is bi-directional.Example 73 is the method of one of the examples 61 to 72, comprising adapting a voltage swing on the one or more payload links 12c based on a data rate on the one or more payload links 12c.Example 74 is the method of one of the examples 61 to 73, comprising adapting a voltage swing on the one or more payload links 12c based on an allowable error ratio on the one or more payload links 12c.Example 75 is the method of one of the examples 61 to 74, comprising adapting a number of parallel payload links 12c based on a power consumption of the baseband processing apparatus 10 and/or the radio frequency apparatus 50, and/or based on a transferred payload data rate.Example 76 is a modem 100 for a mobile communication system comprising the baseband processing device 10 of one of the examples 1 to 15 or the baseband processing apparatus 10 of one of the examples 31 to 45, and the radio frequency device 50 of one of the examples 16 to 30, or the radio frequency apparatus 50 of one of the examples 46 to 60.Example 77 is a radio transceiver comprising the modem 100 of example 76.Example 78 is a mobile terminal comprising the modem 100 of example 76.Example 79 is a computer program having a program code for performing the method of at least one of the examples 61 to 75, when the computer program is executed on a computer, a processor, or a programmable hardware component.Example 80 is a machine readable storage including machine readable instructions, when executed, to implement a method or realize an apparatus as exemplified in any example described herein.Example 81 is a machine readable medium including code, when executed, to cause a machine to perform the method of any one of examples 61 to 75.The description and drawings merely illustrate the principles of the disclosure. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor(s) to furthering the art. All statements herein reciting principles, aspects, and examples of the disclosure, as well as specific examples thereof, are intended to encompass equivalents thereof.A functional block denoted as "means for ..." performing a certain function may refer to a circuit that is configured to perform a certain function. Hence, a "means for s.th." may be implemented as a "means configured to or suited for s.th.", such as a device or a circuit configured to or suited for the respective task.Functions of various elements shown in the figures, including any functional blocks labeled as "means", "means for providing a sensor signal", "means for generating a transmit signal", etc., may be implemented in the form of dedicated hardware, such as "a signal provider", "a signal processing unit", "a processor", "a controller", etc. as well as hardware capable of executing software in association with appropriate software. 
When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which or all of which may be shared. However, the term "processor" or "controller" is by no means limited to hardware exclusively capable of executing software, but may include digital signal processor (DSP) hardware, network processor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage. Other hardware, conventional and/or custom, may also be included.A block diagram may, for instance, illustrate a high-level circuit diagram implementing the principles of the disclosure. Similarly, a flow chart, a flow diagram, a state transition diagram, a pseudo code, and the like may represent various processes, operations or steps, which may, for instance, be substantially represented in a computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown. Methods disclosed in the specification or in the claims may be implemented by a device having means for performing each of the respective acts of these methods.It is to be understood that the disclosure of multiple acts, processes, operations, steps or functions disclosed in the specification or claims is not to be construed as being within a specific order, unless explicitly or implicitly stated otherwise, for instance for technical reasons. Therefore, the disclosure of multiple acts or functions will not limit these to a particular order unless such acts or functions are not interchangeable for technical reasons. Furthermore, in some examples a single act, function, process, operation or step may include or may be broken into multiple sub-acts, -functions, -processes, -operations or -steps, respectively. Such sub-acts may be included in, and be part of, the disclosure of this single act unless explicitly excluded.
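Examples 43-45, 58-60, and 73-75 above describe adapting the voltage swing, the clock rate, and the number of active parallel payload links to the current payload data rate, the allowable link error ratio, and a power budget. The sketch below illustrates one way such link management logic could be organized; it is a minimal illustration only, and the names (link_config, choose_link_config), the per-lane data rate, and the swing thresholds are hypothetical rather than taken from the examples.

```c
#include <stdint.h>

/* Hypothetical per-link settings, announced over the control link (12b/52b). */
struct link_config {
    uint8_t  active_lanes;    /* number of parallel payload links in use    */
    uint16_t swing_mv;        /* differential voltage swing per lane, in mV */
    uint32_t clock_khz;       /* clock rate carried on the clock link       */
};

#define LANE_RATE_KBPS   2000000u  /* assumed capacity of one payload lane  */
#define MAX_LANES        4u

/* Pick a configuration for a requested payload rate (kbit/s) and an
 * allowable link error ratio.  Fewer lanes and a lower swing save power;
 * a stricter error budget forces a larger swing. */
struct link_config choose_link_config(uint32_t payload_kbps,
                                      double allowable_error_ratio)
{
    struct link_config cfg;

    /* Just enough lanes for the requested rate (cf. Examples 45/60/75). */
    cfg.active_lanes = (uint8_t)((payload_kbps + LANE_RATE_KBPS - 1u)
                                 / LANE_RATE_KBPS);
    if (cfg.active_lanes == 0u)
        cfg.active_lanes = 1u;
    if (cfg.active_lanes > MAX_LANES)
        cfg.active_lanes = MAX_LANES;

    /* Clock rate tracks the per-lane data rate (cf. Examples 39/54/69). */
    cfg.clock_khz = payload_kbps / cfg.active_lanes;

    /* Voltage swing grows when the error budget is tight (cf. Examples
     * 43/44, 58/59, 73/74); the thresholds here are illustrative only. */
    if (allowable_error_ratio < 1e-9)
        cfg.swing_mv = 400u;
    else if (allowable_error_ratio < 1e-6)
        cfg.swing_mv = 250u;
    else
        cfg.swing_mv = 150u;

    return cfg;
}
```

Because the payload links carry data without header information (Examples 34, 49, 64), a reconfiguration like this would be announced as link management information on the control link before taking effect.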
Particular embodiments described herein provide for an electronic device, such as a tablet, that includes a circuit board coupled to a plurality of electronic components (which includes any type of components, elements, circuitry, etc.). One particular example implementation of the electronic device may include a first housing and a second housing removably coupled to the first housing, where the second housing is configured to function as a stand for the first housing. The stand configuration can allow for user desired viewing of a first display located on the first housing and of a second display located on the second housing. Further, the second housing can be removed from the first housing and replaced with a different second housing.
CLAIMS:1. An electronic device, comprising:a first housing; anda second housing removably coupled to the first housing, wherein the second housing is configured to function as a stand for the first housing.2. The electronic device of Claim 1, wherein the first housing includes a first display and the second housing includes a second display.3. The electronic device of any of Claims 1 and 2, wherein the stand configuration allows for user desired viewing of the first display and of the second display.4. The electronic device of any of Claims 1-3, wherein the first housing is a standalone tablet.5. The electronic device of any of Claims 1-4, further comprising:an interconnect to electrically couple the first housing and the second housing.6. The electronic device of any of Claims 1-5, wherein the second housing can be removed from the first housing and replaced with a different second housing, wherein the different second housing uses the same interconnect as the second housing.7. The electronic device of any of Claims 1-6, wherein the different second housing does not include any electronics.8. The electronic device of any of Claims 1-7, wherein the second housing is a standalone electronic device.9. An electronic device, comprising:a first housing, wherein the first housing includes a first display; anda second housing removably coupled to the first housing, wherein the second housing includes a second display, wherein the second housing is configured to function as a stand for the first housing such that the stand configuration allows for user desired viewing of the first display and of the second display.10. The electronic device of Claim 9, further comprising:an interconnect to electrically couple the first housing and the second housing.11. The electronic device of any of Claims 9 and 10, wherein the second housing can be removed from the first housing and replaced with a different second housing, wherein the different second housing uses the same interconnect as the second housing.12. The electronic device of any of Claims 9-11, wherein the different second housing does not include any electronics.13. The electronic device of any of Claims 9-12, wherein the first housing is a standalone tablet.14. The electronic device of any of Claims 9-13, wherein the second housing is a standalone electronic device.15. A system, comprising:means for rotating a second housing away from a first housing, wherein the first housing includes a first display and the second housing includes a second display; andmeans for adjusting the angle of rotation such that the second housing acts as a stand for the first housing and allows for user desired viewing of the first display and of the second display.16. The system of Claim 15, wherein the first housing is a standalone tablet.17. The system of any of Claims 15 and 16, further comprising:means for removing the second housing from the first housing, wherein an interconnect electrically couples the first housing and the second housing.18. The system of any of Claims 15-17, further comprising:means for replacing the second housing with a different second housing, wherein the different second housing uses the same interconnect as the second housing.19. The system of any of Claims 15-18, wherein the different second housing does not include any electronics.20. The system of any of Claims 15-19, wherein the second housing is a standalone electronic device.
ELECTRONIC DEVICE SYSTEM WITH A MODULAR SECOND HOUSINGTECHNICAL FIELD[0001] Embodiments described herein generally relate to the field of electronic devices and, more particularly, to a modular second housing for an electronic device.BACKGROUND[0002] End users have more electronic device choices than ever before. A number of prominent technological trends are currently afoot (e.g., more computing devices, more detachable displays, etc.), and these trends are changing the electronic device landscape. One of the technological trends is a tablet with a stand. In many instances, the stand can only support the tablet and cannot be removed. Hence, there is a challenge in providing an electronic device that allows a stand to be removed and replaced with a device that can provide functions in addition to or other than a stand.BRIEF DESCRIPTION OF THE DRAWINGS[0003] Embodiments are illustrated by way of example and not by way of limitation in the FIGURES of the accompanying drawings, in which like references indicate similar elements and in which:[0004] FIGURE 1A is a simplified schematic diagram illustrating an embodiment of an electronic device, in accordance with one embodiment of the present disclosure;[0005] FIGURE 1B is a simplified schematic diagram illustrating an embodiment of an electronic device, in accordance with one embodiment of the present disclosure;[0006] FIGURE 1C is a simplified schematic diagram illustrating an embodiment of an electronic device, in accordance with one embodiment of the present disclosure;[0007] FIGURE 1D is a simplified schematic diagram illustrating an embodiment of an electronic device, in accordance with one embodiment of the present disclosure;[0008] FIGURE 1E is a simplified schematic diagram illustrating an embodiment of an electronic device, in accordance with one embodiment of the present disclosure;[0009] FIGURE 1F is a simplified schematic diagram illustrating an embodiment of an electronic device, in accordance with one embodiment of the present disclosure; [0010] FIGURE 2A is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0011] FIGURE 2B is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0012] FIGURE 2C is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0013] FIGURE 3 is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0014] FIGURE 4 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device, in accordance with one embodiment of the present disclosure;[0015] FIGURE 5 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device, in accordance with one embodiment of the present disclosure;[0016] FIGURE 6 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device in accordance with one embodiment of the present disclosure;[0017] FIGURE 7 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device, in accordance with one embodiment of the present disclosure;[0018] FIGURE 8 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device in accordance with one embodiment of the present 
disclosure;[0019] FIGURE 9 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device in accordance with one embodiment of the present disclosure;[0020] FIGURE 10 is a simplified schematic diagram illustrating an embodiment of a portion of an electronic device in accordance with one embodiment of the present disclosure;[0021] FIGURE 11A is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure; [0022] FIGURE 11B is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0023] FIGURE 11C is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0024] FIGURE 11D is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0025] FIGURE 11E is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0026] FIGURE 12 is a simplified schematic diagram illustrating an embodiment of an electronic device in accordance with one embodiment of the present disclosure;[0027] FIGURE 13 is a simplified block diagram associated with an example ARM ecosystem system on chip (SOC) of the present disclosure; and[0028] FIGURE 14 is a simplified block diagram illustrating example logic that may be used to execute activities associated with the present disclosure.[0029] The FIGURES of the drawings are not necessarily drawn to scale, as their dimensions can be varied considerably without departing from the scope of the present disclosure.DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS OVERVIEW[0030] An electronic device is provided in one example embodiment and includes a plurality of electronic components (which can include any type of components, elements, circuitry, etc.). One particular example implementation of the electronic device may include a first housing and a second housing removably coupled to the first housing, where the second housing is configured to function as a stand for the first housing. The first housing can include a first display and the second housing can include a second display and the stand configuration can allow for a user desired viewing of the first display and of the second display.[0031] In other embodiments, an interconnect can electrically couple the first housing and the second housing. The second housing can be removed from the first housing and replaced with a different second housing, where the different second housing uses the same interconnect as the second housing. In certain examples, the different second housing does not include any electronics. The first housing can be a standalone tablet and the second housing can be a standalone electronic device.EXAMPLE EMBODIMENTS[0032] The following detailed description sets forth example embodiments of apparatuses, methods, and systems relating to detachable display configurations for an electronic device. 
Features such as structure(s), function(s), and/or characteristic(s), for example, are described with reference to one embodiment as a matter of convenience; various embodiments may be implemented with any suitable one or more of the described features.[0033] FIGURE 1A is a simplified orthographic view illustrating an embodiment of an electronic device 10a in a tablet configuration in accordance with one embodiment of the present disclosure. Electronic device 10a may include a first housing 12, a second housing 14a, and a hinge 16. Second housing 14a can include a second housing display 44. Hinge 16 can define an axis of rotation that is shared between first housing 12 and second housing 14a.[0034] In one or more embodiments, second housing display 44 can be a liquid crystal display (LCD) display screen, a light-emitting diode (LED) display screen, an organic light-emitting diode (OLED) display screen, a plasma display screen, or any other suitable display screen system. Second housing display 44 may be a touchscreen that can detect the presence and location of a touch within the display area. In another embodiment, second housing 14a may include a battery and various electronics (e.g., processor, memory, etc.) to allow second housing 14a to operate as a standalone tablet. In another embodiment, second housing 14a may include a wireless module (e.g., Wi-Fi module, Bluetooth module, etc.). In yet another embodiment, second housing 14a may include a camera, a microphone, and speakers.[0035] In one or more embodiments, electronic device 10a is a tablet computer. In still other embodiments, electronic device 10a may be any suitable electronic device having a display such as a mobile device, a tablet device (e.g., iPad™), Phablet™, a personal digital assistant (PDA), a smartphone, an audio system, a movie player of any type, a computer docking station, etc. In yet another embodiment, most of the electronics (e.g., processor, memory, etc.) for electronic device 10a reside in first housing 12. [0036] Turning to FIGURE 1B, FIGURE 1B is a simplified orthographic view of electronic device 10a in accordance with one embodiment of the present disclosure. First housing 12 can include a first housing display 18. In one or more embodiments, first housing display 18 can be a liquid crystal display (LCD) display screen, a light-emitting diode (LED) display screen, an organic light-emitting diode (OLED) display screen, a plasma display screen, or any other suitable display screen system. First housing display 18 may be a touchscreen that can detect the presence and location of a touch within the display area. In another embodiment, first housing 12 may include a battery and various electronics (e.g., processor, memory, etc.) to allow first housing 12 to operate as a standalone tablet. In another embodiment, first housing 12 may include a wireless module (e.g., Wi-Fi module, Bluetooth module, etc.). In yet another embodiment, first housing 12 may include a camera, a microphone, and speakers.[0037] Turning to FIGURE 1C, FIGURE 1C is a simplified schematic diagram illustrating an embodiment of electronic device 10a in a stand mode in accordance with one embodiment of the present disclosure. As illustrated, second housing 14a has been rotated on hinge 16 away from first housing 12. Second housing 14a can function as a stand that supports first housing 12. 
The angle of second housing 14a can be configured to provide a proper or user desired viewing angle of first housing display 18 and a proper or user desired viewing angle of second housing display 44.[0038] Turning to FIGURE 1D, FIGURE 1D is a simplified schematic diagram illustrating an embodiment of an electronic device in a detachable configuration in accordance with one embodiment of the present disclosure. As illustrated, second housing 14a has been separated from first housing 12. Second housing 14a may include a second housing interconnect 24 so that, when second housing 14a is connected to first housing 12, electrical signals can pass between first housing 12 and second housing 14a. In an embodiment, second housing 14a can include a battery and various electronics (e.g., processor, memory, etc.) to allow second housing 14a to operate as a standalone device. In another embodiment, second housing 14a may include a wireless module (e.g., Wi-Fi module, Bluetooth module, etc.) that allows second housing 14a to communicate with first housing 12 when second housing 14a is removed from first housing 12. [0039] Turning to FIGURE 1E, FIGURE 1E is a simplified schematic diagram illustrating an embodiment of electronic device 10a in a stand mode in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 1E, second housing 14a (not shown) has been detached from first housing 12 and replaced with second housing 14b. Turning to FIGURE 1F, FIGURE 1F is a simplified orthographic view illustrating an embodiment of an electronic device 10a in a tablet configuration in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 1F, second housing 14a (not shown) has been detached from first housing 12 and replaced with second housing 14b.[0040] FIGURES 1A-1F illustrate the configurability of electronic device 10a. For example, first housing 12 may be connected to second housing 14a in a tablet configuration or a stand configuration. In addition, second housing 14a may be removed from first housing 12 and second housing 14a may operate as a standalone electronic device. Also, second housing 14a may be removed from first housing 12 and replaced with a different second housing 14b. Second housing 14b may include a secondary display, an additional battery, a 3-D depth camera, a high megapixel camera, speakers, etc. This allows for a common chassis (first housing 12) that can be fitted with a variety of stand options to support different feature sets as well as aftermarket add-on accessories that become fully integrated into the form factor of electronic device 10a.[0041] In general terms, electronic device 10a may be configured to provide a variety of second housings coupled to the first housing at a hinge. The hinge can be configured such that the second housing and the first housing can be separated. The first housing can include a first housing interconnect and each second housing can include a mating second housing interconnect. This allows a variety of second housings to be attached to the first housing such that the overall system can be configured to operate in a wide variety of configurations.[0042] For purposes of illustrating certain example features of electronic device 10a, the following foundational information may be viewed as a basis from which the present disclosure may be properly explained. 
There currently are no electronic devices on the market with fully integrated interchangeable stands that include electronic components or secondary displays. In addition, current devices sometimes integrate a secondary display into the back of the lid of a laptop or on the back of a phone. These devices are not configured to provide a proper viewing angle of both screens at the same time. Further, current devices for social broadcasting often require a user to adjust the angle of the main display to assure the secondary display on the back of the main display is not angled too far towards the electronic device. The existing devices typically make the user compromise the best viewing angle of the main display in order to allow other users to view the secondary display and do not allow for proper viewing of both displays.[0043] Particular embodiments described herein provide for an electronic device, such as a notebook computer, laptop, cellphone, or other mobile device that includes a circuit board coupled to a plurality of electronic components (which includes any type of components, elements, circuitry, etc.). The electronic device may also include a first housing coupled to a removable second housing at a hinge that includes an interconnect. For example, the hinge can include connectors and mechanical retentions to provide an electrical connection between the first housing and the second housing.[0044] In an embodiment, the interconnect may be a printed circuit board (PCB) interconnector, a USB connector, a pogo pin connector, a wireless interface (including a wireless energy transmission module), or other type of docking connector that can facilitate an electrical connection between the first housing and the second housing. A locking mechanism, such as a mechanical locking snap feature, can mitigate detachment during general usage. The mechanical snap feature may include a mechanical or electrical release to release the second housing and allow for easy interchangeability. The snap feature may also be implemented with magnets. The removable second housing and interconnect enable various possible second housing options such as one or more secondary displays, an integrated battery, an additional camera or a high-megapixel camera, a perceptual computing world-facing 3-D depth camera, an integrated Pico projector, speakers, etc.[0045] The hinge can also be configured to allow the second housing to function as a support stand for the electronic device. When the electronic device is placed on a table with the second housing rotated away from the first housing, a user can interact with a touchscreen on either side of the system at an easy to use ergonomic angle for sharing or broadcasting information. The angle of the second housing can simultaneously support a user desired viewing angle of the first housing and a user desired viewing angle of the second housing. The electronic device can be configured to allow for an effective hinge and connection capability that provides an orientation flexibility and a suitable connection to enable configurability.[0046] Turning to FIGURE 2A, FIGURE 2A is a simplified schematic diagram illustrating an embodiment of electronic device 10a, in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 2A, first housing 12 can include hinge 16. Second housing 14a can be removably connected to first housing 12 using hinge 16. 
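Paragraphs [0043]-[0044] above describe an interconnect (PCB, USB, pogo pin, or wireless) through which the first housing and an interchangeable second housing exchange signals. A minimal sketch of how firmware in the first housing might identify whichever second housing is attached and enable the matching feature set follows; the module IDs and the read_module_id() helper are hypothetical and are not part of the disclosure.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical IDs a second housing could report over the interconnect;
 * they mirror the example housings 14a-14g described in FIGURES 4-10. */
enum module_id {
    MODULE_NONE     = 0x00,  /* no electronics (lightweight stand) */
    MODULE_DISPLAY  = 0x01,  /* secondary display                  */
    MODULE_BATTERY  = 0x02,  /* additional battery                 */
    MODULE_CAMERA   = 0x03,  /* camera + microphone                */
    MODULE_DEPTH    = 0x04,  /* 3-D depth sensor                   */
    MODULE_SPEAKERS = 0x05,  /* speaker module                     */
};

/* Assumed helper: reads an ID byte over the docking interconnect and
 * returns MODULE_NONE if nothing answers (a housing without electronics). */
extern uint8_t read_module_id(void);

void enumerate_second_housing(void)
{
    switch ((enum module_id)read_module_id()) {
    case MODULE_DISPLAY:  printf("enabling second display path\n");  break;
    case MODULE_BATTERY:  printf("registering auxiliary battery\n"); break;
    case MODULE_CAMERA:   printf("registering camera + mic\n");      break;
    case MODULE_DEPTH:    printf("registering depth sensor\n");      break;
    case MODULE_SPEAKERS: printf("routing audio to module\n");       break;
    case MODULE_NONE:
    default:              printf("passive stand attached\n");        break;
    }
}
```

An enumeration step of this kind is one way the "same interconnect" could serve every interchangeable housing, as the claims above require.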
Turning to FIGURE 2B, FIGURE 2B is a simplified schematic diagram illustrating an embodiment of electronic device 10a, in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 2B, second housing 14a has been rotated away from first housing 12 using hinge 16.[0047] Turning to FIGURE 2C, FIGURE 2C is a simplified schematic diagram illustrating an embodiment of electronic device 10a, in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 2C, second housing 14a has been separated from first housing 12. Hinge 16 can include a first housing interconnect 22 and a rotation means 26 to allow second housing 14a to rotate relative to first housing 12. First housing interconnect 22 may be a printed circuit board (PCB) interconnector, a USB connector, a pogo pin connector, a wireless interface (including a wireless energy transmission module), or other type of docking connector that can facilitate an electrical connection between first housing 12 and second housing 14a.[0048] Turning to FIGURE 3, FIGURE 3 is a simplified schematic diagram illustrating an embodiment of a portion of electronic device 10a, in accordance with one embodiment of the present disclosure. In an embodiment, hinge 16 can include first housing interconnect 22, rotation means 26, a first housing coupler 28, and a release 58. Second housing 14a can include second housing interconnect 24 and a second housing coupler 30. Second housing interconnect 24 is configured to connect to first housing interconnect 22 and pass an electrical current and signals between first housing 12 and second housing 14a, to recharge an on-board battery or capacitor, power any number of items (e.g., a wireless module, camera, speakers, etc.), and provide a communication path between first housing 12 and second housing 14a. First housing coupler 28 and second housing coupler 30 can be configured to releasably couple first housing 12 to second housing 14a (e.g., a securing mechanism that can include hooks, magnetic elements, etc.). Release 58 can be configured to release or uncouple second housing 14a from first housing 12 when activated. Release 58 can be activated (e.g., by pushing or sliding) such that release 58 releases second housing 14a from first housing 12.[0049] Turning to FIGURE 4, FIGURE 4 is a simplified schematic diagram illustrating an embodiment of second housing 14a, in accordance with one embodiment of the present disclosure. In an embodiment, second housing 14a can include two second housing interconnects 24. Not shown are two mating first housing interconnects 22 configured to connect to the illustrated two second housing interconnects 24 and pass an electrical current and signals between first housing 12 and second housing 14a.[0050] Turning to FIGURE 5, FIGURE 5 is a simplified schematic diagram illustrating an embodiment of second housing 14b, in accordance with one embodiment of the present disclosure. In an embodiment, second housing 14b can include one or more batteries 32. Second housing 14b can be used when an additional power supply for electronic device 10a is desired. In such an example, second housing 14a can be removed from first housing 12 and second housing 14b can be coupled with first housing 12.[0051] Turning to FIGURE 6, FIGURE 6 is a simplified schematic diagram illustrating an embodiment of second housing 14c, in accordance with one embodiment of the present disclosure. 
In an embodiment, second housing 14c can include a notification display 34 and a wireless module 36. Notification display 34 can be configured to display scrolling text such as information about received text messages or about a song that is currently being played on electronic device 10a.[0052] Turning to FIGURE 7, FIGURE 7 is a simplified schematic diagram illustrating an embodiment of second housing 14d, in accordance with one embodiment of the present disclosure. In an embodiment, second housing 14d can include one or more camera lenses 38 and a microphone 40. Second housing 14d may be used when a user wants a higher resolution camera than the one that may be included on first housing 12 or when a user wants to add a video recorder to electronic device 10a. In such an example, second housing 14a can be removed and second housing 14d can be coupled with first housing 12.[0053] Turning to FIGURE 8, FIGURE 8 is a simplified schematic diagram illustrating an embodiment of second housing 14e, in accordance with one embodiment of the present disclosure. In an embodiment, second housing 14e can include a depth sensor 42. Depth sensor 42 may be used when a user wants to add 3D picture or 3D video capabilities to electronic device 10a. In such an example, second housing 14a can be removed from first housing 12 and second housing 14e can be coupled with first housing 12.[0054] Turning to FIGURE 9, FIGURE 9 is a simplified schematic diagram illustrating an embodiment of second housing 14f, in accordance with one embodiment of the present disclosure. In an embodiment, second housing 14f can be void of any electronics and may be comprised of a lightweight material such as plastic. Second housing 14f may be used when a user wants electronic device 10a to be as light as possible for travel. In such an example, second housing 14a can be removed from first housing 12 and second housing 14f can be coupled with first housing 12. Because there are not any electronics included in second housing 14f, second housing 14f does not need second housing interconnect 24 and may only include second housing coupler 30 to couple second housing 14f to first housing 12.[0055] Turning to FIGURE 10, FIGURE 10 is a simplified schematic diagram illustrating an embodiment of second housing 14g, in accordance with one embodiment of the present disclosure. In an embodiment, second housing 14g can include one or more speakers 46. Second housing 14g can be used when additional sound output capability for electronic device 10a is desired. In such an example, second housing 14a can be removed from first housing 12 and second housing 14g can be coupled with first housing 12.[0056] Note that the illustrated second housings are used as examples only and the examples provided should not limit the scope or inhibit the broad teachings of the configurable electronic device as potentially applied to a myriad of other architectures. For example, the second housing could include a video projector, a motion sensor, etc. 
Also note that the illustrated second housing interconnect 24 and second housing coupler 30 are used as examples only and the examples provided should not limit the scope or inhibit the broad teachings of the configurable electronic device as potentially applied to a myriad of other architectures.[0057] Using first housing interconnect 22 and second housing interconnect 24, an electrical current and signals can be passed from/to first housing 12 to/from second housing 14a to recharge an on-board battery or capacitor, power any number of items (e.g., a wireless module, camera, speakers, etc.), and provide a communication path between first housing 12 and second housing 14a (or second housing 14b-g, depending on which one is coupled to first housing 12). In other examples, electrical current and signals can be passed through a plug-in connector (e.g., whose male side protrusion connects to first housing 12 and whose female side connects to second housing 14a, or vice versa). Note that any number of connectors (e.g., Universal Serial Bus (USB) connectors (e.g., in compliance with the USB 3.0 Specification released in November 2008), Thunderbolt™ connectors, a non-standard connection point such as a docking connector, etc.) can be provisioned in conjunction with electronic device 10a. [Thunderbolt™ and the Thunderbolt logo are trademarks of Intel Corporation in the U.S. and/or other countries.] Virtually any other electrical connection methods could be used and, thus, are clearly within the scope of the present disclosure.[0058] Turning to FIGURE 11A, FIGURE 11A is a simplified schematic diagram illustrating an embodiment of an electronic device 10b, in accordance with one embodiment of the present disclosure. In an embodiment, electronic device 10b can include a base device 60. Base device 60 can include a display portion 48 and a keyboard portion 50. Display portion 48 can include a display 54. Keyboard portion 50 can include a keyboard 52. In one or more embodiments, electronic device 10b is a notebook computer or laptop computer. Display 54 can be a liquid crystal display (LCD) display screen, a light-emitting diode (LED) display screen, an organic light-emitting diode (OLED) display screen, a plasma display screen, or any other suitable display screen system. Display 54 may be a touchscreen that can detect the presence and location of a touch within the display area.[0059] Turning to FIGURE 11B, FIGURE 11B is a simplified schematic diagram illustrating an embodiment of electronic device 10b, in accordance with one embodiment of the present disclosure. In an embodiment, electronic device 10b can include second housing 14a and hinge 16. Turning to FIGURE 11C, FIGURE 11C is a simplified schematic diagram illustrating an embodiment of electronic device 10b, in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 11C, second housing 14a has been rotated away from display portion 48 using hinge 16.[0060] Turning to FIGURE 11D, FIGURE 11D is a simplified schematic diagram illustrating an embodiment of electronic device 10b, in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 11D, second housing 14a can function as a stand that provides support for display portion 48. The angle of second housing 14a can be configured to provide a user desired viewing angle of display 54 and a user desired viewing angle of second housing display 44. 
[0061] Turning to FIGURE 11E, FIGURE 11E is a simplified schematic diagram illustrating an embodiment of electronic device 10b, in accordance with one embodiment of the present disclosure. As illustrated in FIGURE 11E, second housing 14a has been removed from display portion 48. Second housing 14a may be replaced with another second housing (e.g., second housing 14b-g illustrated in FIGURES 5-10) as described above.[0062] Turning to FIGURE 12, FIGURE 12 is a simplified block diagram illustrating an embodiment of electronic device 10b in accordance with one embodiment of the present disclosure. Second housing 14a may include wireless module 36. Wireless module 36 (e.g., Wi-Fi module, Bluetooth module, WiDi module, or other wireless communication circuitry) allows second housing 14a to communicate with base device 60 (or first housing 12) when second housing 14a is removed from display portion 48 (or first housing 12). Base device 60 can include a base wireless module 62. Wireless module 36 may also allow second housing 14a to communicate with network 20 and a second electronic device 56 through a wireless connection.[0063] The wireless connection may be any 3G/4G/LTE cellular wireless, WiFi/WiMAX connection, WiDi connection, or some other similar wireless connection. In an embodiment, the wireless connection may be a wireless personal area network (WPAN) to interconnect second housing 14a to base device 60, network 20, or second electronic device 56 within a relatively small area (e.g., Bluetooth™, invisible infrared light, Wi-Fi, WiDi, etc.). In another embodiment, the wireless connection may be a wireless local area network (WLAN) that links second housing 14a to base device 60, network 20, or second electronic device 56 over a relatively short distance using a wireless distribution method, usually providing a connection through an access point for Internet access. The use of spread-spectrum or OFDM technologies may allow second housing 14a to move around within a local coverage area, and still remain connected to base device 60, network 20, or second electronic device 56.[0064] Network 20 may be a series of points or nodes of interconnected communication paths for receiving and transmitting packets of information that propagate through network 20. Network 20 offers a communicative interface and may be any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), Intranet, Extranet, WAN, virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment. Network 20 can comprise any number of hardware or software elements coupled to (and in communication with) each other through a communications medium. Second electronic device 56 may be a computer (e.g., notebook computer, laptop, tablet computer or device), a phablet, a cellphone, a personal digital assistant (PDA), a smartphone, an audio system, a movie player of any type, router, access point, or other device that includes a circuit board coupled to a plurality of electronic components (which includes any type of components, elements, circuitry, etc.).[0065] Turning to FIGURE 13, FIGURE 13 is a simplified block diagram associated with an example ARM ecosystem SOC 1300 of the present disclosure. At least one example implementation of the present disclosure can include the modular second housing features discussed herein and an ARM component. 
For example, the example of FIGURE 13 can be associated with any ARM core (e.g., A-9, A-15, etc.). Further, the architecture can be part of any type of tablet, smartphone (inclusive of Android™ phones, iPhones™), iPad™, Google Nexus™, Microsoft Surface™, personal computer, server, video processing components, laptop computer (inclusive of any type of notebook), Ultrabook™ system, any type of touch-enabled input device, etc.[0066] In this example of FIGURE 13, ARM ecosystem SOC 1300 may include multiple cores 1306-1307, an L2 cache control 1308, a bus interface unit 1309, an L2 cache 1310, a graphics processing unit (GPU) 1315, an interconnect 1302, a video codec 1320, and a liquid crystal display (LCD) I/F 1325, which may be associated with mobile industry processor interface (MIPI)/high-definition multimedia interface (HDMI) links that couple to an LCD.[0067] ARM ecosystem SOC 1300 may also include a subscriber identity module (SIM) I/F 1330, a boot read-only memory (ROM) 1335, a synchronous dynamic random access memory (SDRAM) controller 1340, a flash controller 1345, a serial peripheral interface (SPI) master 1350, a suitable power control 1355, a dynamic RAM (DRAM) 1360, and flash 1365. In addition, one or more example embodiments include one or more communication capabilities, interfaces, and features such as instances of Bluetooth™ 1370, a 3G modem 1375, a global positioning system (GPS) 1380, and an 802.11 Wi-Fi 1385.[0068] In operation, the example of FIGURE 13 can offer processing capabilities, along with relatively low power consumption to enable computing of various types (e.g., mobile computing, high-end digital home, servers, wireless infrastructure, etc.). In addition, such an architecture can enable any number of software applications (e.g., Android™, Adobe® Flash® Player, Java Platform Standard Edition (Java SE), JavaFX, Linux, Microsoft Windows Embedded, Symbian and Ubuntu, etc.). In at least one example embodiment, the core processor may implement an out-of-order superscalar pipeline with a coupled low-latency level-2 cache.[0069] Turning to FIGURE 14, FIGURE 14 is a simplified block diagram illustrating potential electronics and logic that may be associated with any of the electronic devices discussed herein. In at least one example embodiment, system 1400 can include a touch controller 1402, one or more processors 1404, system control logic 1406 coupled to at least one of processor(s) 1404, system memory 1408 coupled to system control logic 1406, nonvolatile memory and/or storage device(s) 1432 coupled to system control logic 1406, display controller 1412 coupled to system control logic 1406, display controller 1412 coupled to a display device 1410, power management controller 1418 coupled to system control logic 1406, and/or communication interfaces 1416 coupled to system control logic 1406.[0070] System control logic 1406, in at least one embodiment, can include any suitable interface controllers to provide for any suitable interface to at least one processor 1404 and/or to any suitable device or component in communication with system control logic 1406. System control logic 1406, in at least one example embodiment, can include one or more memory controllers to provide an interface to system memory 1408. System memory 1408 may be used to load and store data and/or instructions, for example, for system 1400. 
System memory 1408, in at least one example embodiment, can include any suitable volatile memory, such as suitable dynamic random access memory (DRAM) for example. System control logic 1406, in at least one example embodiment, can include one or more I/O controllers to provide an interface to display device 1410, touch controller 1402, and nonvolatile memory and/or storage device(s) 1432.[0071] Non-volatile memory and/or storage device(s) 1432 may be used to store data and/or instructions, for example within software 1428. Non-volatile memory and/or storage device(s) 1432 may include any suitable non-volatile memory, such as flash memory for example, and/or may include any suitable non-volatile storage device(s), such as one or more hard disc drives (HDDs), one or more compact disc (CD) drives, and/or one or more digital versatile disc (DVD) drives for example.[0072] Power management controller 1418 may include power management logic 1430 configured to control various power management and/or power saving functions disclosed herein or any part thereof. In at least one example embodiment, power management controller 1418 is configured to reduce the power consumption of components or devices of system 1400 that may either be operated at reduced power or turned off when the electronic device is in a closed configuration. For example, in at least one example embodiment, when the electronic device is in a closed configuration, power management controller 1418 performs one or more of the following: power down the unused portion of the display and/or any backlight associated therewith; allow one or more of processor(s) 1404 to go to a lower power state if less computing power is required in the closed configuration; and shut down any devices and/or components that are unused when an electronic device is in the closed configuration.[0073] Communications interface(s) 1416 may provide an interface for system 1400 to communicate over one or more networks and/or with any other suitable device. Communications interface(s) 1416 may include any suitable hardware and/or firmware. Communications interface(s) 1416, in at least one example embodiment, may include, for example, a network adapter, a wireless network adapter, a telephone modem, and/or a wireless modem.[0074] System control logic 1406, in at least one example embodiment, can include one or more I/O controllers to provide an interface to any suitable input/output device(s) such as, for example, an audio device to help convert sound into corresponding digital signals and/or to help convert digital signals into corresponding sound, a camera, a camcorder, a printer, and/or a scanner.[0075] For at least one example embodiment, at least one processor 1404 may be packaged together with logic for one or more controllers of system control logic 1406. In at least one example embodiment, at least one processor 1404 may be packaged together with logic for one or more controllers of system control logic 1406 to form a System in Package (SiP). In at least one example embodiment, at least one processor 1404 may be integrated on the same die with logic for one or more controllers of system control logic 1406. For at least one example embodiment, at least one processor 1404 may be integrated on the same die with logic for one or more controllers of system control logic 1406 to form a System on Chip (SoC).[0076] For touch control, touch controller 1402 may include touch sensor interface circuitry 1422 and touch control logic 1424. 
Touch sensor interface circuitry 1422 may be coupled to detect touch input over a first touch surface layer and a second touch surface layer of a display (i.e., display device 1410). Touch sensor interface circuitry 1422 may include any suitable circuitry that may depend, for example, at least in part on the touch-sensitive technology used for a touch input device. Touch sensor interface circuitry 1422, in one embodiment, may support any suitable multi-touch technology. Touch sensor interface circuitry 1422, in at least one embodiment, can include any suitable circuitry to convert analog signals corresponding to a first touch surface layer and a second touch surface layer into any suitable digital touch input data. Suitable digital touch input data for at least one embodiment may include, for example, touch location or coordinate data.[0077] Touch control logic 1424 may be coupled to help control touch sensor interface circuitry 1422 in any suitable manner to detect touch input over a first touch surface layer and a second touch surface layer. Touch control logic 1424 for at least one example embodiment may also be coupled to output in any suitable manner digital touch input data corresponding to touch input detected by touch sensor interface circuitry 1422. Touch control logic 1424 may be implemented using any suitable logic, including any suitable hardware, firmware, and/or software logic (e.g., non-transitory tangible media), that may depend, for example, at least in part on the circuitry used for touch sensor interface circuitry 1422. Touch control logic 1424 for at least one embodiment may support any suitable multi-touch technology.[0078] Touch control logic 1424 may be coupled to output digital touch input data to system control logic 1406 and/or at least one processor 1404 for processing. At least one processor 1404 for at least one embodiment may execute any suitable software to process digital touch input data output from touch control logic 1424. Suitable software may include, for example, any suitable driver software and/or any suitable application software. As illustrated in FIGURE 14, suitable software 1426 may be stored in system memory 1408 and/or in nonvolatile memory and/or storage device(s). [0079] Note that in some example implementations, the functions outlined herein may be implemented in conjunction with logic that is encoded in one or more tangible, non-transitory media (e.g., embedded logic provided in an application-specific integrated circuit (ASIC), in digital signal processor (DSP) instructions, software [potentially inclusive of object code and source code] to be executed by a processor, or other similar machine, etc.). In some of these instances, memory elements can store data used for the operations described herein. This can include the memory elements being able to store software, logic, code, or processor instructions that are executed to carry out the activities described herein. A processor can execute any type of instructions associated with the data to achieve the operations detailed herein. In one example, the processors could transform an element or an article (e.g., data) from one state or thing to another state or thing. 
In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), a DSP, an erasable programmable read only memory (EPROM), electrically erasable programmable read-only memory (EEPROM)) or an ASIC that can include digital logic, software, code, electronic instructions, or any suitable combination thereof.[0080] It is imperative to note that all of the specifications, dimensions, and relationships outlined herein (e.g., number, location, height, width, length, materials, etc.) have been offered for purposes of example and teaching only. Each of these data may be varied considerably without departing from the spirit of the present disclosure, or the scope of the appended claims. The specifications apply only to one non-limiting example and, accordingly, they should be construed as such. In the foregoing description, example embodiments have been described. Various modifications and changes may be made to such embodiments without departing from the scope of the appended claims. The description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.[0081] Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained by one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words "means for" or "step for" are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.OTHER NOTES AND EXAMPLES[0082] Example A1 is an electronic device that includes a first housing and a second housing. 
The second housing can be removably coupled to the first housing and the second housing can be configured to function as a stand for the first housing.[0083] In Example A2, the subject matter of Example Al may optionally include where the first housing includes a first display and the second housing includes a second display.[0084] In Example A3, the subject matter of any of the preceding 'A' Examples can optionally include where the stand configuration allows for user desired viewing of the first display and of the second display.[0085] In Example A4, the subject matter of any of the preceding 'A' Examples can optionally include where the first housing is a standalone tablet.[0086] In Example A5, the subject matter of any of the preceding 'A' Examples can optionally include where the electronic device also includes an interconnect to electrically couple the first housing and the second housing.[0087] In Example A6, the subject matter of any of the preceding 'A' Examples can optionally include where the second housing can be removed from the first housing and replaced with a different second housing, where the different second housing uses the same interconnect as the second housing.[0088] In Example A7, the subject matter of any of the preceding 'A' Examples can optionally include where the different second housing does not include any electronics.[0089] In Example A8, the subject matter of any of the preceding 'A' Examples can optionally include where the second housing is a standalone electronic device. [0090] Example AA1 is an electronic device that includes a first housing, where the first housing includes a first display, and a second housing removably coupled to the first housing. The second housing includes a second display and is configured to function as a stand for the first housing such that the stand configuration allows for user desired viewing of the first display and of the second display.[0091] In Example AA2, the subject matter of Example AA1 may optionally include an interconnect to electrical couple the first housing and the second housing.[0092] In Example AA3, the subject matter of any of the preceding ΆΑ' Examples can optionally include where the second housing can be removed from the first housing and replaced with a different second housing, where the different second housing uses the same interconnect as the second housing.[0093] In Example AA4, the subject matter of any of the preceding ΆΑ' Examples can optionally include where the different second housing does not include any electronics.[0094] In Example AA5, the subject matter of any of the preceding ΆΑ' Examples can optionally include where the first housing is a standalone tablet.[0095] In Example AA6, the subject matter of any of the preceding ΆΑ' Examples can optionally include where the second housing is a standalone electronic device.[0096] Example Ml is a method that includes rotating a second housing away from a first housing on a hinge, where the first housing includes a first display and the second housing includes a second display and adjusting the angle of rotation such that the second housing acts as a stand for the first housing and allows for user desired viewing of the first display and of the second display.[0097] In Example M2, the subject matter of any of the preceding 'M' Examples can optionally include where the first housing is a standalone tablet.[0098] In Example M3, the subject matter of any of the preceding 'M' Examples can optionally include removing the second housing from the first 
[0099] In Example M4, the subject matter of any of the preceding 'M' Examples can optionally include where the second housing is a standalone tablet. [0100] In Example M5, the subject matter of any of the preceding 'M' Examples can optionally include replacing the second housing with a different second housing, where the different second housing uses the same interconnect as the second housing.[0101] In Example M6, the subject matter of any of the preceding 'M' Examples can optionally include where the different second housing does not include any electronics.[0102] In Example M7, the subject matter of any of the preceding 'M' Examples can optionally include where the different second housing is configured to function as a power supply, a camera, a video recorder, or a sound system.[0103] Example S1 is a system that includes means for rotating a second housing away from a first housing, where the first housing includes a first display and the second housing includes a second display, and means for adjusting the angle of rotation such that the second housing acts as a stand for the first housing and allows for proper viewing of the first display and of the second display.[0104] In Example S2, the subject matter of Example S1 can optionally include where the first housing is a standalone tablet.[0105] In Example S3, the subject matter of any of the preceding 'S' Examples can optionally include means for removing the second housing from the first housing, where an interconnect electrically couples the first housing and the second housing.[0106] In Example S4, the subject matter of any of the preceding 'S' Examples can optionally include means for replacing the second housing with a different second housing, where the different second housing uses the same interconnect as the second housing.[0107] In Example S5, the subject matter of any of the preceding 'S' Examples can optionally include where the different second housing does not include any electronics.[0108] In Example S6, the subject matter of any of the preceding 'S' Examples can optionally include where the second housing is a standalone electronic device.[0109] Example X1 is a machine-readable storage medium including machine-readable instructions to implement a method or realize an apparatus as in any one of the Examples A1-A8, AA1-AA4, and M1-M7. Example Y1 is an apparatus comprising means for performing any of the Example methods M1-M7. In Example Y2, the subject matter of Example Y1 can optionally include the means for performing the method comprising a processor and a memory. In Example Y3, the subject matter of Example Y2 can optionally include the memory comprising machine-readable instructions.
A mobile device may perform continuous authentication with an authenticating entity. The mobile device may include a set of biometric and non-biometric sensors and a processor. The processor may be configured to receive sensor data from the set of sensors, form authentication information from the received sensor data, and continuously update the authentication information.
WHAT IS CLAIMED IS:1. A mobile device comprising:a set of biometric and non-biometric sensors; anda processor configured to:receive sensor data from the set of sensors;form authentication information from the received sensor data; and continuously update the authentication information.2. The mobile device of claim 1, wherein the updated authentication information includes at least one of a trust coefficient, trust level, trust score, authentication coefficient, authentication level, authentication score, or authentication strength.3. The mobile device of claim 1, wherein the updated authentication information incorporates predefined security and privacy preference settings.4. The mobile device of claim 1, wherein the updated authentication information satisfies predefined security and privacy preference settings.5. The mobile device of claim 3, wherein the predefined security and privacy preference settings include types of user-approved sensor data, biometric sensor information, user data input, or authentication information.6. The mobile device of claim 3, wherein the processor implements a negotiation function to negotiate conflicting predefined security and privacy preference settings of the mobile device and an authenticating entity to form fused security and privacy preference settings.7. The mobile device of claim 1, wherein the processor implements an authentication strength function to determine an authentication strength for the received sensor data.8. The mobile device of claim 7, wherein the processor implements a trust level function to analyze persistency over time to determine a trust level associated with the authentication information.9. The mobile device of claim 8, wherein the processor implements a trust coefficient calculation function to determine a trust coefficient based upon the authentication strength and the trust level.10. The mobile device of claim 1, wherein the processor is further configured to transmit the updated authentication information to an authenticating entity in response to an authentication request from the authenticating entity.11. A method to perform continuous authentication comprising:receiving sensor data from a set of biometric and non-biometric sensors;forming authentication information from the received sensor data; andcontinuously updating the authentication information.12. The method of claim 11, wherein the updated authentication information includes at least one of a trust coefficient, trust level, trust score, authentication coefficient, authentication level, authentication score, or authentication strength.13. The method of claim 11, wherein the updated authentication information incorporates predefined security and privacy preference settings.14. The method of claim 11, wherein the updated authentication information satisfies predefined security and privacy preference settings.15. The method of claim 13, wherein the predefined security and privacy preference settings include types of user-approved sensor data, biometric sensor information, user data input, or authentication information.16. The method of claim 13, further comprising negotiating conflicting predefined security and privacy preference settings of the mobile device and an authenticating entity to form fused security and privacy preference settings.17. The method of claim 11, further comprising determining an authentication strength for the received sensor data.
18. The method of claim 17, further comprising analyzing persistency over time to determine a trust level associated with the authentication information.19. The method of claim 18, further comprising determining a trust coefficient based upon the authentication strength and the trust level.20. The method of claim 11, further comprising transmitting the updated authentication information to an authenticating entity in response to an authentication request from the authenticating entity.21. A non-transitory computer-readable medium including code that, when executed by a processor, causes the processor to:receive sensor data from a set of biometric and non-biometric sensors;form authentication information from the received sensor data; andcontinuously update the authentication information.22. The computer-readable medium of claim 21, wherein the updated authentication information includes at least one of a trust coefficient, trust level, trust score, authentication coefficient, authentication level, authentication score, or authentication strength.23. The computer-readable medium of claim 21, wherein the updated authentication information incorporates predefined security and privacy preference settings.24. The computer-readable medium of claim 21, wherein the updated authentication information satisfies predefined security and privacy preference settings.25. The computer-readable medium of claim 23, wherein the predefined security and privacy preference settings include types of user-approved sensor data, biometric sensor information, user data input, or authentication information.26. The computer-readable medium of claim 23, further comprising code to negotiate conflicting predefined security and privacy preference settings of the mobile device and an authenticating entity to form fused security and privacy preference settings.27. The computer-readable medium of claim 21, further comprising code to determine an authentication strength for the received sensor data.28. The computer-readable medium of claim 27, further comprising code to analyze persistency over time to determine a trust level associated with the authentication information.29. The computer-readable medium of claim 28, further comprising code to determine a trust coefficient based upon the authentication strength and the trust level.30. A mobile device comprising:means for receiving sensor data from a set of biometric and non-biometric sensors; means for forming authentication information from the received sensor data; and means for continuously updating the authentication information.
CONTINUOUS AUTHENTICATION WITH A MOBILE DEVICECROSS REFERENCE TO RELATED APPLICATIONS[0001] The present application claims priority to U.S. Provisional Patent Application No. 61/943,428, filed February 23, 2014, entitled "Trust Broker for Authentication Interaction with Mobile Devices," and U.S. Provisional Patent Application No. 61/943,435, filed February 23, 2014, entitled "Continuous Authentication for Mobile Devices", the contents of which are hereby incorporated by reference in their entirety for all purposes. The present application is also related to U.S. Patent Application No. 14/523,679 filed 10/24/2014, entitled "Trust Broker Authentication Method for Mobile Devices".Field[0002] The present invention relates to continuous authentication of a user of a mobile device.Relevant Background[0003] Many service providers, services, applications or devices require authentication of users who may attempt to access services or applications remotely from, for example, a mobile device such as a smart phone, a tablet computer, a mobile health monitor, or other type of computing device. In some contexts, a service provider such as a bank, a credit card provider, a utility, a medical service provider, a vendor, a social network, a service, an application, or another participant may require verification that a user is indeed who the user claims to be. In some situations, a service provider may wish to authenticate the user when initially accessing a service or an application, such as with a username and password. In other situations, the service provider may require authentication immediately prior to executing a transaction or a transferal of information. The service provider may wish to authenticate the user several times during a session, yet the user may choose not to use the service if authentication requests are excessive. In some contexts, a device may need to authenticate a user. For example, an application such as a personal email application on a mobile device may require verification that a user is indeed the rightful owner of the account.[0004] Similarly, the user may wish to validate a service provider, service, application, device or another participant before engaging in a communication, sharing information, or requesting a transaction. The user may desire verification more than once in a session, and wish some control and privacy before sharing or providing certain types of personal information. In some situations, either or both parties may desire to allow certain transactions or information to be shared with varying levels of authentication. SUMMARY[0005] Aspects of the invention relate to a mobile device that may perform continuous authentication with an authenticating entity. The mobile device may include a set of biometric and non-biometric sensors and a processor. The processor may be configured to receive sensor data from the set of sensors, form authentication information from the received sensor data, and continuously update the authentication information.BRIEF DESCRIPTION OF THE DRAWINGS[0006] FIG. 1 is a block diagram of a mobile device in which aspects of the invention may be practiced.[0007] FIG. 2 is a diagram of a continuous authentication system that may perform authentication with an authenticating entity.[0008] FIG. 3 is a diagram illustrating the dynamic nature of the trust coefficient in the continuous authentication methodology.
[0009] FIG. 4 is a diagram illustrating a wide variety of different inputs that may be inputted into the hardware of the mobile device to continuously update the trust coefficient.[0010] FIG. 5 is a diagram illustrating that the mobile device may implement a system that provides a combination of biometrics and sensor data for continuous authentication.[0011] FIG. 6 is a diagram illustrating the mobile device utilizing continuous authentication functionality.[0012] FIG. 7 is a diagram illustrating the mobile device utilizing continuous authentication functionality.[0013] FIG. 8 is a diagram illustrating a wide variety of authentication technologies that may be utilized.[0014] FIG. 9 is a diagram illustrating a mobile device and an authenticating entity utilizing a trust broker that may interact with a continuous authentication manager and a continuous authentication engine.[0015] FIG. 10 is a diagram illustrating a variety of different implementations of the trust broker.[0016] FIG. 11 is a diagram illustrating privacy vectors (PVs) and trust vectors (TVs) between a mobile device and an authenticating entity.[0017] FIG. 12 is a diagram illustrating privacy vector components and trust vector components.[0018] FIG. 13A is a diagram illustrating operations of a trust vector (TV) component calculation block that may perform TV component calculations.[0019] FIG. 13B is a diagram illustrating operations of a data mapping block.[0020] FIG. 13C is a diagram illustrating operations of a data mapping block. [0021] FIG. 13D is a diagram illustrating operations of a data normalization block.[0022] FIG. 13E is a diagram illustrating operations of a calculation formula block.[0023] FIG. 13F is a diagram illustrating operations of a calculation result mapping block and a graph of example scenarios.DETAILED DESCRIPTION[0024] The word "exemplary" or "example" is used herein to mean "serving as an example, instance, or illustration." Any aspect or embodiment described herein as "exemplary" or as an "example" is not necessarily to be construed as preferred or advantageous over other aspects or embodiments.[0025] As used herein, the term "mobile device" refers to any form of programmable computer device including but not limited to laptop computers, tablet computers, smartphones, televisions, desktop computers, home appliances, cellular telephones, personal television devices, personal data assistants (PDAs), palm-top computers, wireless electronic mail receivers, multimedia Internet enabled cellular telephones, Global Positioning System (GPS) receivers, wireless gaming controllers, receivers within vehicles (e.g., automobiles), interactive game devices, notebooks, smartbooks, netbooks, mobile television devices, mobile health devices, smart wearable devices, or any computing device or data processing apparatus. An "authenticating entity" refers to a service provider, a service, an application, a device, a social network, another user or participant, or any entity that may request or require authentication of a mobile device or a user of a mobile device.[0026] FIG. 1 is a block diagram illustrating an exemplary device in which embodiments of the invention may be practiced. The system may be a computing device (e.g., a mobile device 100), which may include one or more processors 101, a memory 105, an I/O controller 125, and a network interface 110. Mobile device 100 may also include a number of sensors coupled to one or more buses or signal lines further coupled to the processor 101. It should be appreciated that mobile device 100 may also include a display 120 (e.g., a touch screen display), a user interface 119 (e.g., keyboard, touch screen, or similar devices), a power device 121 (e.g., a battery), as well as other components typically associated with electronic devices. In some embodiments, mobile device 100 may be a transportable device; however, it should be appreciated that device 100 may be any type of computing device that is mobile or non-mobile (e.g., fixed at a particular location).
[0027] Mobile device 100 may include a set of one or more biometric sensors and/or non-biometric sensors. Mobile device 100 may include sensors such as a clock 130, ambient light sensor (ALS) 135, biometric sensor 137 (e.g., heart rate monitor, electrocardiogram (ECG) sensor, blood pressure monitor, etc., which may include other sensors such as a fingerprint sensor, camera or microphone that may provide human identification information), accelerometer 140, gyroscope 145, magnetometer 150, orientation sensor 151, fingerprint sensor 152, weather sensor 155 (e.g., temperature, wind, humidity, barometric pressure, etc.), Global Positioning Sensor (GPS) 160, infrared (IR) sensor 153, proximity sensor 167, and near field communication (NFC) sensor 169. Further, sensors/devices may include a microphone (e.g., voice sensor) 165 and camera 170. Communication components may include a wireless subsystem 115 (e.g., Bluetooth 166, Wi-Fi 111, or cellular 161), which may also be considered sensors that are used to determine the location (e.g., position) of the device. In some embodiments, multiple cameras are integrated or accessible to the device. For example, a mobile device may have at least a front and rear mounted camera. The cameras may have still or video capturing capability. In some embodiments, other sensors may also have multiple installations or versions.[0028] Memory 105 may be coupled to processor 101 to store instructions for execution by processor 101. In some embodiments, memory 105 is non-transitory. Memory 105 may also store one or more models, modules, or engines to implement embodiments described below that are implemented by processor 101. Memory 105 may also store data from integrated or external sensors.
[0029] Mobile device 100 may include one or more antenna(s) 123 and transceiver(s) 122. The transceiver 122 may be configured to communicate bidirectionally, via the antenna(s) and/or one or more wired or wireless links, with one or more networks, in cooperation with network interface 110 and wireless subsystem 115. Network interface 110 may be coupled to a number of wireless subsystems 115 (e.g., Bluetooth 166, Wi-Fi 111, cellular 161, or other networks) to transmit and receive data streams through a wireless link to/from a wireless network, or may be a wired interface for direct connection to networks (e.g., the Internet, Ethernet, or other wireless systems). Mobile device 100 may include one or more local area network transceivers connected to one or more antennas. The local area network transceiver comprises suitable devices, hardware, and/or software for communicating with and/or detecting signals to/from wireless access points (WAPs), and/or directly with other wireless devices within a network. In one aspect, the local area network transceiver may comprise a Wi-Fi (802.11x) communication system suitable for communicating with one or more wireless access points.[0030] Mobile device 100 may also include one or more wide area network transceiver(s) that may be connected to one or more antennas. The wide area network transceiver comprises suitable devices, hardware, and/or software for communicating with and/or detecting signals to/from other wireless devices within a network. In one aspect, the wide area network transceiver may comprise a CDMA communication system suitable for communicating with a CDMA network of wireless base stations; however, in other aspects, the wireless communication system may comprise another type of cellular telephony network or femtocells, such as, for example, TDMA, LTE, Advanced LTE, WCDMA, UMTS, 4G, or GSM. Additionally, any other type of wireless networking technologies may be used, for example, WiMax (802.16), Ultra Wide Band (UWB), ZigBee, wireless USB, etc. In conventional digital cellular networks, position location capability can be provided by various time and/or phase measurement techniques. For example, in CDMA networks, one position determination approach used is Advanced Forward Link Trilateration (AFLT).[0031] Thus, device 100 may be a mobile device, wireless device, cellular phone, personal digital assistant, mobile computer, wearable device (e.g., head mounted display, wrist watch, virtual reality glasses, etc.), internet appliance, gaming console, digital video recorder, e-reader, robot navigation system, tablet, personal computer, laptop computer, tablet computer, or any type of device that has processing capabilities. As used herein, a mobile device may be any portable, movable device or machine that is configurable to acquire wireless signals transmitted from and transmit wireless signals to one or more wireless communication devices or networks. Thus, by way of example but not limitation, mobile device 100 may include a radio device, a cellular telephone device, a computing device, a personal communication system device, or other like movable wireless communication equipped device, appliance, or machine. The term "mobile device" is also intended to include devices which communicate with a personal navigation device, such as by short-range wireless, infrared, wire line connection, or other connection - regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device 100. Also, "mobile device" is intended to include all devices, including wireless communication devices, computers, laptops, etc., which are capable of communication with a server, such as via the Internet, Wi-Fi, or other network, and regardless of whether satellite signal reception, assistance data reception, and/or position-related processing occurs at the device, at a server, or at another device associated with the network. Any operable combination of the above is also considered a "mobile device."
[0032] It should be appreciated that embodiments of the invention as will be hereinafter described may be implemented through the execution of instructions, for example as stored in the memory 105 or other element, by processor 101 of mobile device 100 and/or other circuitry of device 100 and/or other devices. Particularly, circuitry of the device 100, including but not limited to processor 101, may operate under the control of a program, routine, or the execution of instructions to execute methods or processes in accordance with embodiments of the invention. For example, such a program may be implemented in firmware or software (e.g., stored in memory 105 and/or other locations) and may be implemented by processors, such as processor 101, and/or other circuitry of the device. Further, it should be appreciated that the terms processor, microprocessor, circuitry, controller, etc., may refer to any type of logic or circuitry capable of executing logic, commands, instructions, software, firmware, functionality and the like. The functions of each unit or module within the mobile device 100 may also be implemented, in whole or in part, with instructions embodied in a memory, formatted to be executed by one or more general or application-specific processors.[0033] Various terminologies will be described to aid in the understanding of aspects of the invention. Sensor inputs may refer to any input from any of the previously described sensors, e.g., a clock 130, ambient light sensor (ALS) 135, biometric sensor 137 (e.g., heart rate monitor, blood pressure monitor, etc.), accelerometer 140, gyroscope 145, magnetometer 150, orientation sensor 151, fingerprint sensor 152, weather sensor 155 (e.g., temperature, wind, humidity, barometric pressure, etc.), Global Positioning Sensor (GPS) 160, infrared (IR) sensor 153, microphone 165, proximity sensor 167, near field communication (NFC) sensor 169, or camera 170. In particular, some of the sensor inputs may be referred to as "biometric" sensor inputs or biometric sensor information from biometric sensors, which may include a biometric sensor 137 (e.g., heart rate inputs, blood pressure inputs, etc.), fingerprint sensor 152 (e.g., fingerprint input), touch screen 120 (e.g., finger scan or touch input), touch screen 120 (e.g., hand or finger geometry input), pressure or force sensors (e.g., hand or finger geometry), microphone 165 (e.g., voice scan), camera 170 (e.g., facial or iris scan), etc. It should be appreciated that these are just examples of biometric sensor inputs and biometric sensors and that a wide variety of additional sensor inputs may be utilized. Further, other types of sensors may provide other types of inputs generally referred to herein as "non-biometric" sensor inputs/data or just sensor inputs/data (e.g., general sensors). One example of these generalized sensor inputs may be referred to as contextual inputs that provide data related to the current environment that the mobile device 100 is currently in. Therefore, a contextual sensor may be considered to be any type of sensor or combination of sensors that relates to the current context, condition or situation of the mobile device and that may provide contextual sensing information such as light, acceleration, orientation, weather, ambient pressure, ambient temperature, ambient light level, ambient light characteristics such as color constituency, location, proximity, ambient sounds, identifiable indoor and outdoor features, home or office location, activity level, activity type, presence of others, etc. Accordingly, examples of contextual sensors may include ambient light sensor 135, accelerometer 140, weather sensor 155, orientation sensor 151, GPS 160, proximity sensor 167, microphone 165, camera 170, etc. These are merely examples of contextual inputs and contextual sensors. In some implementations, biometric information and contextual information may be extracted from the same sensor such as a single camera or microphone. In some implementations, biometric information and contextual information may be extracted from the same set of sensor data. In some implementations, biometric and contextual information may be extracted from different sensors. In some implementations, biometric and contextual information may be extracted from different sensor data acquired from the same sensor or from a set of sensors. Additionally, data input may refer to user-inputted data for authentication (e.g., names, IDs, passwords, PINs, etc.) or any other data of interest for authentication. It should be noted that in some embodiments biometric sensor information may include raw sensor data or input from one or more biometric sensors, while in other embodiments the biometric sensor information may include only processed data such as fingerprint template information having positions and orientations of various minutiae associated with the fingerprint that allows subsequent recognition of the user yet does not allow recreation of the fingerprint image. In some embodiments, biometric sensor information may allow the authenticating entity to identify the user, while in other embodiments the matching or authentication is performed locally in a secure environment within the mobile device and only a verification output or an output of an authentication system such as an authentication level or an authentication score is provided to the authenticating entity. It should be noted that a sensor scan, such as a fingerprint, iris, voice or retina scan, does not imply a particular method or technique of acquiring sensor data, but rather is intended to more broadly cover any method or technique of acquiring sensor input. More generally, "sensor information" as used herein may include raw sensor data, processed sensor data, information or features retrieved, extracted or otherwise received from sensor data, information about the type or status of the sensor, aggregated sensor data, aggregated sensor information, or other type of sensor information. Similarly, "sensor data" may refer to raw sensor data, sensor input, sensor output, processed sensor data, or other sensor information.
[0034] Embodiments of the invention may relate to the determination of a dynamic (continuously time-varying) trust coefficient, or a trust vector as will be described later. The trust coefficient may convey the current level of authentication of a user of a mobile device 100 such as a smart phone, tablet, smart watch or other personal electronic device. For example, high levels of trust indicated by a high trust coefficient may be obtained by a high resolution fingerprint sensor 152 of mobile device 100 or by combining a user-inputted personal identification number (PIN) with the results from a simplified, less accurate sensor (e.g., a finger scan from a touch screen display 120). In another example, a high level of trust may be achieved with a high trust coefficient when a voice scan from microphone 165 or other soft biometric indicator is combined with a GPS location (e.g., from GPS 160) of a user (e.g., recognized user at office/home). In cases where an accurate biometric indicator is not available but a user has correctly entered a PIN, a moderate trust coefficient may be appropriate. In another example, the trust coefficient may simply convey the level or result of matching (e.g., a matching score or a result of matching) obtained from a fingerprint sensor. Examples of these scenarios will be hereinafter described in more detail.
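Purely by way of illustration, the fusion of such inputs into a single coefficient might be sketched as follows; the input names, nominal strengths, and the complementary fusion rule are assumptions made for exposition and are not taken from the described embodiments.

# Illustrative sketch only: the input names, nominal strengths, and the
# fusion rule below are assumptions for exposition, not part of the
# described embodiments.
INPUT_STRENGTH = {
    "fingerprint_sensor": 0.90,    # high-resolution fingerprint scan
    "pin": 0.50,                   # user-inputted PIN
    "touchscreen_finger": 0.35,    # simplified, less accurate finger scan
    "voice_scan": 0.40,            # soft biometric indicator
    "gps_trusted_location": 0.30,  # recognized office/home location
}

def trust_coefficient(inputs):
    """Fuse one or more authentication inputs into a single coefficient.

    Complementary fusion: each additional input reduces the remaining
    'distrust' in proportion to its strength."""
    distrust = 1.0
    for name in inputs:
        distrust *= 1.0 - INPUT_STRENGTH[name]
    return 1.0 - distrust

print(trust_coefficient(["fingerprint_sensor"]))                  # 0.90
print(trust_coefficient(["pin", "touchscreen_finger"]))           # ~0.68
print(trust_coefficient(["voice_scan", "gps_trusted_location"]))  # ~0.58

Under these assumed strengths, a PIN combined with a low-accuracy touch-screen finger scan approaches the level obtained from a single high-resolution fingerprint sensor, mirroring the combinations discussed above.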
[0035] Transactions made available to a user may be made to depend on the value of the trust coefficient. For example, a user with a high-level trust coefficient may be provided a high level of access to sensitive information or may be provided with the authority to execute financial transactions of greater value; a user with a medium-level trust coefficient may be provided with the authority to execute only small financial transactions; a user with a low-level trust coefficient may only be permitted browser access. A detected spoof attempt or other incorrect authentication result may incur a high mistrust value that requires high-level authentication to overcome.[0036] In some embodiments, a trust coefficient may be calculated (e.g., via a method, function, algorithm, etc.). The trust coefficient may decay with time towards a lower level of trust or mistrust. As will be described, a mobile device and/or a server may determine the trust coefficient. In some embodiments, a continuous authentication engine (CAE), a continuous authentication manager (CAM), and a trust broker (TB) may be configured to dynamically calculate, in real time, a trust coefficient so as to provide continuous or quasi-continuous authentication capability in mobile devices.[0037] Embodiments of the invention may relate to an apparatus and method to perform authentication with an authenticating entity that the user wishes to authenticate with, based upon inputs from a plurality of sensors such as biometric sensors and non-biometric sensors, and/or user data input (e.g., user name, password, etc.). For example, the processor 101 of a mobile device 100 may be configured to: receive sensor data from the set of sensors, form authentication information from the received sensor data, and continuously update the authentication information to the authenticating entity. In particular, mobile device 100 under the control of processor 101 may implement the methodology to be hereinafter described.[0038] With additional reference to FIG. 2, a continuous authentication system 200 is shown that may be implemented by mobile device 100 to perform authentication with an authenticating entity 250. In particular, mobile device 100 may include a plurality of sensors such as biometric sensors and non-biometric sensors, as previously described. Further, mobile device 100, via processor 101, may be configured to implement a continuous authentication system 200 that includes a preference setting function block 210, an authentication strength function block 220, a trust level function block 230, and a trust coefficient calculation function block 240 to implement a plurality of functions.
[0039] These functions may include receiving an authentication request from an authenticating entity 250 (implementing an application 252) that may include a trust coefficient request or a request for other authentication information, based upon one or more of biometric sensor information, non-biometric sensor data, user data input, or time. Some sensor information may be determined on a continuous basis from data sensed continuously. For example, authentication strength function block 220 may retrieve, extract or otherwise receive biometric sensor information from biometric sensors (e.g., hard biometrics and/or soft biometrics), non-biometric sensor data from non-biometric sensors (e.g., non-biometrics), user data input, or other authentication information, which matches, fulfills, satisfies or is consistent with or otherwise incorporates predefined security/privacy preference settings (as determined by preference setting function block 210) in order to form a trust coefficient that is calculated by trust coefficient calculation function block 240. The trust coefficient may be continuously, quasi-continuously or periodically updated within the mobile device 100. The trust coefficient or other authentication information may be transmitted to the authenticating entity 250 for authentication with the authenticating entity in a continuous, quasi-continuous or periodic manner, or transmitted upon request or discretely in time as required by the authenticating entity, e.g., for a purchase transaction. In some implementations, the authentication information may be sent to the authenticating entity 250 based on an interval or elapsing of time, or upon a change in the sensor data or authentication information from the set of sensors. In some implementations, the mobile device 100 may provide continuous authentication by calculating the trust coefficient or other authentication information with or without continuously receiving sensor information. In some implementations, continuous authentication may be provided on-demand by calculating the trust coefficient or other authentication information with or without accessing sensor information.[0040] In one embodiment, the predefined security and privacy preference settings, as set by preference setting function block 210, may be defined by the authenticating entity 250, the mobile device 100, or by the user of the mobile device. The predefined security and privacy preference settings may include types of biometric sensor information, non-biometric sensor data, user data input, or other authentication information to be utilized or not utilized in determining the trust coefficient. Also, the predefined security/privacy preference settings may include required authentication strengths for biometric sensor information and/or non-biometric sensor data in order to determine whether they are to be utilized or not to be utilized. The authentication strength function block 220 may be configured to implement an authentication strength function to determine the authentication strength for a requested hard biometric data input, soft biometric data input, non-biometric data input, sensor data or other authentication information from the corresponding sensor(s) and to pass that authentication strength to the trust coefficient calculation function block 240, which calculates the trust coefficient that may be continuously or non-continuously transmitted to the authenticating entity 250.[0041] For example, an authenticating entity 250 having associated applications 252 may implement such services as bank functions, credit card functions, utility functions, medical service provider functions, vendor functions, social network functions, requests from other users, etc. These types of authenticating entities may require some sort of verification. Embodiments of the invention may be related to continuously updating and transmitting a trust coefficient to an authenticating entity to provide continuous or quasi-continuous authentication.
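As an illustrative sketch only of the flow just described, the blocks of FIG. 2 might be exercised in a loop such as the following; the block numbers come from FIG. 2, while the data shapes, the preference keys, the placeholder formulas, and the one-second update period are assumptions.

import time

def continuous_authentication_loop(sensors, preferences, send_to_entity, period_s=1.0):
    """Minimal sketch of the FIG. 2 flow under the stated assumptions.

    sensors: mapping of sensor name -> callable returning (value, strength).
    preferences: fused settings; the keys used here are assumed, not specified.
    send_to_entity: callable that transmits updates to authenticating entity 250.
    """
    history = []  # persistency input for trust level function block 230
    while True:
        # Receive sensor data permitted by the preference settings (block 210).
        readings = {name: read() for name, read in sensors.items()
                    if name in preferences["approved_sensors"]}
        history.append(readings)
        # Block 220: first metric, the authentication strength of the inputs.
        strength = max((s for _, s in readings.values()), default=0.0)
        # Block 230: second metric, a trust level from persistency over time.
        level = min(1.0, len(history) / preferences["persistence_window"])
        # Block 240: fuse both metrics into a continuously updated coefficient.
        send_to_entity(strength * level)
        time.sleep(period_s)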
[0042] As examples of various terms, a trust coefficient (TC) may be a level of trust based upon a data input, such as user data inputs (e.g., username, password, etc.), non-biometric sensor inputs (e.g., GPS location, acceleration, orientation, etc.), or biometric sensor inputs (e.g., fingerprint scan from a fingerprint sensor, facial or iris scan from a camera, voiceprint, etc.). A trust coefficient may be a composition, aggregation or fusion of one or more data inputs. Also, as will be described, each of these inputs may be given an authentication strength and/or score by authentication strength function block 220 that are used in preparing one or more trust coefficient values by trust coefficient calculation function block 240. An authenticating entity 250 may set a risk coefficient (RC) that needs to be met to create, generate or otherwise form a trust level significant enough to allow for authentication of a mobile device 100 for the particular function to be performed. Therefore, authenticating entity 250 may determine whether mobile device 100 has generated a trust coefficient that is greater than the risk coefficient such that the authenticating entity 250 may authenticate the mobile device 100 for the particular function to be performed. The term trust coefficient may be a part of the trust vector (TV), as will be described in more detail later.[0043] Looking more particularly at the functionality of FIG. 2, continuous authentication system 200 provides a method for continuous authentication. In particular, block 210 implements a security/privacy preference setting function to establish and maintain preference settings for execution. Preference settings as implemented by preference setting function block 210 may include user preferences, institutional preferences, or application preferences. For example, the preference settings may be related to security/privacy settings, security/privacy preferences, authentication strengths, trust levels, authentication methods, decay rate as a function of time, decay periods, preferred trust and credential input/output formats, ranges of scores and coefficients, persistence values, etc. User preferences may include, for example, settings associated with access to different networks (e.g., home network, office network, public network, etc.), geographic locations (e.g., home, office, or non-trusted locations), operational environment conditions, and format settings. In some implementations, user preferences may include customizing the functionality itself, for example, modifying the trust coefficient decay rate as a function of time, changing the decay period, etc.[0044] Institutional preferences may relate to the preferences of an institution, such as a trust broker of a third party service provider (e.g., of the authenticating entity 250), or another party that may wish to impose preferences, such as a wireless carrier, a device manufacturer, the user's employer, etc. Application preferences (e.g., from applications 252 of authenticating entities 250) may relate to the preferences imposed by the application or service that the user wishes to authenticate with, such as a website that the user desires to conduct financial transactions with, submit or receive confidential information to and from, make a purchase from, engage in social networking, etc. For example, the application preferences may include authentication level requirements and trust level requirements.
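A hypothetical sketch of how such preference settings, and the fused settings discussed in the following paragraphs, might be represented; every field name and the arbitration rule are illustrative assumptions rather than a definitive implementation.

from dataclasses import dataclass, field

@dataclass
class PreferenceSettings:
    # Hypothetical fields mirroring the examples in the text; none of these
    # names are mandated by the described embodiments.
    approved_sensors: set = field(default_factory=set)
    preferred_methods: list = field(default_factory=list)
    decay_period_s: float = 60.0     # trust coefficient decay period
    min_strength: str = "medium"     # required authentication strength

def fuse(user: PreferenceSettings, entity: PreferenceSettings) -> PreferenceSettings:
    """Naive arbitration of conflicting settings: intersect approved sensors,
    honor the entity's method ranking where the user allows it, and keep the
    shorter (stricter) decay period."""
    return PreferenceSettings(
        approved_sensors=user.approved_sensors & entity.approved_sensors,
        preferred_methods=[m for m in entity.preferred_methods
                           if m in user.approved_sensors],
        decay_period_s=min(user.decay_period_s, entity.decay_period_s),
        min_strength=entity.min_strength,
    )

# The user prefers voice for convenience; the bank distrusts voice:
user = PreferenceSettings({"fingerprint", "microphone", "gps"}, ["voice", "fingerprint"])
bank = PreferenceSettings({"fingerprint", "camera"}, ["fingerprint", "iris"], 30.0, "high")
print(fuse(user, bank).preferred_methods)   # ['fingerprint']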
[0045] Accordingly, preference setting function block 210 may receive as inputs one or more specified preferences of the user, specified preferences from one or more applications or services from the authenticating entity that the user may wish to interact with, or specified preferences of third party institutions.[0046] In one embodiment, preference setting function block 210 may implement a negotiation function or an arbitration function to negotiate or arbitrate conflicting predefined security and privacy preference settings between the authenticating entity 250 (e.g., application preferences and institutional preferences) and the mobile device 100 (e.g., user preferences), or to create, generate or otherwise form fused security and privacy preference settings, which may be transmitted to the authentication strength function block 220, trust level function block 230, and the trust coefficient calculation function block 240. Thus, preference setting function block 210, which receives various user preferences, institutional preferences and application preferences, may be configured to output fused security/privacy preference settings to negotiate or arbitrate contradictory settings among the mobile device preferences, user preferences, application preferences, institutional preferences, etc. For example, a user of the mobile device 100 may set voice to be the most preferred authentication method for convenience, while an authenticating entity 250 such as a bank may set voice to be a least preferred authentication method due to suspected unreliability. Preference setting function block 210 may implement an arbitration or negotiation function to arbitrate or negotiate between any conflicting predefined security/privacy preference settings, and may output appropriate fused preference settings to the authentication strength function block 220 and trust coefficient calculation function block 240 (e.g., voice from the microphone and iris scan from the camera).[0047] Authentication strength function block 220 may be configured to implement an authentication strength function to determine authentication strength based on, for example, hard biometric, soft biometric or non-biometric information input. As an example, biometric data may be defined into two categories: "hard" biometrics, which may include data for fingerprint recognition, face recognition, iris recognition, etc., and "soft" biometrics that may include clothes color and style, hair color and style, eye movement, heart rate, a signature or a salient feature extracted from an ECG waveform, gait, activity level, etc. Non-biometric authentication data may include a username, password, PIN, ID card, GPS location, proximity, weather, as well as any of the previously described contextual sensor inputs or general sensor inputs. In addition, authentication strength function block 220 may receive sensor characterization data, including, for example, a sensor identification number, sensor fault tolerance, sensor operation environment and conditions that may impact the accuracy of the sensor, etc. Some biometrics information and sensor characterization data may change dynamically and continuously.
[0048] In one embodiment, authentication strength function block 220 may receive data inputs (hard biometrics, soft biometrics, non-biometrics, etc.) from these various biometric and non-biometric sensors and preference data from preference setting function block 210. Based upon this, authentication strength function block 220 may be configured to output a first metric to the trust coefficient calculation block 240 signifying the strength of the biometric or non-biometric sensor data to be used for user authentication. The first metric may be expressed using characterizations such as high, medium, low, or none; a number/percentage; a vector; other suitable formats; etc. The value of this metric may change dynamically or continuously in time as some biometrics information and sensor characterization data or preference settings may change dynamically and continuously.[0049] The strength or reliability of soft and hard biometrics may be dynamic. For example, the user may be requested to enroll her biometric information (e.g., a fingerprint) or to re-authenticate after a certain amount of time following the first enrollment of the biometric information. It may be beneficial to shorten this time interval when/if suspicious use of the mobile device is detected. Similarly, for the sake of a user's convenience, the time interval could be lengthened when/if the device autonomously recognizes, on a continuous basis, cues, e.g., consistent patterns of usage and context, to offset the passage of time and delay the need for re-authentication. Trust level function block 230 may implement a trust level function to analyze persistency over time to determine a trust level. In particular, trust level function block 230 may be configured to analyze the persistency over time of selected user behaviors or contexts and other authentication information. For example, trust level function block 230 may identify and/or analyze behavior consistencies or behavior patterns. Examples of behavior consistencies may include regular walks on weekend mornings, persistency of phone numbers called or texted to and from regularly, network behavior, use patterns of certain applications on the mobile device, operating environments, operating condition patterns, etc. Further, trust level function block 230 may identify and/or analyze other contextual patterns such as persistence of geographical locations, repeated patterns of presence at certain locations at regular times (e.g., at work, home, or a coffee shop), persistence of patterns of network access settings (e.g., home, office, public networks), operating environment patterns, operating condition patterns, etc. Additionally, trust level function block 230 may receive sensor-related characterization data, such as a sensor ID, sensor fault tolerance, sensor operation environment and conditions, etc.[0050] Accordingly, trust level function block 230 may receive as inputs persistency of context and behavior and sensor characterization data. Trust level function block 230 may be configured to output a second metric to the trust coefficient calculation function block 240 indicating a level of trust. The second metric may be expressed using characterizations such as high, medium, low, or none; a number or percentage; components of a vector; or other formats. The value of this metric may change dynamically or continuously in time when persistence of context, behavioral patterns, sensor characterization data, or preference settings change.
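For illustration only, a persistency analysis of this kind might resemble the following sketch; the observation format, the thresholds, and the high/medium/low mapping are assumptions.

from collections import Counter

def trust_level(observations):
    """Map how consistently recent contexts repeat onto a level-of-trust
    metric (illustrative thresholds only)."""
    if not observations:
        return "none"
    counts = Counter(observations)
    # Fraction of observations explained by the single most persistent pattern.
    persistency = counts.most_common(1)[0][1] / len(observations)
    if persistency > 0.8:
        return "high"
    if persistency > 0.5:
        return "medium"
    return "low"

# Repeated presence on the home network yields a high level of trust:
observations = [("network", "home")] * 9 + [("network", "public")]
print(trust_level(observations))   # "high"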
[0051] Further, trust coefficient calculation function block 240 may implement a trust coefficient calculation function to determine the trust coefficient based upon the authentication strength of the received input data from the biometric and non-biometric sensors and the trust level received based on the input data from the biometric and non-biometric sensors. Trust coefficient calculation function block 240 may be configured to receive the first metric of authentication strength from authentication strength function block 220, a second metric of trust level from trust level function block 230, preference settings from preference setting function block 210, as well as time/date input, to determine the trust coefficient. Trust coefficient calculation function block 240 may be configured to continuously or quasi-continuously, or discretely and on demand, output a trust coefficient to authenticating entity 250 in order to provide continuous, quasi-continuous or discrete authentication with authenticating entity 250. [0052] In some embodiments, as will be described in more detail hereinafter, trust coefficient calculation function block 240 may perform processes such as data interpretation and mapping based on a preset look-up table to map the input data and data format into a unified format; data normalization into a predetermined data range; calculations based on a method/formula that may be in accordance with a default or that may be changed based on preference setting changes requested over time by one or more requestors; mapping the calculation results and preferred formats in accordance with preference settings; etc.[0053] Further, in some embodiments, as will be described in more detail hereinafter, the trust coefficient may include composite trust coefficients or trust scores having one or more components. The trust coefficients, scores or levels may be configured as part of a multi-field trust vector. Further, in some embodiments, trust coefficient calculation function block 240 may be configured to output trust coefficient components and include the credentials or other information used to authenticate the user or the device, or to provide other data used to complete a transaction (e.g., data verifying the user is not a computer or robot). In other implementations, trust coefficient calculation function block 240 may output a trust coefficient that is utilized by another system element, such as a trust broker, to release credentials or provide other data used to complete a transaction.[0054] Based on the preference settings, the output format can be defined or changed. The trust coefficient components may change from one time to another due to preference setting changes. The trust coefficient components may change from one request to another due to differences between preference settings and different requestors. For example, an application preference or an institutional preference may be used to provide parameters to formulas that may configure or control the generation of trust coefficient components to meet specific requirements, such as required for the use of particular authentication methods or for altering the time constants of trust coefficient decay.
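A sketch of these calculation steps (data mapping, calculation formula, and result mapping), assuming an exponential decay whose time constant is supplied through the preference settings; the look-up table contents, the formula, and all constants are illustrative assumptions.

import math
import time

# Preset look-up table mapping characterizations into a unified numeric format.
STRENGTH_MAP = {"none": 0.0, "low": 0.25, "medium": 0.5, "high": 1.0}

def calculate_trust_coefficient(strength, level, last_auth_time, preferences, now=None):
    now = time.time() if now is None else now
    # 1. Data interpretation/mapping into a unified format (numbers pass through).
    s = STRENGTH_MAP.get(strength, strength)
    t = STRENGTH_MAP.get(level, level)
    # 2. Calculation formula: fuse the two metrics, then decay over time; the
    #    decay time constant is a parameter provided via the preference settings.
    tau = preferences.get("decay_time_constant_s", 300.0)
    raw = s * t * math.exp(-(now - last_auth_time) / tau)
    # 3. Result mapping into the requestor's preferred output range.
    lo, hi = preferences.get("output_range", (0, 100))
    return lo + raw * (hi - lo)

# 150 seconds after a high-strength, medium-trust authentication:
print(calculate_trust_coefficient("high", "medium", time.time() - 150, {}))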
[0055] It should be appreciated that the output of trust coefficient calculation function block 240 may change in various manners in time as a user interacts in various ways with the mobile device 100. An example will be provided hereinafter, illustrating the dynamic nature of trust coefficients and continuous authentication with reference to FIG. 3. FIG. 3 illustrates the dynamic nature of the trust coefficient in the continuous authentication methodology. For example, the y-axis illustrates a dynamic trust coefficient with various levels (e.g., level 4 - complete trust; level 3 - high trust; level 2 - medium trust; level 1 - low trust; level 0 - mistrust; and level -1 - high mistrust) and the x-axis represents time. [0056] For example, at point a) the mobile device may begin an authentication process with a non-initialized status and a trust coefficient level of zero (identified at the border between level 1 low trust and level 0 low mistrust). At point b), the mobile device begins high-level authentication. For example, at point b'), high-level authentication has been achieved (e.g., with a fingerprint scan from a fingerprint sensor and a user ID and password). At this point b'), a completely trusted status has been acquired (e.g., level 4 complete trust). However, as shown at point c), the trust level begins to decline as time progresses. At point d), re-authentication of the trust coefficient is needed as the trust level has decreased down to level 3 trust. At this point, another input may be needed such as an eye scan via a camera. Based upon this, at point d'), the completely trusted status has been re-acquired.[0057] Again, at point e), as time proceeds, the trust level again decays. Then, at point f), re-authentication is needed to bring the trust coefficient back to level 4 complete trust. At point f'), the completely trusted status has been reacquired based upon an additional sensor input. For example, a previous sensor input may be re-inputted (e.g., an additional fingerprint scan) or a new input may be acquired such as a voice scan through a microphone, which again brings the trust coefficient back to a complete level of trust. As previously described, the previous authentication has brought the dynamic trust coefficient back and forth to the level of complete trust.[0058] However, at point g), the trust level begins to decay significantly all the way to point h), to where the dynamic trust coefficient has completely fallen out of trusted status to a level zero trust level (low mistrust) and re-authentication needs to reoccur. At point h'), a completely trusted status has been re-acquired. For example, the user may have inputted a fingerprint scan via a fingerprint sensor as well as a user ID and password. However again, at point i), as time increases the trust level may begin to decay back to point j), a low trust level.[0059] At this point, request for service provider access may only need a medium trust level (e.g., level two), so at point j'), a medium trust level is acquired, such as by just a low-resolution touchscreen finger sensor input. Again at point k), as time progresses the dynamic trust coefficient declines all the way back to a level zero low mistrust (point l), where the trust coefficient is maintained at a baseline level of mistrust. At point l'), medium level authentication begins and at point l''), medium level trusted status is re-acquired (e.g., by a touch-screen finger scan). However, at point m), the trust level begins to decay as time proceeds down to the baseline low mistrust level at point n). An attempted spoofing attack may be detected at point o). At point o'), the spoofing has failed and a completely mistrusted status has occurred (e.g., level -1 high mistrust), where it is retained for a time until point p).
[0060] With time, the high level of mistrust diminishes back to a baseline mistrust level. At point q), the decay is stopped at the baseline mistrusted status. At point r), medium level authentication begins again. At point r'), the medium level authentication has failed and a low mistrusted status level has been acquired (e.g., level 0). For example, the finger scan via the touch-screen may have failed. At this point, the trust level is retained for a time, then begins to decay at point s) back to the baseline level of mistrust at point t). At point t), the trust level is retained at a low level of mistrust until point u). A low level of authentication may begin at point u). For example, a low level authentication such as a GPS location may be acquired at point u') such that there is at least a low level of trust until point w). However, yet again, as time increases, the level of the dynamic trust coefficient begins to decline to point x), a low trust level; however, the decline may be stopped at point x') (at a baseline low-level trusted status).
Examples of these sensor inputs may include voice inflections, heartbeat variations, rapid eye movements, various hand gestures, finger tapping, behavior changes, etc. Further, as previously described, time history 410 may also be utilized as an input. These types of biometrics may be determined, registered, recorded, etc., in association with appropriate sensors 422 of the hardware 420 of the mobile device for generating trust coefficients, as previously described. Such sensors include biometric sensors and non-biometric sensors, as previously described. Examples of these sensors 422 include all of the previously described sensors, such as a fingerprint sensor, camera sensor, microphone, touch sensor, accelerometer, etc.[0064] Further, the hardware 420 may include one or more processing engines 424 and awareness engines 426 to implement analytical models 442 that may analyze the input from the variety of sensors in order to perform continuous or quasi-continuous authentication of the user. These analytical models 442 may take into account security and privacy settings (e.g., predefined security/privacy preference settings). As examples, types of analytical models 422 utilized may include identification models, multimodal models, continuous identification models, probabilistic-based authentication models, etc.[0065] These analytical models may be utilized for continuous authentication by the generation of trust coefficients for use with external sites, authenticating entities, applications or other users with which the user of the mobile device wishes to interact with. Examples of these types of application 450 interactions may include access control 452 (e.g., device access, application access, cloud access, etc.), e-commerce 454 (e.g., credit card transactions, payment methods, ATM, banking, etc.), personalized services 546 (e.g., user-friendly applications, personal health monitoring, medical applications, privacy guards, etc.), or other functions 458 (e.g., improvement of other applications based on customized biometric information, etc.).[0066] With additional reference to FIG. 5, it should be appreciated that the mobile device may implement a system 500 that allows biometrics 502 of a variety of types (e.g. biological, behavioral, physical, hard, soft, etc.) to be combined with or derived from sensor data 504 including location, time history, etc., all of which may be collected and processed to perform strong authentication via a trust coefficient for continuous authentication. These types of measurements may be recorded and utilized for one or more machine learning processes 506. Based upon this collection of data, the continuous authentication process 508 may be utilized, as previously described. In particular, as a result of the collected data, various features may be provided, such as continuous authentication of the user, better utilization of existing sensors and context awareness capabilities of the mobile device, improved accuracy in the usability of biometrics, and improved security for interaction with service providers, applications, devices and other users.[0067] An example of a mobile device utilizing the previously described functionality for continuous authentication with a trust coefficient will be hereinafter described, with reference to FIG. 6. For example, a conventional system that provides authentication when a matching score passes a full access threshold, as shown by graph 602, typically uses only one biometric input (e.g. 
a fingerprint sensor) for a one-time authentication, and each access is independently processed every time. In the conventional approach, as shown with reference to graph 604, if the one-time authentication (e.g. fingerprint sensor) is not achieved (e.g. the full access threshold not being passed), then no access occurs. On the other hand, utilizing a continuous authentication system, authentication may be continuously and actively performed, and biometrical information may be adaptively updated and changed. Thus, as shown in graph 612, various access controls may be continuously collected and updated, and, as shown in graph 614, based upon this continuous updating for continuous authentication (e.g. first a fingerprint scan, next a facial scan from a camera, next a GPS update, etc.), access control can reach 100% and access will be authenticated. Further, historic information can be collected to improve recognition accuracy.[0068] With reference to FIG. 7, detection of intruders may be improved by utilizing a continuous authentication system. Utilizing conventional biometrics, once the full access threshold is met (graph 702), access control is granted (graph 704) and use by a subsequent intruder may not be identified. On the other hand, by utilizing continuous authentication data (graph 712), inputs may be continuously collected (e.g. GPS location, touch screen finger scan, etc.), and even though access control is met (graph 714) and access is granted, an intruder may still be detected. For example, an indication of an intruder may be detected (e.g., an unknown GPS location), access control will drop, and access will be denied until a stronger authentication input is requested and received by the mobile device, such as a fingerprint scan.[0069] With additional reference to FIG. 8, it should be appreciated that a wide variety of traditional and additional authentication technologies may be utilized. For example, as shown in block 810, upper-tier traditional authentication technologies may include username, password, PIN, etc. Medium-tier traditional authentication technologies, shown in block 812, may include keys, badge readers, signature pads, RFID tags, logins, predetermined call-in numbers, etc. Further, as shown in block 814, low-tier traditional authentication technologies may include location determinations (e.g., at a work location), questions and answers (e.g., Turing test), general call-in numbers, etc. It should be appreciated that the previously described mobile device utilizing continuous authentication to continuously update a trust coefficient may utilize these traditional technologies, as well as the additional authentication technologies to be hereinafter described.[0070] Further, embodiments of the invention related to continuous authentication may include a wide variety of additional biometric authentication technologies. For example, as shown in block 816, upper-tier biometric authentication technologies may include fingerprint scanners, multi-fingerprint scanners, automatic fingerprint identification systems (AFIS) that use live scans, iris scans, continuous fingerprint imaging, various combinations, etc. Further, medium-tier biometric authentication technologies may include facial recognition, voice recognition, palm scans, vascular scans, personal witness, time history, etc. 
Moreover, as shown in block 820, lower-tier biometric authentication technologies may include hand/finger geometry, cheek/ear scans, skin color or features, hair color or style, eye movements, heart rate analysis, gait determination, gesture detection, behavioral attributes, psychological conditions, contextual behavior, etc. It should be appreciated that these are just examples of biometrics that may be utilized for continuous authentication.[0071] With additional reference to FIG. 9, as previously described, a trust coefficient (TC) may convey the current level of authentication of a user of a mobile device 100. As will be described in more detail hereinafter, mobile device 100 and/or authenticating entity 250 may determine the trust coefficient. As will be described, in some embodiments, a continuous authentication engine (CAE), a continuous authentication manager (CAM), and a trust broker (TB) may be configured to dynamically calculate, in real time, a trust coefficient so as to provide continuous or quasi-continuous authentication capability in mobile devices. Further, the trust coefficient (TC) may be included as a component of a trust vector (TV). The TV may include a composition of one or more data inputs, sensor information, or scores. In particular, each of the TV inputs may be given authentication strengths and/or scores. Additionally, in some embodiments, the mobile device 100 may include a local trust broker (TB) 902 and the authenticating entity 250 may include a remote trust broker (TB) 922. In some embodiments, local TB 902 may transmit a privacy vector (PV) to the authenticating entity 250 that includes predefined user security preferences, such as the types of biometric sensor information, non-biometric sensor data, and/or user data input that the user approves of. Similarly, remote TB 922 of the authenticating entity 250 may transmit a privacy vector (PV) to the mobile device 100 that includes predefined security preferences, such as the types of biometric sensor information, non-biometric sensor data, and/or user data input that the authenticating entity approves of. These types of privacy vectors and trust vectors will be described in more detail hereinafter. In particular, local TB 902 of the mobile device may negotiate with the remote TB 922 of the authenticating entity 250 to determine a trust vector TV that incorporates or satisfies the predefined user security preferences, as well as the predefined security preferences of the authenticating entity 250, such that a suitable TV that incorporates or satisfies the authentication requirements of the authenticating entity 250 and the mobile device 100 may be transmitted to the authenticating entity 250 to authenticate mobile device 100.[0072] In one embodiment, mobile device 100 may include a continuous authentication engine 906 that is coupled to a continuous authentication manager 904, both of which are coupled to the local TB 902. With this implementation, the local TB 902 may communicate with the remote TB 922 of the authenticating entity 250. As one example, the continuous authentication manager 904 may consolidate on-device authentication functions such as interaction with the continuous authentication engine 906, and may interact with application program interfaces (APIs) on the mobile device 100 for authentication-related functions. 
In some implementations, the local TB 902 may be configured to maintain user security/privacy preferences that are used to filter the data offered by the local TB 902 in external authentication interactions with the remote TB 922 of the authenticating entity 250.[0073] As one example, local TB 902 may interact with the remote TB 922, manage user credentials (e.g. user names, PINs, digital certificates, etc.), determine what types of credentials or information (e.g., user data input, sensor data, biometric sensor information, etc.) are to be released to the remote TB 922 of the authenticating entity (e.g., based on privacy vector information and negotiations with the remote TB 922), assemble and send trust and privacy vectors (TVs and PVs), manage user security/privacy settings and preferences, and/or interface with the continuous authentication manager 904.[0074] In one embodiment, the continuous authentication manager 904 may perform functions including interacting with the local TB 902, controlling how and when trust scores for the trust vectors (TVs) are calculated, requesting specific information from the continuous authentication engine 906 when needed (e.g., as requested by the local trust broker 902), providing output to APIs of the mobile device 100 (e.g., device-level trust controls, keyboard locks, unauthorized use, etc.), and/or managing the continuous authentication engine 906 (e.g., issuing instructions to or requesting actions from the continuous authentication engine to update trust scores and/or check sensor integrity when trust scores fall below a threshold value, etc.). In some implementations, the local trust broker 902 may determine, in cooperation with the continuous authentication manager 904 and the continuous authentication engine 906, one or more sensor data, biometric sensor information, data input, sensor data scores, biometric sensor information scores, data input scores, trust coefficients, trust scores, credentials, authentication coefficients, authentication scores, authentication levels, authentication system outputs, or authentication information for inclusion in the trust vector. [0075] In one embodiment, the continuous authentication engine 906 may perform one or more functions including responding to the continuous authentication manager 904; generating trust vector (TV) components; calculating TV scores, values or levels; providing raw data, template data or model data when requested; generating or conveying conventional authenticators (e.g., face, iris, fingerprint, ear, voice, multimodal biometrics, etc.), times/dates, hard biometric authenticators, soft biometric authenticators, hard geophysical authenticators, or soft geophysical authenticators; and accounting for trust-level decay parameters. Hard biometric authenticators may include largely unique identifiers of an individual such as fingerprints, facial features, iris scans, retinal scans or voiceprints, whereas soft biometric authenticators may include less unique factors such as persisting behavioral and contextual aspects, regular behavior patterns, face position with respect to a camera on a mobile device, gait analysis, or liveness. Thus, in one embodiment, the continuous authentication engine 906 may calculate TV scores based upon TV components that are based upon data inputs from one or more non-biometric sensors, biometric sensors, user data input from a user interface, or other authentication information as previously described. 
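To make this engine/manager division of labor concrete, the following is a minimal, hypothetical sketch; the class names, the threshold value, and the exponential decay model are illustrative assumptions and are not details fixed by this disclosure.

```python
# Hypothetical sketch only: names, threshold, and decay model are assumed.
import math
import time

class ContinuousAuthenticationEngine:
    """Tracks a composite trust score derived from TV components."""
    def __init__(self, decay_constant_s=300.0):
        self.score = 0.0                 # last composite trust score in [0, 1]
        self.last_update = time.time()
        self.decay_constant_s = decay_constant_s

    def current_score(self):
        # Trust decays exponentially between authentication events.
        elapsed = time.time() - self.last_update
        return self.score * math.exp(-elapsed / self.decay_constant_s)

    def ingest_authentication(self, strength):
        # strength in [0, 1]: e.g., ~1.0 for a verified fingerprint,
        # ~0.3 for presence at a known GPS location.
        self.score = max(self.current_score(), strength)
        self.last_update = time.time()

class ContinuousAuthenticationManager:
    """Decides when the engine must refresh the trust score."""
    def __init__(self, engine, threshold=0.5):
        self.engine = engine
        self.threshold = threshold

    def check(self):
        # Mirrors the behavior described above: when the trust score
        # falls below a threshold, request a stronger input.
        if self.engine.current_score() < self.threshold:
            self.request_stronger_input()

    def request_stronger_input(self):
        print("trust below threshold: requesting re-authentication")
```

In this sketch the manager polls the engine; an event-driven design tied to sensor interrupts would serve the described behavior equally well.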
As previously described, there is a wide variety of different types of sensors that may provide this type of sensor data, such as one or more cameras (front side and/or backside), microphones, proximity sensors, light sensors, IR sensors, gyroscopes, accelerometers, magnetometers, GPS, temperature sensors, humidity sensors, barometric pressure sensors, capacitive touch screens, buttons (power/home/menu), heart rate monitors, ECG sensors, fingerprint sensors, biometric sensors, biometric keyboards, etc. A wide variety of these different types of sensors have been described in detail previously and are well known to those skilled in the art.[0076] Further, it should be appreciated that, by utilizing the continuous authentication manager 904 and the continuous authentication engine 906 in cooperation with local TB 902, local TB 902 may periodically, continuously or quasi-continuously update one or more components of the TV in the authentication response to the remote TB 922 of the authenticating entity to allow for continuous authentication of the mobile device 100 with the authenticating entity.[0077] With additional reference to FIG. 10, a variety of different implementations of the trust broker may be configured to support one or more of the following types of trust-broker interactions. For example, with reference to trust-broker interaction 1010, each device (e.g., Device A - mobile and Device B - authenticating entity such as another mobile device, e.g., peer-to-peer) may include a trust broker that interacts with a continuous authentication manager (CAM) and a continuous authentication engine (CAE) on each device. In another example, a trust-broker interaction 1020 conveys an interaction between a user device and a remote (cloud-based) service or application. Both sides include a trust broker; the continuous authentication manager function and the continuous authentication engine function are enabled on the user device side, but are optional on the service/application device side. The continuous authentication engine and continuous authentication manager may be used on the application/service device side to configure the remote trust broker or to provide the ability for the user device to authenticate the application/service device. In yet another example, a cloud-based trust-broker interaction 1030 may be utilized. In this example, the trust broker associated with a mobile device may be located partially or completely away from the mobile device, such as on a remote server. The trust-broker interaction with the continuous authentication manager and/or continuous authentication engine of the user device may be maintained over a secure interface. The continuous authentication manager function and the continuous authentication engine function may be optional on the application/service device side.[0078] With additional reference to FIG. 11, in one embodiment, local trust broker (TB) 902 of mobile device 100 may be configured to exchange one or more privacy vectors (PVs) and trust vectors (TVs) with authenticating entity 250 for authentication purposes. The PVs and TVs may be multi-field messages used to communicate credentials, authentication methods, user security/privacy preferences, information or data. In particular, the TV may comprise a multi-field data message including sensor data scores, biometric sensor information scores, user data input, or authentication information to match or satisfy the authentication request from the authenticating entity 250. 
The PVs may be used to communicate the availability of authentication information and/or to request the availability of authentication information. The TVs may be used to request or deliver specific authentication data, information and credentials. The TV may include one or more trust scores, trust coefficients, aggregated trust coefficients, authentication system output, or authentication information.[0079] For example, as can be seen in FIG. 11, authenticating entity 250 may initiate a first PV request 1100 to mobile device 100. The PV request 1100 may include a request for authentication and additional data (e.g., authentication credentials, authentication methods, authentication data requests, etc.). This may include specific types of sensor data, biometric sensor information, user input data requests, user interface data, or authentication information requests. The PV request 1100 may occur after an authentication request has been received by the mobile device 100 from the authenticating entity 250. Alternatively, an authentication request may be included with the PV request 1100. Next, mobile device 100 may submit a PV response 1105 to the authenticating entity 250. This may include the offer or availability of user authentication resources and additional data (e.g. authentication credentials, authentication methods, authentication data, user information, user credentials, or authentication information). Again, these are the types of sensor data, biometric sensor information, user data input, or authentication information that match or satisfy predefined user security/privacy preferences and/or settings. Based upon this, the authenticating entity 250 may submit a TV request 1110 to the mobile device 100. The TV request 1110 may request authentication credentials and data (e.g. sensor data, biometric sensor information, user data input, etc.), and supply authentication parameters (e.g. methods, persistence, etc.). In response, mobile device 100 may submit a TV response 1115. The TV response 1115 may include authentication credentials, requested data (e.g. sensor data, biometric sensor information, user data input, one or more trust coefficients, authentication information, etc.), and authentication parameters (e.g. methods, persistence, etc.). It should be appreciated that the trust broker of the mobile device 100 may negotiate with the trust broker of the authenticating entity 250 to determine a TV response 1115 that incorporates or satisfies both the predefined user security/privacy preferences and the authentication requirements of the authenticating entity via this back-and-forth of PVs and TVs. Authentication parameters may include, for example, parameters provided by the authenticating entity that describe or otherwise determine which sensor inputs to acquire information from and how to combine the available sensor information. In some implementations, the authentication parameters may include a scoring method and a scoring range required by the authenticating entity, how to calculate a particular trust score, how often to locally update the trust score, and/or how often to provide the updated trust score to the authenticating entity. A persistence parameter may include, for example, a number indicating the number of seconds or minutes for which a user remains authenticated until an updated authentication operation is required. The persistence parameter may be, for example, a time constant by which the trust coefficient or trust score decays over time. 
The persistence parameter may be dynamic, in that the numerical value may change with time, with changes in location or behavior of the user, or with the type of content requested. Thus, in one embodiment, the local trust broker 902 of the mobile device 100 may determine whether the PV request 1100 matches, incorporates, or satisfies predefined user security/privacy preferences and, if so, the trust broker may retrieve, extract or otherwise receive the sensor data from the sensor, the biometric sensor information from the biometric sensor, the user data input, and/or authentication information that matches or satisfies the PV request 1100. The mobile device 100 may then transmit the TV 1115 to the authenticating entity 250 for authentication with the authenticating entity. However, if the PV request 1100 does not match or otherwise satisfy the predefined user security/privacy preferences, the local trust broker may transmit a PV response 1105 to the authenticating entity 250 including predefined user security/privacy preferences having types of user-approved sensor data, biometric sensor information, user data input and/or authentication information. The authenticating entity 250 may then submit a new negotiated TV request 1110 that matches or satisfies the request of the mobile device 100. In this way, the trust broker of the mobile device 100 may negotiate with the trust broker of the authenticating entity 250 to determine a TV that matches or satisfies the predefined user security/privacy preferences and that matches or satisfies the authentication requirements of the authenticating entity 250. In this way, the PV and TV requests and responses may be used to exchange authentication requirements as well as other data.[0080] In some examples, the PV is descriptive; for example, it may include statements of the form: "this is the type of information I want", or "this is the type of information I am willing to provide". Thus, the PV may be used to negotiate authentication methods before actual authentication credentials are requested and exchanged. On the other hand, the TV may be used to actually transfer data and may include statements of the form: "send me this information, using these methods" or "this is the information requested". In some examples, the TV and PV can be multi-parameter messages in the same format. For example, a value in a field in a PV may be used to indicate a request for or availability of a specific piece of authentication information. The same corresponding field in a TV may be used to transfer that data. As another example, a value of a field of the PV may be used to indicate availability of a particular sensor on a mobile device such as a fingerprint sensor, and a corresponding field in the TV may be used to transfer information about that sensor such as raw sensor data, sensor information, a trust score, a successful authentication result, or authentication information. 
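A minimal sketch of that field correspondence is shown below; the field names and the dictionary-based format are assumptions for illustration, since the disclosure does not fix a wire format.

```python
# Illustrative only: field names and structure are assumed, not specified.
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class PrivacyVector:
    # Inbound: 1 requests a kind of information; outbound: 1 offers it.
    fields: Dict[str, int] = field(default_factory=dict)

@dataclass
class TrustVector:
    # The same field names carry the actual data, scores, or results.
    fields: Dict[str, Any] = field(default_factory=dict)

# Authenticating entity asks which sensors are available:
pv_request = PrivacyVector(fields={"fingerprint": 1, "gps_location": 1})

# Mobile device answers per its user privacy settings (GPS withheld):
pv_response = PrivacyVector(fields={"fingerprint": 1, "gps_location": 0})

# The corresponding TV field then transfers the negotiated data:
tv_response = TrustVector(fields={"fingerprint": {"match_score": 87}})
```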
[0081] In some examples, the TV may be used to transfer data in several categories as requested by the PV, for example: 1) credentials that may be used to authenticate, e.g., user name, password, fingerprint matching score, or certificate; 2) ancillary authentication data such as specific authentication methods or an updated trust coefficient; 3) optional data such as location, contextual information, or other sensor data and sensor information that may be used in authentication, such as a liveness score or an anti-spoof score; and/or 4) parameters used to control the continuous authentication engine, such as sensor preferences, persistence, time constants, time periods, etc. In some examples, requests and responses may be at different levels and not always include individual identification (e.g., "is this a real human?", "is this device stolen?", "is this user X?", or "who is this user?"). According to some examples, various entities that may request authentication may each have their own respective, flexible authentication schemes, but the trust broker, in negotiation using PVs and TVs, allows the use of user security and privacy settings to negotiate the data offered before the data is transmitted. [0082] With additional reference to FIG. 12, examples of TV components 1202 and PV components 1204 will be described. In particular, a better understanding of the aforementioned features of the PVs and the TVs, according to some examples, may be seen with reference to FIG. 12. For example, various TV components 1202 may be utilized. In this example, TV components 1202: TC1; TC2; TC3 . . . TCn are shown. As examples, these components may form part or all of a multi-field data message. The components may be related to session information, user name, password, time/date stamp, hard biometrics, soft biometrics, hard geophysical location, soft geophysical location, authentication information, etc. These may include user data input, sensor data or information, and/or scores from sensor data, as previously described in detail. Additionally, for inbound TVs from the authenticating entity there may be indications as to whether the component is absolutely required, suggested, or not at all required. For example, this may be a value from zero to one. As to outbound TVs from the mobile device to the authenticating entity, sensor fields may be included to indicate whether the specific sensors are present or not present (e.g. one or zero) as well as sensor data, sensor information, scoring levels, or scoring values. Such scoring values may be pass or not pass (e.g. one or zero) or they may relate to an actual score value (e.g. 0 - 100 or 0 - 255). Therefore, in some embodiments, the TV may contain specific authentication requests, sensor information or data, or other authentication information.[0083] Further, the PV components 1204 (e.g., PV components 1204: PC1; PC2; PC3 . . . PCn) may describe the request for the availability of authentication devices or authentication information, and indicate permission (or denial) of the request to provide data or information associated with each device. For example, for inbound PVs from an authenticating entity to a mobile device, various fields may include required fields (e.g., 0 or 1), pass/fail (e.g., 0 or 1), values, level requirements, etc. 
For example, for outbound PVs from the mobile device to the authenticating entity, the fields may include available fields (e.g., 0 or 1), preferences, user-approved preferences or settings that can be provided (e.g., 0 or 1), enumeration of levels that can be provided, etc.[0084] According to some examples, the TV may include a wide variety of different types of indicia of user identification/authentication. Examples of these may include session ID, user name, password, date stamp, time stamp, trust coefficients or trust scores based upon sensor device input from the previously described sensors, fingerprint template information, template information from multiple fingerprints, fingerprint matching score(s), face recognition, voice recognition, face location, behavior aspects, liveness, GPS location, visual location, relative voice location, audio location, relative visual location, altitude, at home or office, on travel or away, etc. Accordingly, these TV components may include session information, conventional authorization techniques, time/date, scoring of sensor inputs, hard biometrics, soft biometrics, hard geophysical information, soft geophysical information, etc. In some implementations, visual location may include input from a still or video camera associated with the mobile device, which may be used to determine the precise location or general location of the user, such as in a home office or out walking in a park. Hard geophysical information may include GPS information or video information that clearly identifies the physical location of the user. Soft geophysical information may include the relative position of a user with respect to a camera or microphone, general location information such as at an airport or a mall, altitude information, or other geophysical information that may fail to uniquely identify where a user is located.[0085] It should be appreciated that a wide variety of TV components may be utilized with a wide variety of different types of sensor inputs, and the TV components may include the scoring of those TV components. Additional examples may include one or more TV components associated with sensor output information for iris, retina, palm, skin features, cheek, ear, vascular structure, hairstyle, hair color, eye movement, gait, behavior, psychological responses, contextual behavior, clothing, answers to questions, signatures, PINs, keys, badge information, RFID tag information, NFC tag information, phone numbers, personal witness, and time history attributes, for example.[0086] It should be appreciated that many of the trust vector components may be available from sensors that are installed on the mobile device, which may be typical or atypical depending on the mobile device. Some or all of the sensors may have functionality and interfaces unrelated to the trust broker. In any event, an example list of sensors contemplated may include one or more of the previously described cameras, microphones, proximity sensors, IR sensors, gyroscopes, accelerometers, magnetometers, GPS or other geolocation sensors, barometric pressure sensors, capacitive touch screens, buttons (power/home/menu), heart rate monitors, fingerprint sensors or other biometric sensors (stand-alone or integrated with a mouse, keypad, touch screen or buttons). 
It should be appreciated that any type of sensor may be utilized with aspects of the invention.[0087] It should be appreciated that the local trust broker 902 of the mobile device 100, utilizing the various types of TVs and PVs, may provide a wide variety of different functions. For example, the local trust broker may provide various responses to authentication requests from the authenticating entity 250. These various responses may be at various levels and may not always include individual identifications. For example, some identifications may be for liveness or a general user profile. As to other functions, the local trust broker may be utilized to manage user credentials and manage authentication privacy. For example, functions controlled by the trust broker may include storing keys and credentials for specific authentication schemes, providing APIs to change user security/privacy settings in response to user security and privacy preferences, providing an appropriate response based on user security/privacy settings, interacting with a CAM/CAE, interacting with an authentication system, or not revealing personal identities or information to unknown requests. Local trust broker functionality may also provide responses in the desired format. For example, the TV may provide a user name/password or digital certificate in the desired format. The local trust broker functionality may also include managing the way a current trust coefficient value affects the device. For example, if the trust coefficient value becomes too low, the local trust broker may lock or limit accessibility to the mobile device until proper authentication by a user is received. Trust broker functionality may include requesting the continuous authentication manager to take specific actions to elevate the trust score, such as asking the user to re-input fingerprint information. Furthermore, the trust broker functionality may include integrating with systems that manage personal data. For example, these functions may include controlling the release of personal information or authentication information that may be learned over time by a user profiling engine, or using that data to assist authentication requests. It should be appreciated that the previously described local trust broker 902 of the mobile device 100 may be configured to flexibly manage different types of authentication and private information exchanges. Requests and responses may communicate a variety of authentication-related data that can be generic, user specific, or authentication-method specific.[0088] With reference to FIG. 13A, an example of operations of trust vector (TV) component calculation block 240 that may perform TV component calculations will be described. It should be noted that one or more trust coefficients, levels or scores may be included as a component in the trust vector, so the term TV is used in place of trust coefficient hereinafter. As previously described, inputs from the authentication strength block 220, inputs from preference settings block 210, inputs from trust level block 230, and times/dates may be inputted into the TV component calculation block 240. Based upon the TV component calculation block 240, one or more TV component values 273 and TV composite scores 275 may be outputted to an authenticating entity for continuous authentication. 
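As a rough, hypothetical rendering of the pipeline that block 240 implements (mapping, normalization, calculation, result mapping, per FIG. 13A), consider the sketch below; the weights and the combining formula are placeholders, not the formula of FIG. 13E.

```python
# Placeholder formula and weights; FIG. 13E defines the actual calculation.
import math

STRENGTH_MAP = {"high": 4, "medium": 2, "low": 1, "none": 0}  # Ah/Am/Al/An
TRUST_MAP = {"high": 4, "medium": 2, "low": 1, "none": 0}     # Sh/Sm/Sl/Sn

def normalize(value, max_value=4.0):
    # Data normalization: map a level into the range [0, 1].
    return value / max_value

def tv_component(strength, trust_level, elapsed_s, decay_period_s=600.0):
    a = normalize(STRENGTH_MAP[strength])      # authentication strength
    s = normalize(TRUST_MAP[trust_level])      # trust level
    t = math.exp(-elapsed_s / decay_period_s)  # time decay between events
    score = (0.6 * a + 0.4 * s) * t            # placeholder combination
    return 2.0 * score - 1.0                   # result mapping into [-1, 1]

# Period P1 of FIG. 13F: A = 4, S = 4, slight decay over time.
print(tv_component("high", "high", elapsed_s=60.0))   # prints roughly +0.8
```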
As previously described, based upon the preference settings from preference settings block 210, trust level inputs from trust level block 230, and authentication strength inputs from authentication strength block 220, TV component values 273 and TV composite scores 275 may be calculated and transmitted as needed to the authenticating entity. It should be appreciated that the output format of the TV component values 273 and TV composite scores 275 may be defined and/or changed from one time to another due to preference setting changes, and/or may change from one request to another request due to differences between preference settings of different requestors, and/or may change or otherwise be updated based on one or more continuous authentication parameters such as time constant, time delay, sensor data, sensor information, or scoring method. Also, as previously described, the preference settings block 210 may implement a negotiation function or an arbitration function to negotiate or arbitrate conflicting predefined security/privacy preference settings between the authenticating entity and the mobile device, or to form fused preference settings. In any event, as previously described, TV component values 273 and TV composite scores 275 may be calculated continuously and transmitted as needed to an authenticating entity for continuous, quasi-continuous, or discrete authentication with the authenticating entity.[0089] It should be appreciated that inputs from the elements of the continuous authentication system 200, including preference settings, authentication strengths, trust levels, and time, may be mapped into a required or unified format, such as by the use of a look-up table or other algorithm, to output a trust vector (TV) or trust vector components in a desired format. Resulting data may be normalized into a predetermined data range before being presented as inputs to the calculation method, formula or algorithm used by the TV component calculation block 240 to calculate components of the trust vector output, including TV component values 273 and TV composite scores 275.[0090] As an example, as shown in FIG. 13A, authentication strengths, trust levels, time and preference settings may be inputted into data mapping blocks 1310, further normalized through data normalization blocks 1320, then transmitted to the calculation method/formula block 1330 (e.g., for calculating TV values including TV component values 273 and TV composite scores 275) and through calculation result mapping block 1340 for mapping; the resulting TV, including TV component values 273 and TV composite scores 275, is thereby normalized, mapped and outputted.[0091] As to data mapping 1310, data mapping may be based on a preset look-up table to map the inputs of data formats into a unified format. As to data normalization 1320, different kinds of input data may be normalized into a predetermined data range. As to the calculation method 1330 of the TV component calculation block 240, a default calculation formula may be provided; the calculation formula may be changed based on preference setting changes over time, based upon preference settings from the mobile device and/or different requestors, etc. As to calculation result mapping 1340, the calculated results for the TV, including TV component values 273 and TV composite scores 275, may be mapped to predetermined preference setting data formats.[0092] With reference to FIGs. 
13B-D, examples of data mapping and data normalization for the formatting, mapping, and normalizing of authentication system inputs will be hereinafter described. For example, authentication strengths may be mapped into a format that represents level strengths of high, medium, low, or zero (no authentication capability) [e.g., Ah, Am, Al and An]. Trust levels may be mapped into a format representing high, medium, low or zero (non-trusted level) [e.g., Sh, Sm, Sl and Sn]. There may be a time level of t. Preference setting formats may also be used to provide inputs relating to a trust decay period (e.g., a value between -1 and 1). These values may be mapped to values over a defined range and utilized with time data, including data representing time periods between authentication inputs. Examples of these ranged values may be seen with particular reference to FIG. 13C. Further, with additional reference to FIG. 13D, after going through data mapping 1310, these data values may also be normalized by data normalization blocks 1320. As shown in FIG. 13D, various equations are shown that may be used for the normalization of authentication strengths, trust levels and time. It should be appreciated that these equations are merely for illustrative purposes.[0093] The previously described data, after mapping and normalizing, may be used to form or otherwise update a trust vector (TV) (including TV component values 273 and TV composite scores 275). The TV may vary according to the inputs (e.g., authentication strengths, trust levels, time and/or preference settings) and may vary over time between authentication events. FIG. 13E shows an example of a calculation formula to be used by calculation formula block 1330 for generating an example trust vector or trust coefficient in response to the various authentication system inputs. As shown in the example equation of FIG. 13E, these authentication inputs may include normalized time, normalized trust levels, normalized authentication strengths, etc. It should be appreciated that these equations are merely for illustrative purposes.[0094] FIG. 13F includes a graphical representation of an example trust vector (TV) that has been calculated by calculation formula block 1330 and mapped/normalized by calculation mapping block 1340 such that the TV has a value varying between 1 (high trust) and -1 (high mistrust) [y-axis] over time [x-axis], and illustrates how the trust vector may change in discrete amounts in response to specific authentication inputs (e.g., such as recovering to a high trust level after the input and identification of an authentication fingerprint). Between authentication events, the TV may vary, such as decaying according to time constant parameters that are provided. Inputs may trigger discrete steps in values lowering the trust value (e.g., such as the user connecting from an un-trusted location) or may trigger a rapid switch to a level representing mistrust, such as an event that indicates the device may be stolen (e.g., several attempts to enter a fingerprint that cannot be verified and the mobile device being at an un-trusted location).[0095] For example, looking at the graph 1350, period P1 line 1360 may indicate a high authentication strength A = 4 (e.g., authenticated fingerprint and camera iris scan match) and a high trust level format S = 4 (e.g., known location via GPS), and, as shown by line 1360, slightly decays over time. 
As another example, with reference to line 1362 in period P5, in which the authentication strength A = 2 (e.g., a medium level such as gripping via touch sensors) and the trust level equals zero, S = 0 (e.g., an un-trusted location), line 1362 shows that the trust level decays very quickly to a negative trust level (e.g., -1). As another example, line 1370 in period P11 indicates that the input authentication strength may be very low (A = 0), but the trust level remains high (e.g. S = 4), such that requested authentication input may not have been received but the mobile device is in a known location via GPS. Based upon this scenario, the trust level 1370 declines over time to zero (e.g. diminished trust but not yet negative). On the other hand, continuing with this example, as later shown at P14 line 1372, with no authentication or wrong authentication (e.g. an iris scan that is not suitable, a fingerprint scan that is not verifiable, etc.) and a decreased medium trust level (S = 2) (e.g., distance away from the known GPS location), the trust level may go to -1, in which case further authentication is required or no additional action for authentication may be taken.[0096] It should be appreciated that a wide variety of trust vectors (TVs), in view of authentication strengths and trust levels over time, may be determined in a continuous or quasi-continuous manner for authentication purposes.[0097] In some implementations, the trust broker previously described may be used in conjunction with techniques disclosed in applicant's provisional application entitled "Trust Broker for Authentication Interaction with Mobile Devices", application number 61/943,428, filed February 23, 2014, the disclosure of which is hereby incorporated by reference into the present application in its entirety for all purposes.[0098] It should be appreciated that aspects of the invention previously described may be implemented in conjunction with the execution of instructions by one or more processors of the device, as previously described. For example, processors of the mobile device and the authenticating entity may implement the functional blocks previously described and other embodiments, as previously described. Particularly, circuitry of the devices, including but not limited to processors, may operate under the control of a program, routine, or the execution of instructions to execute methods or processes in accordance with embodiments of the invention. For example, such a program may be implemented in firmware or software (e.g. stored in memory and/or other locations) and may be implemented by processors and/or other circuitry of the devices. Further, it should be appreciated that the terms processor, microprocessor, circuitry, controller, etc., refer to any type of logic or circuitry capable of executing logic, commands, instructions, software, firmware, functionality, etc.[0099] It should be appreciated that when the devices are mobile or wireless devices, they may communicate via one or more wireless communication links through a wireless network that are based on or otherwise support any suitable wireless communication technology. For example, in some aspects the wireless device and other devices may associate with a network including a wireless network. In some aspects the network may comprise a body area network or a personal area network (e.g., an ultra-wideband network). In some aspects the network may comprise a local area network or a wide area network. 
A wireless device may support or otherwise use one or more of a variety of wireless communication technologies, protocols, or standards such as, for example, 3G, LTE, Advanced LTE, 4G, CDMA, TDMA, OFDM, OFDMA, WiMAX, and WiFi. Similarly, a wireless device may support or otherwise use one or more of a variety of corresponding modulation or multiplexing schemes. A wireless device may thus include appropriate components (e.g., air interfaces) to establish and communicate via one or more wireless communication links using the above or other wireless communication technologies. For example, a device may comprise a wireless transceiver with associated transmitter and receiver components (e.g., a transmitter and a receiver) that may include various components (e.g., signal generators and signal processors) that facilitate communication over a wireless medium. As is well known, a mobile wireless device may therefore wirelessly communicate with other mobile devices, cell phones, other wired and wireless computers, Internet web-sites, etc.[00100] The teachings herein may be incorporated into (e.g., implemented within or performed by) a variety of apparatuses (e.g., devices). For example, one or more aspects taught herein may be incorporated into a phone (e.g., a cellular phone), a personal data assistant ("PDA"), a tablet computer, a mobile computer, a laptop computer, an entertainment device (e.g., a music or video device), a headset (e.g., headphones, an earpiece, etc.), a medical device (e.g., a biometric sensor, a heart rate monitor, a pedometer, an ECG device, etc.), a user I/O device, a computer, a wired computer, a fixed computer, a desktop computer, a server, a point-of-sale device, a set-top box, or any other suitable device. These devices may have different power and data requirements.[00101] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.[00102] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. 
Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.[00103] The various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.[00104] The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal or mobile device. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal or mobile device. [00105] In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software as a computer program product, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a web site, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. 
Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.[00106] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A method of operating a shared resource in a mobile device includes extracting a set of features from a plurality of subsystems of the mobile device. The set of features may be extracted from each subsystem of the plurality of subsystems requesting services from one or more shared resources of the mobile device. One or more parameters of the shared resource(s) may be determined based on the extracted set of features from the plurality of subsystems. The shared resource(s) may be operated based on the determined parameter(s).
1. A method for operating shared resources in a mobile device, comprising: extracting a set of features from a plurality of subsystems of the mobile device, each subsystem of the plurality of subsystems requesting services from at least one shared resource of the mobile device; processing workloads from the plurality of subsystems, wherein the workloads include desired performance levels for specified tasks; determining at least one parameter of the at least one shared resource based on the extracted set of features from the plurality of subsystems and the workloads; and operating the at least one shared resource based on the at least one parameter, wherein the at least one shared resource includes a processor, and the extracted set of features includes at least one of a current processor frequency, a number of instructions to be executed by the processor, a processor utilization, a measured bandwidth, or a ratio thereof. 2. The method of claim 1, wherein the at least one parameter is learned based at least in part on performance metrics associated with the workloads and power consumption of the mobile device. 3. The method of claim 1, wherein the at least one parameter is learned based at least in part on a cumulative reward comprising a combination of measured performance associated with the workloads and power consumption for processing a workload sequence. 4. An apparatus for operating shared resources in a mobile device, comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured to: extract a set of features from a plurality of subsystems of the mobile device, each of the plurality of subsystems requesting services from at least one shared resource of the mobile device; process workloads from the plurality of subsystems, wherein the workloads include desired performance levels for specified tasks; determine at least one parameter of the at least one shared resource based on the extracted set of features from the plurality of subsystems and the workloads; and operate the at least one shared resource based on the at least one parameter, wherein the at least one shared resource includes a processor, and the extracted set of features includes at least one of a current processor frequency, a number of instructions to be executed by the processor, a processor utilization, a measured bandwidth, or a ratio thereof. 5. The apparatus of claim 4, wherein the at least one processor is further configured to learn the at least one parameter. 6. The apparatus of claim 4, wherein the at least one processor is further configured to learn the at least one parameter based at least in part on a cumulative reward comprising a combination of measured performance and power consumption for processing a predetermined sequence of workloads. 7. 
An apparatus for operating a shared resource in a mobile device, comprising: means for extracting a set of features from a plurality of subsystems of the mobile device, each of the plurality of subsystems requesting services from at least one shared resource of the mobile device; means for processing workloads from the plurality of subsystems, wherein the workloads include desired performance levels for specified tasks; means for determining at least one parameter of the at least one shared resource based on the extracted set of features from the plurality of subsystems and the workloads; and means for operating the at least one shared resource based on the at least one parameter, wherein the at least one shared resource includes a processor, and the extracted set of features includes at least one of a current processor frequency, a number of instructions to be executed by the processor, a processor utilization, a measured bandwidth, or a ratio thereof. 8. The apparatus of claim 7, further comprising means for learning the at least one parameter based at least in part on performance metrics associated with the workloads and power consumption of the mobile device. 9. The apparatus of claim 7, further comprising means for learning the at least one parameter based at least in part on a cumulative reward comprising a combination of measured performance associated with the workloads and power consumption for processing a workload sequence. 10. A computer-readable medium storing computer-executable code for operating a shared resource in a mobile device, comprising code for: extracting a set of features from a plurality of subsystems of the mobile device, each of the plurality of subsystems requesting services from at least one shared resource of the mobile device; processing workloads from the plurality of subsystems, wherein the workloads include desired performance levels for specified tasks; determining at least one parameter of the at least one shared resource based on the extracted set of features from the plurality of subsystems and the workloads; and operating the at least one shared resource based on the at least one parameter, wherein the at least one shared resource includes a processor, and the extracted set of features includes at least one of a current processor frequency, a number of instructions to be executed by the processor, a processor utilization, a measured bandwidth, or a ratio thereof. 11. The computer-readable medium of claim 10, further comprising code for learning the at least one parameter based at least in part on performance metrics associated with the workloads and power consumption of the mobile device. 12. The computer-readable medium of claim 10, further comprising code for learning the at least one parameter based at least in part on a cumulative reward comprising a combination of measured performance associated with the workloads and power consumption for processing a predetermined sequence of workloads.
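The claimed steps can be pictured as the following skeleton; all names, the feature-reporting API, and the frequency heuristic are illustrative assumptions rather than the claimed learning method.

```python
# Skeleton of the claimed steps; every name and constant here is assumed.
def extract_features(subsystems):
    # Per the claims: current processor frequency, number of instructions
    # to be executed, processor utilization, measured bandwidth, or ratios.
    return [s.report_features() for s in subsystems]   # hypothetical API

def determine_parameter(features, workloads):
    # Workloads carry desired performance levels for specified tasks;
    # a learned policy would map (features, workloads) to a setting.
    demand = sum(w["desired_performance"] for w in workloads)
    return {"cpu_freq_mhz": min(2800, 800 + 200 * demand)}  # toy heuristic

def operate_shared_resource(cpu, parameter):
    cpu.set_frequency(parameter["cpu_freq_mhz"])       # hypothetical API
```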
Power State Control for Mobile Devices

Cross Reference to Related Applications

This application claims the benefit of U.S. Patent Application No. 15/713,254, filed September 22, 2017, entitled "POWER STATE CONTROL OF A MOBILE DEVICE," which is expressly incorporated by reference in its entirety.

Background

Field

Certain aspects of the present disclosure relate generally to machine learning, and more particularly to systems and methods for improving power state control of mobile devices.

Background

Mobile devices (e.g., smartphones, tablet computing devices, smart speakers, connected home cameras, Internet of Things (IoT) devices such as smart refrigerators, etc.) have many shared resources such as, for example, memory components, a network-on-chip (NOC), a central processing unit (CPU), and a graphics processing unit (GPU). Each of the shared resources can serve or be used by many applications and subsystems of the mobile device (e.g., the camera or audio system). However, each of the subsystems, and thus the shared resources, may have very different workloads and different performance specifications, yet they typically operate concurrently. As such, determining or setting the power states of various shared resources (e.g., the CPU) to service various subsystems while preserving mobile device battery is challenging.

Summary

A brief summary of one or more aspects is presented below to provide a basic understanding of these aspects. This summary is not an exhaustive overview of all contemplated aspects, and is intended neither to identify key or critical elements of all aspects nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later. Mobile devices (e.g., smartphones, tablet computing devices, smart speakers, connected home cameras, IoT devices in general (e.g., smart refrigerators), etc.) have many shared resources, such as memory components, a NOC, a CPU, and a GPU, for example. Each of the shared resources may serve or be used by many applications and subsystems (e.g., the camera or audio system). However, each of the subsystems, and thus the shared resources, may have very different workloads and different performance specifications, yet they operate concurrently. As such, determining or setting the power states of various shared resources (e.g., the CPU) to service various subsystems while preserving mobile device battery is challenging. To address issues of battery conservation and device performance, aspects of the present disclosure relate to power state management of shared resources. In an aspect of the present disclosure, a method, a computer-readable medium, and an apparatus for operating a shared resource in a mobile device are provided. The apparatus includes a memory and at least one processor coupled to the memory. The processor(s) are configured to extract a feature set from a plurality of subsystems of the mobile device. The set of features may be extracted from each of a plurality of subsystems requesting service from at least one shared resource of the mobile device. The processor(s) are further configured to determine at least one parameter of the at least one shared resource based on the feature set extracted from the plurality of subsystems. 
Additionally, the processor(s) are configured to operate the at least one shared resource based on the at least one parameter.

Additional features and advantages of the disclosure will be described hereinafter. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood when considering the following description in conjunction with the accompanying drawings. It is to be clearly understood, however, that each drawing is provided for purposes of illustration and description only and is not intended as a definition of the limits of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when read in conjunction with the accompanying drawings, in which like reference characters identify correspondingly throughout.

FIG. 1A illustrates an example implementation of power state control using a system-on-chip (SOC), including a general-purpose processor, in accordance with certain aspects of the present disclosure.

FIG. 1B illustrates an example mobile device configured for power state control in accordance with certain aspects of the present disclosure.

FIG. 2 illustrates an example implementation of a system according to aspects of the present disclosure.

FIG. 3A is a diagram illustrating a neural network according to aspects of the present disclosure.

FIG. 3B is a block diagram illustrating an exemplary deep convolutional network (DCN) according to aspects of the present disclosure.

FIG. 4 is a block diagram illustrating an example software architecture that can modularize artificial intelligence (AI) functionality in accordance with aspects of the present disclosure.

FIG. 5 is a diagram illustrating exemplary operation of a power state controller using reinforcement learning to manage shared resources.

FIG. 6 is a diagram illustrating an example architecture for power state control in accordance with aspects of the present disclosure.

FIG. 7 illustrates a method for operating shared resources in a mobile device.

DETAILED DESCRIPTION

The detailed description, set forth below in connection with the accompanying drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details in order to provide a thorough understanding of the various concepts. It will be apparent, however, to one skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.

Based on the present teachings, those skilled in the art should appreciate that the scope of the present disclosure is intended to cover any aspect of the present disclosure, whether implemented independently of or in combination with any other aspect of the present disclosure.
For example, an apparatus may be implemented or a method practiced using any number of the aspects set forth. Furthermore, the scope of the disclosure is intended to cover such an apparatus or method practiced with other structure, functionality, or both, in addition to or different from the various aspects of the disclosure set forth. It should be understood that any aspect of the disclosure disclosed may be embodied by one or more elements of a claim.

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

Although particular aspects are described herein, numerous variations and permutations of these aspects fall within the scope of this disclosure. While some benefits and advantages of the preferred aspects are mentioned, the scope of the present disclosure is not intended to be limited to particular benefits, uses, or objectives. Rather, aspects of the present disclosure are intended to be broadly applicable to different technologies, system configurations, networks, and protocols, some of which are illustrated by way of example in the accompanying drawings and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.

Power State Control

Mobile devices (e.g., smartphones, tablet computing devices, smart speakers, connected home cameras, IoT devices (such as smart refrigerators), etc.) have many shared resources. Shared resources may include devices or information that may be accessed by multiple subsystems of the mobile computing device to perform specific tasks or operations. For example, shared resources may include memory components (e.g., double data rate (DDR) synchronous dynamic random access memory (SDRAM)), the GPU, the NOC, the CPU, and the like. Each of the shared resources can serve or be used by many applications and subsystems of the mobile device, such as, for example, the camera, audio system, or application processor. However, each of the subsystems, and thus the shared resources, may have very different workloads and different performance specifications that may adversely affect the power state of the mobile device. For example, if the power state of the CPU is kept too high (e.g., via the CPU frequency setting), the CPU performance may be increased, but the battery may drain quickly, resulting in a poor user experience. As such, determining or setting the power state of a shared resource (e.g., the CPU) to service various subsystems while preserving mobile device battery is challenging.

To address these and other issues, metrics or characteristics of interest that characterize the demand placed on shared resources by the respective subsystems of a mobile device can be used in feedback control of those resources to manage their usage. According to aspects of the present disclosure, a top-down controller processes features from all subsystems through reinforcement learning, and the output of the controller can be used to manage shared resources (e.g., to control clock levels of the shared resources).
FIG. 1A illustrates an example implementation of the foregoing power state control using a system-on-chip (SOC) 100, which may include a general-purpose processor (CPU) or a multi-core general-purpose processor (CPU) 102, in accordance with certain aspects of the present disclosure. Variables (e.g., neural signals and synaptic weights), system parameters associated with a computational device (e.g., a neural network with weights), delays, frequency bin information, and task information may be stored in a memory block associated with a neural processing unit (NPU) 108, in a memory block associated with the CPU 102, in a memory block associated with the graphics processing unit (GPU) 104, in a memory block associated with the digital signal processor (DSP) 106, in a dedicated memory block 118, or may be distributed across multiple blocks. Instructions executed at the general-purpose processor 102 may be loaded from a program memory associated with the CPU 102 or may be loaded from the dedicated memory block 118.

The SOC 100 may also include additional processing blocks tailored to specific functions, such as the GPU 104, the DSP 106, a connectivity block 110 (which may include fourth generation long term evolution (4G LTE) connectivity, fifth generation wireless system (5G) connectivity, unlicensed Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, etc.) and, for example, a multimedia processor 112 that can detect and recognize gestures and can be used for streaming media playback. In one implementation, the NPU is implemented in the CPU, DSP, and/or GPU. The SOC 100 may also include a sensor processor 114, image signal processors (ISP) 116, and/or navigation 120 (which may include a global positioning system).

Each of the processing blocks of the SOC 100 may include shared resources. That is, applications and other subsystems of the mobile device included in the SOC 100 (e.g., the multimedia block 112) may require or request access to and/or services performed by the shared resources.

The SOC 100 may be based on the ARM instruction set. In one aspect of the disclosure, the instructions loaded into the general-purpose processor 102 may include code for extracting a set of features from a plurality of subsystems of the mobile device, each of the plurality of subsystems requesting services from at least one shared resource of the mobile device. The instructions loaded into the general-purpose processor 102 may also include code for determining at least one parameter of the at least one shared resource based on the set of features extracted from the plurality of subsystems. Additionally, the instructions loaded into the general-purpose processor 102 may include code for operating the at least one shared resource based on the at least one parameter.

FIG. 1B is an exemplary block diagram illustrating the architecture of a mobile device in accordance with aspects of the present disclosure. Referring to FIG. 1B, mobile device 150 includes one or more shared resources and various subsystems. The subsystems may include, for example, audio subsystem 156, sensor subsystem 158, multimedia subsystem 162, modem subsystem 164, and Wi-Fi subsystem 160. Additionally, mobile device 150 may include an applications processor 152, a graphics processing unit (GPU) 166, and a navigation subsystem 154. Each of the subsystems may access or request services from one or more shared resources 168. The shared resources may include, for example, a CPU, memory components (e.g., double data rate (DDR) memory or cache memory), or a network-on-chip (NOC). However, some subsystems can also serve as shared resources.
For example, GPU 166 may also be a shared resource invoked to perform image processing tasks or other graphics processing tasks. However, when performing image or graphics processing tasks, GPU 166 may also serve as a subsystem that requests services from another shared resource, such as the CPU, or accesses memory addresses via memory components.

According to aspects of the present disclosure, features may be extracted from each of the subsystems (e.g., multimedia subsystem 162). These features can relate to performance metrics associated with a workload. A workload can include any application, program, or related computation to be performed on a mobile device. Additionally, a workload may include an amount of processing to be performed or may relate to a desired performance level for a given task. For example, where a task involves accessing a website via a browser, the workload may include an expected data transfer speed. In another example, where the task to be performed is to display video, the workload may include or relate to a desired frame rate for displaying the video.

In some aspects, workloads from multiple subsystems may be cascaded or combined to more accurately reflect the work to be performed. For example, where the desired task is streaming video over the Internet, the task may involve multiple subsystems, including applications processor 152, Wi-Fi subsystem 160, modem subsystem 164, and GPU 166. Each of these subsystems may request services from shared resources 168 (e.g., a CPU or memory components). Additionally, multiple tasks can be executed simultaneously. For example, during video playback, text messages or calls may be received. Accordingly, multiple workloads, one for each of the tasks, may be generated.

In some aspects, the workloads for each shared resource can be categorized and/or combined. For example, workloads from applications processor 152 and GPU 166 may be classified as computing-type workloads, while workloads from modem subsystem 164 and Wi-Fi subsystem 160 may be classified as communication-type workloads.

A set of features may be extracted from each of the subsystems and used to determine parameters for operating shared resources 168 so as to reduce power consumption and/or improve processing efficiency. The features may be the same or may differ for the different subsystems. By way of example only, features of a CPU may include the current CPU frequency, CPU utilization, the number of instructions, and the number of cache (e.g., L2 cache) misses per core. Features of the DDR may include the current or selected DDR frequency, instantaneous bandwidth (IB) votes from other subsystems (an indication of the amount of latency the bus master is willing to tolerate), average bandwidth (AB) votes from other subsystems (an indication of the average throughput of data transfers the bus master expects to perform on the memory), and the DDR read/write data rate. Features of the GPU may include the selected frequency, utilization, number of instructions, and number of cache (e.g., L2 cache) misses per core. Wi-Fi/modem features may include transmit and receive packet rates and protocol meta-information (e.g., packet headers). Of course, the identified features and subsystems are exemplary only and by no means exhaustive. Similar and/or additional features may be provided for each of the subsystems of the mobile device. Features extracted from multiple subsystems can be used to determine one or more parameters for controlling a shared resource (e.g., the CPU).
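As a minimal, illustrative sketch only (the Python function and the field names below are hypothetical and not part of this disclosure), per-subsystem statistics of the kind enumerated above might be concatenated into a single feature state vector s as follows:

def build_feature_state(cpu, ddr, gpu, modem):
    # Concatenate per-subsystem statistics into one feature state vector s.
    # Each argument is a dict holding the (hypothetical) metrics named above.
    return [
        cpu["freq_mhz"], cpu["utilization"], cpu["instructions"], cpu["l2_misses"],
        ddr["freq_mhz"], ddr["ib_votes"], ddr["ab_votes"], ddr["rw_rate"],
        gpu["freq_mhz"], gpu["utilization"],
        modem["tx_pkts_per_s"], modem["rx_pkts_per_s"],
    ]

A vector assembled in this manner could serve as the feature state consumed by the controller described below.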
In some aspects, the extracted features can be collected and stored together with workload-specific information. The stored features and workload information can be used to train a computational network to learn operating parameters for operating the shared resource (e.g., shared resource 168). In some aspects, the operating parameters can be optimized such that the power consumption and/or efficiency of the shared resources, and thus of the mobile device, are improved. In one example, the features and corresponding workload information may be used with performance information (e.g., the time to complete a particular task or operation) to generate a lookup table.

FIG. 2 illustrates an example implementation of a system 200 in accordance with certain aspects of the present disclosure. As illustrated in FIG. 2, system 200 may have a plurality of local processing units 202 that may perform various operations of the methods described herein. Each local processing unit 202 may include a local state memory 204 and a local parameter memory 206 that may store parameters of a neural network. In addition, the local processing unit 202 may have a local (neuron) model program (LMP) memory 208 for storing a local model program, a local learning program (LLP) memory 210 for storing a local learning program, and a local connection memory 212. Furthermore, as illustrated in FIG. 2, each local processing unit 202 may interface with a configuration processor unit 214 that provides configurations for the local memories of the local processing unit, and with a routing connection processing unit 216 that provides routing between the local processing units 202.

Deep learning architectures perform object recognition tasks by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building useful feature representations of the input data. In this way, deep learning addresses a major bottleneck of conventional machine learning. Before the advent of deep learning, machine learning approaches to object recognition problems often relied heavily on human-engineered features, perhaps in combination with shallow classifiers. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of the feature vector components is compared to a threshold to predict which class the input belongs to. Human-engineered features can be templates or kernels customized for a specific problem domain by engineers with domain expertise. In contrast, a deep learning architecture learns to represent features similar to those a human engineer might design, but it learns through training. In addition, deep networks can learn to represent and recognize new types of features that humans may not have considered.
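To make the shallow classifier mentioned above concrete, the following minimal Python sketch (illustrative only, and not part of this disclosure) compares a weighted sum of feature vector components to a threshold:

def linear_classify(features, weights, threshold):
    # Two-class linear classifier: a weighted sum of the feature vector
    # components is compared to a threshold to predict the class.
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > threshold else 0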
Deep learning architectures can learn feature hierarchies. For example, if the first layer is presented with visual data, the first layer can learn to recognize relatively simple features, such as edges, in the input stream. In another example, if the first layer is presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. A second layer, taking the output of the first layer as input, can learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. Higher layers can learn to represent complex shapes in visual data or words in auditory data, and still higher layers can learn to recognize common visual objects or spoken phrases.

Deep learning architectures can perform particularly well when applied to problems that have a natural hierarchical structure. For example, the classification of motor vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features can be combined at higher layers in different ways to recognize cars, trucks, and airplanes.

Neural networks can be designed with various connectivity patterns. In feed-forward networks, information is passed from lower layers to higher layers, with each neuron in a given layer communicating to neurons in higher layers. As described above, hierarchical representations can be built up in successive layers of a feed-forward network. Neural networks may also have recurrent or feedback (also known as top-down) connections. In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer. A recurrent architecture can facilitate recognizing patterns that span more than one chunk of the input data delivered to the neural network in sequence. A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections can be helpful when the recognition of a high-level concept aids in discerning particular low-level features of the input.

Referring to FIG. 3A, the connections between layers of a neural network may be fully connected (302) or locally connected (304). In a fully connected network 302, a neuron in a first layer may communicate its output to every neuron in a second layer, so that each neuron in the second layer receives input from every neuron in the first layer. Alternatively, in a locally connected network 304, a neuron in the first layer may be connected to a limited number of neurons in the second layer. A convolutional network 306 may be locally connected and is further configured such that the connection strengths associated with the inputs for each neuron in the second layer are shared (e.g., 308). More generally, a locally connected layer of a network may be configured such that each neuron in the layer has the same or a similar connectivity pattern, but with connection strengths that may have different values (e.g., 310, 312, 314, and 316). The locally connected connectivity pattern may give rise to spatially distinct receptive fields in a higher layer, because the higher-layer neurons in a given region may receive inputs that are tuned through training to the properties of a restricted portion of the total input to the network.

Locally connected neural networks may be well suited to problems in which the spatial location of the input is meaningful. For example, a network 300 designed to recognize visual features from an on-board camera may develop high-level neurons with different properties depending on whether they are associated with the lower or the upper portion of the image. Neurons associated with the lower portion of an image may learn to recognize lane markings, for example, while neurons associated with the upper portion of an image may learn to recognize traffic lights, traffic signs, and the like.

A DCN may be trained with supervised learning. During training, an image, such as a cropped image 326 of a speed limit sign, may be presented to the DCN, and a "forward pass" may then be computed to produce an output 322. The output 322 may be a vector of values corresponding to features such as "sign," "60," and "100."
The network designer may want the DCN to output a high score for some of the neurons in the output feature vector, for example the ones corresponding to "sign" and "60" shown in the output 322 of the trained network 300. Before training, the output produced by the DCN is likely to be incorrect, and an error between the actual output and the target output can be calculated. The weights of the DCN can then be adjusted so that the output scores of the DCN are more closely aligned with the target.

To adjust the weights, a learning algorithm can compute a gradient vector for the weights. The gradient may indicate the amount by which the error would increase or decrease if a weight were adjusted slightly. At the top layer, the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer to a neuron in the output layer. At lower layers, the gradient may depend on the values of the weights and on the computed error gradients of the higher layers. The weights can then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as "backpropagation," because it involves a "backward pass" through the neural network.

In practice, the error gradient of the weights may be computed over a small number of examples, so that the computed gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level.

After learning, the DCN may be presented with new images 326, and a forward pass through the network may yield an output 322 that may be considered an inference or a prediction of the DCN.
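As a minimal sketch of the stochastic gradient descent update described above (illustrative only; the linear model and the squared-error loss are assumptions, not the training procedure of this disclosure), the gradient is approximated over a small batch of examples and the weights are adjusted to reduce the error:

def sgd_step(w, batch, lr=0.01):
    # One stochastic gradient descent step for a linear model w.x with
    # squared error E = 0.5 * (w.x - y)^2, averaged over a small batch.
    grad = [0.0] * len(w)
    for x, y in batch:
        err = sum(wi * xi for wi, xi in zip(w, x)) - y  # forward pass
        for i, xi in enumerate(x):
            grad[i] += err * xi / len(batch)            # approximate gradient
    return [wi - lr * gi for wi, gi in zip(w, grad)]    # adjust the weights

Repeating such steps until the error stops decreasing, or reaches a target level, corresponds to the procedure described above.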
Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution without information about the class into which each input should be categorized, RBMs are often used in unsupervised learning. Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and serve as feature extractors, while the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and serve as a classifier.

A deep convolutional network (DCN) is a network of convolutional layers, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning, in which both the input and the output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent.

A DCN may be a feed-forward network. In addition, as described above, the connections from a neuron in the first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of a DCN can be exploited for fast processing. The computational burden of a DCN may be much less than that of, for example, a similarly sized neural network comprising recurrent or feedback connections.

The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. The outputs of the convolutional connections may be considered to form feature maps in the subsequent layers 318 and 320, with each element of the feature map (e.g., 320) receiving input from a range of neurons in the previous layer (e.g., 318) and from each of the multiple channels. The values in the feature maps may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled (which corresponds to downsampling) and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map.

The performance of deep learning architectures may increase as more labeled data points become available or as computational power increases. Modern deep neural networks are routinely trained with computing resources thousands of times greater than what was available to a typical researcher just fifteen years ago. New architectures and training paradigms can further boost the performance of deep learning. Rectified linear units may reduce a training issue known as vanishing gradients. New training techniques may reduce over-fitting and thus enable larger models to achieve better generalization. Encapsulation techniques may abstract data in a given receptive field and further boost overall performance.

FIG. 3B is a block diagram illustrating an exemplary deep convolutional network 350. The deep convolutional network 350 may include multiple different types of layers based on connectivity and weight sharing. As shown in FIG. 3B, the exemplary deep convolutional network 350 includes multiple convolution blocks (e.g., C1 and C2). Each of the convolution blocks may be configured with a convolution layer, a normalization layer (LNorm), and a pooling layer. The convolution layers may include one or more convolutional filters, which may be applied to the input data to generate a feature map. Although only two convolution blocks are shown, the present disclosure is not so limited; instead, any number of convolution blocks may be included in the deep convolutional network 350 according to design preference. The normalization layer may be used to normalize the output of the convolution filters. For example, the normalization layer may provide whitening or lateral inhibition. The pooling layer may provide down-sampling aggregation over space for local invariance and dimensionality reduction.

For example, the parallel filter banks of a deep convolutional network may be loaded onto the CPU 102 or GPU 104 of the SOC 100, based on the ARM instruction set, to achieve high performance and low power consumption. In alternative embodiments, the parallel filter banks may be loaded onto the DSP 106 or the ISP 116 of the SOC 100.
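For illustration only, a network of the general shape of the deep convolutional network 350 of FIG. 3B could be sketched in Python with PyTorch as follows; the channel counts, the 32x32 three-channel input, and the ten-class output are assumptions rather than parameters of this disclosure:

import torch.nn as nn

# Two convolution blocks (convolution, LNorm, pooling), followed by fully
# connected layers FC1/FC2 and a logistic-regression-style output layer,
# loosely mirroring FIG. 3B.
dcn_350 = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # block C1: convolution
    nn.LocalResponseNorm(size=5),                 # LNorm (lateral inhibition)
    nn.MaxPool2d(2),                              # pooling / downsampling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # block C2
    nn.LocalResponseNorm(size=5),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64),                    # FC1 (for a 32x32 input)
    nn.ReLU(),
    nn.Linear(64, 10),                            # FC2
    nn.LogSoftmax(dim=1),                         # logistic regression (LR) layer
)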
Additionally, the DCN may have access to other processing blocks that may be present on the SOC, such as the processing blocks dedicated to sensors 114 and navigation 120.

The deep convolutional network 350 may also include one or more fully connected layers (e.g., FC1 and FC2). The deep convolutional network 350 may further include a logistic regression (LR) layer. Between each layer of the deep convolutional network 350 are weights (not shown) that are to be updated. The output of each layer may serve as input to a succeeding layer in the deep convolutional network 350, to learn hierarchical feature representations from the input data (e.g., image, audio, video, sensor data, and/or other input data) supplied at the first convolution block C1.

FIG. 4 is a block diagram illustrating an exemplary software architecture 400 that may modularize artificial intelligence (AI) functions. Using this architecture, an application 402 may be designed such that various processing blocks of an SOC 420 (e.g., a CPU 422, a DSP 424, a GPU 426, and/or an NPU 428) perform supporting computations during run-time operation of the application 402.

The AI application 402 may be configured to call functions defined in a user space 404 that may, for example, provide for the management of shared resources. For example, the AI application 402 may adjust the clock frequency, and in turn the power consumption, of a shared resource (e.g., the CPU) differently based on a set of features collected from multiple subsystems (e.g., image processing) in association with a workload. The AI application 402 may make a request to compiled program code associated with a library defined in an application programming interface (API) 406 to determine a parameter value (e.g., a clock frequency) for a shared resource. This request may ultimately rely on the output of a deep neural network configured to provide a parameter estimate based, for example, on the workload and the features collected from each of the subsystems (e.g., the current CPU frequency, the DDR clock frequency, the GPU utilization, and the Wi-Fi packet rate).

A run-time engine 408, which may be compiled code of a runtime framework, may further be accessible to the AI application 402. The AI application 402 may cause the run-time engine, for example, to request power management at particular time intervals or triggered by an event detected by the user interface of the application. When enabling power management of the mobile device, the run-time engine may in turn send a signal to an operating system 410, such as a Linux kernel 412, running on the SOC 420. The operating system 410, in turn, may cause computations to be performed on the CPU 422, the DSP 424, the GPU 426, the NPU 428, features to be extracted therefrom, or some combination thereof. The CPU 422 may be accessed directly by the operating system, while the other processing blocks may be accessed through drivers, such as drivers 414-418 for the DSP 424, the GPU 426, or the NPU 428. In an illustrative example, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU 422 and the GPU 426, or may be run on the NPU 428, if present.

FIG. 5 is a diagram 500 illustrating exemplary operation of a power state controller using reinforcement learning to manage shared resources. According to aspects of the present disclosure, reinforcement learning techniques may be used to determine parameters for controlling power states in one or more shared resources.
Reinforcement learning techniques may include cross-entropy methods, policy gradients, actor-critic methods, trust region policy optimization, under-appreciated reward exploration, and other reinforcement learning techniques. In the example of FIG. 5, the workload component 506 provides workload information from each of the applications, which can be concatenated and provided to the power state controller 504 used to control the power state of the shared resources at runtime. Additionally, the workload information can also be provided to the reinforcement learning component 502 for parameter training. Parameter training can be performed offline or online. The reinforcement learning component 502 can generate a linear map from feature states to frequencies. This linear mapping can be expressed as f = As + b, where f is the frequency of the shared resource(s) (e.g., the CPU and DDR frequencies), s is the feature state of the subsystems (e.g., the application processor), and θ = (A, b) are the runtime model parameters. In some aspects, the reinforcement learning component 502 can include a linear map for each shared resource.

The kth entry of the runtime model parameters θ at iteration t can be drawn according to the following distribution:

θk ~ N(μk,t, σk,t²)

where μk,t and σk,t are the mean and standard deviation of the kth model parameter at iteration t. For example, the model parameters may be initialized to random values or other initial values (e.g., 1). Using the model parameters, the operating parameters of the shared resource can be determined. In turn, the shared resource processes a given workload. After each iteration, power and performance can be determined based on the workload. This procedure can be repeated for nθ samples of θ drawn from the joint distribution pt, over workloads w1, w2, ... wn, to estimate the cumulative reward rt on the training set S(w1), S(w2), ... S(wn). In some aspects, the cumulative reward can be determined based on a combination of the measured performance (e.g., workload completion time) and the power used to process the given workload sequence.

After each iteration, the mean and standard deviation can be updated as follows:

μk,t+1 = (1/|I|) Σi∈I θk(i)

σk,t+1² = (1/|I|) Σi∈I (θk(i) − μk,t+1)²

where I corresponds to the top p·n values, and p is the selection ratio (e.g., 10% of the values of the parameter set θ). For example, the top p·n values may correspond to the model parameters that yield the best performance, where performance may include, for example, completion time, power consumption, or a combination thereof. In some aspects, the update can be performed for each of the shared resources. Thus, at runtime, for a given workload, the power state controller 504 can calculate the operating parameters for each of the shared resources (e.g., the CPU and DDR frequencies).
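For illustration, one iteration of the cross-entropy-style parameter search described above might be sketched in Python as follows; evaluate() is a hypothetical stand-in for measuring the cumulative reward rt of a sampled θ over a workload sequence, and is not an interface of this disclosure:

import numpy as np

def cem_update(mu, sigma, evaluate, n=32, p=0.1):
    # Draw n parameter samples theta ~ N(mu, sigma^2), entrywise, score
    # each sample with the reward function, and refit mu and sigma to the
    # elite top fraction p of samples.
    thetas = np.random.normal(mu, sigma, size=(n, mu.size))
    rewards = np.array([evaluate(t) for t in thetas])
    elite = thetas[np.argsort(rewards)[-max(1, int(p * n)):]]
    return elite.mean(axis=0), elite.std(axis=0)  # mu_{t+1}, sigma_{t+1}

At run time, the learned mean parameters θ = (A, b) would be reshaped into the matrix A and offset b of the linear map f = As + b described above.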
FIG. 6 is a diagram 600 illustrating an example architecture for power state control in accordance with aspects of the present disclosure. Referring to FIG. 6, the example architecture 600 includes one or more applications 602 operating on a mobile device (e.g., mobile device 150). For example, the applications 602 may include a browser, a video game, or an application establishing a Wi-Fi connection to a local area network. Each of the applications 602 can interface and communicate with an operating system (e.g., kernel) 604, which can coordinate resources to perform the tasks requested via the applications. The operating system 604 may be configured with components for managing one or more shared resources 606. Each of the applications may request services from one or more shared resources 606 to perform application-specific tasks. By way of example only, the shared resources may include CPU, DDR, GPU, modem, and Wi-Fi components.

The feature state acquisition component 608 can extract or otherwise receive feature states (e.g., cache miss counts, CPU frequency) from each subsystem in response to a request for service from an application. In some aspects, the feature states can be received from each subsystem associated with an application's request for service. The feature state information can be provided to a frequency determination component 610 to determine an operating frequency for one or more shared resources based on model parameters (e.g., A, b) retrieved from a model component 614. In some aspects, the operating frequency can be determined by cascading the workloads for each of the shared resources. The extracted feature states of the subsystems can be used with the workload to learn the model parameters. For example, the model parameters can be learned via reinforcement learning techniques. The frequency determination component 610 can calculate operating parameters for operating each of the shared resources. In some aspects, the operating parameters can be determined so as to optimize the operation of the shared resources to improve performance (e.g., to reduce power consumption or completion time, or to increase efficiency or speed). Although in this example the operating parameter is a frequency, the disclosure is not so limited. Rather, other operating parameters can additionally or alternatively be determined, and in some aspects, optimized.

The determined frequency or frequencies can in turn be provided to a frequency control component 612 to manage the operation of the one or more shared resources 606.

FIG. 7 illustrates a method 700 for operating shared resources in a mobile device in accordance with aspects of the present disclosure. In block 702, the method may optionally learn a mapping of features to parameters for operating the shared resource. The mapping, which may be linear or nonlinear, may be learned via reinforcement learning techniques. For example, the mapping can be learned using reinforcement learning techniques such as cross-entropy methods, policy gradients, actor-critic methods, value iteration methods, trust region policy optimization, under-appreciated reward exploration, and other learning techniques (e.g., supervised learning approaches).

In block 704, the method extracts a feature set from a plurality of subsystems of the mobile device. The set of features may be extracted from each of a plurality of subsystems requesting services from at least one shared resource. As shown in FIG. 1B, shared resource 168 may receive requests for services from multiple subsystems (e.g., multimedia subsystem 162, applications processor 152, and GPU 166). Each of the subsystems may provide a feature set. For example, the extracted feature set may include the current processor frequency, the number of instructions to be executed by the processor, the processor utilization, a measured bandwidth, or a ratio thereof.

In block 706, the method determines at least one parameter of the at least one shared resource based on the set of features extracted from the plurality of subsystems. For example, as shown in FIG. 6, the feature state information for each subsystem can be used by the frequency determination component 610 to determine an operating frequency for one or more shared resources.
The frequency determination component can perform a mapping from the extracted features to the model parameters, which can in turn be used to compute the operating parameters for the shared resource.

In some aspects, the parameters (e.g., operating parameters such as CPU frequency) can be learned based on performance metrics associated with the workload and the power consumption of the mobile device. The performance metrics may include specified goals for an application (e.g., the frame rate for video playback or the completion time for processing). In some aspects, the parameters can be learned based on a cumulative reward. The cumulative reward may be based on a combination of the measured performance associated with a workload and the power consumption for processing a predetermined sequence of workloads. In some aspects, a performance reward can be calculated and averaged for each workload. Alternatively, a power reward may be calculated upon completion of processing of the predetermined sequence of workloads. Further, the operating parameters and model parameters may be learned using, for example, a neural network such as the DCN 350.

Additionally, in block 708, the method operates the at least one shared resource based on the at least one parameter. As shown in the example of FIG. 6, the determined frequency or frequencies can be provided to the frequency control component 612 to manage the operating frequency of the shared resource.

In one configuration, a machine learning model is configured to extract a set of features from a plurality of subsystems of the mobile device, each of the plurality of subsystems requesting services from at least one shared resource of the mobile device. The machine learning model is further configured to determine at least one parameter of the at least one shared resource based on the feature set extracted from the plurality of subsystems. Additionally, the machine learning model is configured to operate the at least one shared resource based on the at least one parameter. The model includes extracting means, determining means, and/or operating means. In one aspect, the extracting means, determining means, and/or operating means may be the general-purpose processor 102, the program memory associated with the general-purpose processor 102, the memory block 118, the local processing unit 202, and/or the routing connection processing unit 216 configured to perform the functions recited. In another configuration, the aforementioned means may be any module or any apparatus configured to perform the functions recited by the aforementioned means.

According to certain aspects of the present disclosure, each local processing unit 202 may be configured to determine parameters of the model based upon one or more desired functional features of the model, and to develop the one or more functional features towards the desired functional features as the determined parameters are further adapted, tuned, and updated.

In some aspects, the method 700 may be performed by the SOC 100 (FIG. 1) or the system 200 (FIG. 2). That is, by way of example and not limitation, each element of the method 700 may be performed by the SOC 100 or the system 200, or by one or more processors (e.g., CPU 102 and local processing unit 202) and/or other components included therein.
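As a minimal run-time sketch of blocks 704 through 708 under the linear-map assumption discussed above (read_feature_states and set_frequencies are hypothetical platform hooks, not interfaces of this disclosure):

import numpy as np

def control_step(A, b, read_feature_states, set_frequencies):
    s = np.asarray(read_feature_states())  # block 704: extract feature states
    f = A @ s + b                          # block 706: determine parameters
    set_frequencies(f)                     # block 708: operate the resource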
The various operations of the methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or a processor. Generally, where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.

As used herein, the term "determining" encompasses a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining, and the like. Additionally, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory), and the like. Furthermore, "determining" may include resolving, selecting, choosing, establishing, and the like.

As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.

The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.

The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device.
The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits, such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art and therefore will not be described any further.

The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, random access memory (RAM), flash memory, read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer program product. The computer program product may comprise packaging materials.

In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as may be the case with cache and/or general register files. Although the various components discussed may be described as having a specific location, such as a local component, they may also be configured in various ways, such as certain components being configured as part of a distributed computing system.

The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described herein.
As another alternative, the processing system may be implemented with an application specific integrated circuit (ASIC) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.

The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When the functionality of a software module is referred to below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Furthermore, it should be appreciated that aspects of the present disclosure result in improvements to the functioning of processors, computers, machines, or other systems implementing such aspects.

If implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media).
In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.

Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.

Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, the various methods described herein can be provided via storage means (e.g., RAM, ROM, or a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.
A page processing circuit (1040) includes a memory (1034) for pages, a processor (1030) coupled to the memory, and a page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics. Processes of manufacture, processes of operation, circuits, devices, telecommunications products, wireless handsets and systems are also disclosed.
CLAIMS What is claimed is: 1. A page processing circuit comprising: a memory for pages; a processor coupled to said memory; and a page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics. 2. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes a page access counter for a time-varying page-specific entry that is settable to an initial value in response to loading a page into the memory, and resettable to a value approximating the initial value in response to a memory access to that page. 3. The page processing circuit claimed in Claim 2, wherein the page access counter is operable to automatically change in value in a progressive departure from the initial value in response to a memory access to a page other than a page to which the counter value pertains, the page access counter operable to contribute to the usage statistics. 4. The page processing circuit claimed in Claim 1, wherein the page wiping advisor includes a concatenation case table having a page-specific entry formed from a corresponding page-specific entry from a page access table, and from a page type entry and from an entry indicating whether the page has been written. 5. The page processing circuit claimed in Claim 4, wherein the page wiping advisor includes a conversion circuit responsive to the concatenation case table to generate a page priority code for each page. 6. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable to generate a page priority code having a singleton bit value accompanied by complement bit values, the singleton bit value having a position across the page priority code representing page priority. 7. The page processing circuit claimed in Claim 6, wherein the advisor circuit has a detector to sort the page priority codes for an extreme position of the singleton bit value. 8. The page processing circuit claimed in Claim 6, wherein the advisor circuit is operable to identify a page to wipe by the singleton bit value in its priority code being in an extreme position indicative of highest wiping priority compared to priority codes of other pages. 9. The page processing circuit claimed in Claim 1, wherein the page wiping advisor includes a priority sorting table for page-specific wiping priority codes. 10. The page processing circuit claimed in Claim 1, wherein the page wiping advisor includes a priority sorting circuit operable to identify at least one page in a priority sorting table having a highest priority for page wiping. 11. The page processing circuit claimed in Claim 1, wherein the page wiping advisor includes a priority sorting circuit and a page selection logic fed by the priority sorting circuit for selecting a page in the memory to wipe. 12. The page processing circuit claimed in Claim 1, wherein the page type includes code and data types of pages. 13. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes a priority code generating circuit responsive to the page type and usage statistic for a page. 14. The page processing circuit claimed in Claim 13, wherein the advisor circuit further includes a priority detector circuit coupled to the priority code generating circuit and operable to identify at least one page to wipe based on priority code. 15. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable to prioritize pages of one page type and separately prioritize pages of another page type. 
16. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes an allocation circuit operable to allocate page space in the memory for a first type of page and for a second type of page.
17. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes an allocation circuit operable to dynamically respond to page swaps by page type, to allocate page space in the memory.
18. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes a register for holding page wiping advice, the register coupled to the processor.
19. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes an interrupt coupled to the processor.
20. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes a page access counter and a usage level encoder operable to generate a usage level code in response to the page access counter.
21. The page processing circuit claimed in Claim 1, further comprising a cryptographic circuit coupled to the memory and operable to perform a cryptographic operation on a page identified by the advisor circuit.
22. The page processing circuit claimed in Claim 1, further comprising a secure state machine situated on a single integrated circuit chip with the processor and the memory, the secure state machine monitoring accesses to the memory, whereby the memory has security.
23. The page processing circuit claimed in Claim 1, wherein the advisor circuit includes a page access counter operable to count both read and write accesses to respective pages in the memory.
24. The page processing circuit claimed in Claim 1, further comprising an instruction bus and a data bus coupled to the memory, and wherein the advisor circuit is responsive to both the instruction bus and the data bus to form the usage statistics.
25. The page processing circuit claimed in Claim 1, further comprising an instruction bus and a data bus both coupled between the memory and the processor, and further comprising a third bus, wherein the advisor circuit is coupled to both the instruction bus and the data bus and is additionally coupled by the third bus to the processor.
26. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable to prioritize an unmodified page in the memory as having more priority for wiping than a modified page.
27. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable to prioritize a code page in the memory as having more priority for wiping than a data page.
28. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable to prioritize a first page that has one level of use in the memory as having more priority for wiping than a second page that has another level indicative of greater use in the memory.
29. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable, when more than one page has the highest page wiping priority, to select a page to wipe from the pages having the highest page priority.
30. The page processing circuit claimed in Claim 1, wherein the advisor circuit is operable, when all pages have the lowest page wiping priority, to select a page to wipe from the pages.
31. The page processing circuit claimed in Claim 1, wherein the memory sometimes has an empty page and an occupied page, and the advisor circuit is operable, when the memory has an empty page, to bypass wiping an occupied page.
32. A page processing method for use with a memory having pages, the method comprising: representing a page by a first entry indicating whether the page is modified or not; and further representing the page by a second entry that is set to an initial value by storing a page corresponding to that entry in the memory, reset to a value approximating the initial value in response to a memory access to that page, and changed in value by some amount in response to an access to another page in the memory other than the page to which the second entry pertains.
33. The page processing method claimed in Claim 32, further comprising generating a page priority code for the page from the first and second entries.
34. The page processing method claimed in Claim 32, further comprising generating a plurality of page priority codes respectively corresponding to at least some of the pages in the memory, each page priority code derived from the first entry and the second entry pertaining to each of the at least some pages, and identifying at least one page having a highest priority for wiping from the page priority codes.
35. The page processing method claimed in Claim 34 for use with a second memory having a larger capacity than the first-mentioned memory, further comprising demand paging between the memory and the second memory based on the page priority codes.
36. The page processing method claimed in Claim 32, further comprising swapping out the page, based on the first and second entries, to another memory.
37. The page processing method claimed in Claim 32, further comprising performing a cryptographic operation on the page based on the first and second entries.
38. A telecommunications unit comprising: a telecommunications modem; a microprocessor coupled to the telecommunications modem; secure demand paging processing circuitry coupled to the microprocessor and including a secure internal memory for pages; a less-secure, external memory larger than the secure internal memory; a secure page wiping advisor for prioritizing pages based both on page type and usage statistics; and a user interface coupled to the microprocessor, whereby the telecommunications unit has effectively-increased space for secure applications.
39. The telecommunications unit claimed in Claim 38, wherein the secure page wiping advisor represents each of the pages by a respective entry that is set to an initial value by storing a page corresponding to that entry in the memory, reset to a value approximating the initial value in response to a memory access to that page, and changed in value by some amount in response to an access to another page in the memory other than the page to which the entry pertains, whereby a usage statistic is obtained.
40. The telecommunications unit claimed in Claim 38, further comprising a digital video interface and an encrypted digital rights management application securely demand paged by the secure demand paging processing circuitry.
41. The telecommunications unit claimed in Claim 38, wherein the external memory includes a flash memory and a DRAM, the microprocessor operable to initially load the DRAM with pages from the flash memory and responsive to the wiping advisor to swap pages between the DRAM and the secure internal memory.
42. The telecommunications unit claimed in Claim 38, wherein the user interface and modem provide functionality selected from the group consisting of 1) mobile phone handset, 2) personal digital assistant (PDA), 3) wireless local area network (WLAN) gateway, 4) personal computer (PC), 5) WLAN access point, 6) set top box, 7) internet appliance, 8) entertainment device, and 9) base station.
43. A process of manufacturing an integrated circuit comprising: preparing a particular design of a page processing circuit including a memory for pages, a processor coupled to the memory, and a page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics; verifying the design of the page processing circuit in simulation; and manufacturing to produce a resulting integrated circuit according to the verified design.
44. A process of manufacturing a telecommunications unit comprising: preparing a particular design of the telecommunications unit having a telecommunications modem, a microprocessor coupled to the telecommunications modem, secure demand paging processing circuitry coupled to the microprocessor and including a secure internal memory for pages, a less-secure, external memory larger than the secure internal memory, a secure page wiping advisor for prioritizing pages based both on page type and usage statistics and at least one wiping advisor parameter, and a user interface coupled to the microprocessor; testing the design of the page processing circuit and adjusting the wiping advisor parameter for increased page wiping efficiency; and manufacturing to produce a resulting telecommunications unit according to the tested and adjusted design.
PAGE PROCESSING CIRCUITS, DEVICES, METHODS AND SYSTEMS FOR SECURE DEMAND PAGING AND OTHER OPERATIONS
This invention is in the field of electronic computing hardware and software and communications, and is more specifically directed to improved processes, circuits, devices, and systems for page processing and other information and communication processing purposes, and processes of making them. Without limitation, the background is further described in connection with demand paging for communications processing.
BACKGROUND
Wireline and wireless communications, of many types, have gained increasing popularity in recent years. The personal computer with a wireline modem, such as a DSL (digital subscriber line) modem or cable modem, communicates with other computers over networks. The mobile wireless (or "cellular") telephone has become ubiquitous around the world. Mobile telephony has recently begun to communicate video and digital data, and voice over packet (VoP or VoIP), in addition to cellular voice. Wireless modems, for communicating computer data over a wide area network using mobile wireless telephone channels and techniques, are also available.
Wireless data communication in wireless local area networks (WLAN), such as that operating according to the well-known IEEE 802.11 standard, has become popular in a wide range of installations, ranging from home networks to commercial establishments. Short-range wireless data communication according to the "Bluetooth" technology permits computer peripherals to communicate with a personal computer or workstation within the same room. Numerous other wireless technologies exist and are emerging.
Security techniques are used to improve the security of retail and other business commercial transactions in electronic commerce and to improve the security of communications wherever personal and/or commercial privacy is desirable. Security is important in both wireline and wireless communications.
As computer and communications applications with security become larger and more complex, a need has arisen for technology that inexpensively handles large amounts of software program code and data in a secure manner, such as in pages for those applications, without necessarily requiring substantial amounts of additional expensive on-chip memory for a processor to handle those applications. Processors of various types, including DSP (digital signal processing) chips, RISC (reduced instruction set computing) and/or other integrated circuit devices, are important to these systems and applications. Constraining or reducing the cost of manufacture and providing a variety of circuit and system products with performance features for different market segments are important goals in DSPs, integrated circuits generally, and system-on-a-chip (SOC) design.
Further alternative and advantageous solutions would, accordingly, be desirable in the art.
SUMMARY
Generally and in one form of the invention, a page processing circuit includes a memory for pages, a processor coupled to the memory, and a page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics.
Generally, another form of the invention involves a page processing method for use with a memory having pages.
The method includes representing a page by a first entry indicating whether the page is modified or not, and further representing the page by a second entry that is set to an initial value by storing a page corresponding to that entry in the memory, reset to a value approximating the initial value in response to a memory access to that page, and changed in value by some amount in response to an access to another page in the memory other than the page to which the second entry pertains.
Generally, a still further form of the invention involves a telecommunications unit including a telecommunications modem, a microprocessor coupled to the telecommunications modem, and secure demand paging processing circuitry coupled to the microprocessor. The secure demand paging processing circuitry includes a secure internal memory for pages, a less-secure, external memory larger than the secure internal memory, and a secure page wiping advisor for prioritizing pages based both on page type and usage statistics. The unit further has a user interface coupled to the microprocessor. All the foregoing provides a telecommunications unit having effectively-increased space for secure applications.
Generally, a yet further form of the invention involves a process of manufacturing an integrated circuit including preparing a particular design of a page processing circuit including a memory for pages, a processor coupled to the memory, and a page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics; verifying the design of the page processing circuit in simulation; and manufacturing to produce a resulting integrated circuit according to the verified design.
Generally, another further form of the invention involves a process of manufacturing a telecommunications unit including preparing a particular design of the telecommunications unit having a telecommunications modem, a microprocessor coupled to the telecommunications modem, secure demand paging processing circuitry coupled to the microprocessor and including a secure internal memory for pages, a less-secure, external memory larger than the secure internal memory, a secure page wiping advisor for prioritizing pages based both on page type and usage statistics and at least one wiping advisor parameter, and a user interface coupled to the microprocessor; testing the design of the page processing circuit and adjusting the wiping advisor parameter for increased page wiping efficiency; and manufacturing to produce a resulting telecommunications unit according to the tested and adjusted design.
Other forms of the invention involving processes of manufacture, processes of operation, circuits, devices, telecommunications products, wireless handsets and systems are disclosed and claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a pictorial diagram of a communications system including system blocks, for example a cellular base station, a WLAN AP (wireless local area network access point), a WLAN gateway, a personal computer, and two cellular telephone handsets, any one, some or all of the foregoing improved according to the invention.
FIG. 2 is a block diagram of inventive integrated circuit chips for use in the blocks of the communications system of FIG. 1.
FIG. 3 is a block diagram of inventive hardware and process blocks for selectively operating one or more of the chips of FIG. 2 for the system blocks of FIG. 1.
FIG. 4 is a partially-block, partially data structure diagram for illustrating an inventive process and circuit for secure demand paging (SDP).
FIG. 5 is a block diagram further illustrating an inventive process and circuit for secure demand paging performing a Swap In.
FIG. 6 is a block diagram further illustrating an inventive process and circuit for secure demand paging performing a Swap Out.
FIG. 7 is a block diagram further illustrating an inventive process and circuit for secure demand paging with encryption and DMA (direct memory access).
FIG. 8 is a block diagram further illustrating an inventive process and circuit for secure demand paging with a hash.
FIG. 9 is a block diagram of an inventive integrated circuit for inventively advising and selecting which page(s) to wipe in a demand paging process.
FIG. 10 is a block diagram of inventive registers, data structures and operations for inventively advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9.
FIG. 11 is a block diagram of another inventive embodiment of registers, data structures and operations for inventively advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9.
FIGS. 12A and 12B are flow diagrams of inventive process embodiments for data structures and operations for inventively generating statistics, advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9. FIGS. 12A and 12B are two halves of a composite flow.
FIG. 13 is a block diagram of a further inventive embodiment of registers, data structures and operations for inventively advising and selecting which page(s) to wipe in the demand paging process performed by the inventive integrated circuit of FIG. 9.
FIG. 14 is a flow diagram of an inventive process of manufacture of various embodiments of FIGS. 1-13.
DETAILED DESCRIPTION OF THE EMBODIMENTS
In FIG. 1, an improved communications system 1000 has system blocks as described next. Any or all of the system blocks, such as cellular mobile telephone and data handsets 1010 and 1010', a cellular (telephony and data) base station 1050, a WLAN AP (wireless local area network access point, IEEE 802.11 or otherwise) 1060, a Voice WLAN gateway 1080 with user voice over packet telephone 1085 (not shown), and a voice-enabled personal computer (PC) 1070 with another user voice over packet telephone 1055 (not shown), communicate with each other in communications system 1000. Each of the system blocks 1010, 1010', 1050, 1060, 1070, 1080 is provided with one or more PHY physical layer blocks and interfaces as selected by the skilled worker in various products, for DSL (digital subscriber line broadband over twisted pair copper infrastructure), cable (DOCSIS and other forms of coaxial cable broadband communications), premises power wiring, fiber (fiber optic cable to premises), and Ethernet wideband network.
Cellular base station 1050 two-way communicates with the handsets 1010, 1010', with the Internet, with cellular communications networks and with the PSTN (public switched telephone network). In this way, advanced networking capability for services, software, and content, such as cellular telephony and data, audio, music, voice, video, e-mail, gaming, security, e-commerce, file transfer and other data services, internet, world wide web browsing, TCP/IP (transmission control protocol/Internet protocol), voice over packet and voice over Internet protocol (VoP/VoIP), and other services accommodates and provides security for secure utilization and entertainment appropriate to the just-listed and other particular applications. The embodiments, applications and system blocks disclosed herein are suitably implemented in fixed, portable, mobile, automotive, seaborne, and airborne communications, control, set top box, and other apparatus. The personal computer (PC) 1070 is suitably implemented in any form factor such as desktop, laptop, palmtop, organizer, mobile phone handset, PDA personal digital assistant, internet appliance, wearable computer, personal area network, or other type.
For example, handset 1010 is improved for selectively determinable security and economy when manufactured. Handset 1010 remains interoperable and able to communicate with all other similarly improved and unimproved system blocks of communications system 1000. On a cell phone printed circuit board (PCB) 1020 in handset 1010, there is provided a higher-security processor integrated circuit 1022, an external flash memory and SDRAM 1024, and a serial interface 1026. Serial interface 1026 is suitably a wireline interface, such as a USB interface connected by a USB line to the personal computer 1070 when the user desires, for reception of software, intercommunication and updating of information between the personal computer 1070 (or other originating sources external to the handset 1010) and the handset 1010. Such intercommunication and updating also occur automatically and/or on request via a processor such as a cellular modem, WLAN, Bluetooth, or other wireless or wireline modem processor and physical layer (PHY) circuitry 1028.
Processor integrated circuit 1022 includes at least one processor (or central processing unit CPU) block 1030 coupled to an internal (on-chip read-only memory) ROM 1032, an internal (on-chip random access memory) RAM 1034, and an internal (on-chip) flash memory 1036. A security logic circuit 1038 is coupled to secure-or-general-purpose-identification value (Security/GPI) bits 1037 of a non-volatile one-time alterable Production ID register or array of electronic fuses (E-Fuses).
Depending on the Security/GPI bits, boot code residing in ROM 1032 responds differently to a Power-On Reset (POR) circuit 1042 and to a secure watchdog circuit 1044 coupled to processor 1030. A device-unique security key is suitably also provided in the E-fuses or downloaded to other non-volatile, difficult-to-alter parts of the cell phone unit 1010.
It will be noted that the words "internal" and "external" as applied to a circuit or chip respectively refer to being on-chip or off-chip of the applications processor chip 1022. All items are assumed to be internal to an apparatus (such as a handset, base station, access point, gateway, PC, or other apparatus) except where the words "external to" are used with the name of the apparatus, such as "external to the handset."
ROM 1032 provides a boot storage having boot code that is executable in at least one type of boot sequence. One or more of RAM 1034, internal flash 1036 and external flash 1024 are also suitably used to supplement ROM 1032 for boot storage purposes.
Secure Demand Paging (SDP) circuitry 1040 effectively multiplies the memory space that secure applications can occupy. Processor 1030 is an example of circuitry coupled to the Secure Demand Paging block 1040 to execute a process defined by securely stored code and data from a Secure RAM 1034 as if the secure RAM were much larger, by using SDRAM 1024. As described further herein, SDP circuitry 1040 includes real-estate-efficient circuitry for determining which secure RAM memory page to wipe, or make available, for a new page of code and/or data for a secure application.
FIG. 2 illustrates inventive integrated circuit chips including chips 1100, 1200, 1300, 1400, 1500 for use in the blocks of the communications system 1000 of FIG. 1. The skilled worker uses and adapts the integrated circuits to the particular parts of the communications system 1000 as appropriate to the functions intended. For conciseness of description, the integrated circuits are described with particular reference to use of all of them in the cellular telephone handsets 1010 and 1010' by way of example.
It is contemplated that the skilled worker uses each of the integrated circuits shown in FIG. 2, or such selection from the complement of blocks therein provided into appropriate other integrated circuit chips, or provided into one single integrated circuit chip, in a manner optimally combined or partitioned between the chips, to the extent needed by any of the applications supported by the cellular telephone base station 1050, personal computer(s) 1070 equipped with WLAN, WLAN access point 1060 and Voice WLAN gateway 1080, as well as cellular telephones, radios and televisions, Internet audio/video content players, fixed and portable entertainment units, routers, pagers, personal digital assistants (PDA), organizers, scanners, faxes, copiers, household appliances, office appliances, combinations thereof, and other application products now known or hereafter devised in which the increased, partitioned or selectively determinable advantages next described are desired.
In FIG. 2, an integrated circuit 1100 includes a digital baseband (DBB) block 1110 that has a RISC processor (such as a MIPS core, ARM processor, or other suitable processor) and a digital signal processor such as one from the TMS320C55x(TM) DSP generation from Texas Instruments Incorporated or other digital signal processor (or DSP core) 1110, communications software and security software for any such processor or core, security accelerators 1140, and a memory controller.
Security accelerators block 1140 provides additional computing power, such as for hashing and encryption, that is accessible, for instance, when the integrated circuit 1100 is operated in a security level enabling the security accelerators block 1140 and affording types of access to the security accelerators depending on the security level and/or security mode. The memory controller interfaces the RISC core and the DSP core to Flash memory and SDRAM (synchronous dynamic random access memory). On-chip RAM 1120 and on-chip ROM 1130 are also accessible to the processors 1110 for providing sequences of software instructions and data thereto. A security logic circuit 1038 of FIGS. 1 and 2 has a secure state machine (SSM) to provide hardware monitoring of any tampering with security features. A Secure Demand Paging (SDP) circuit 1040 of FIGS. 1 and 2 is provided and described further herein.
Digital circuitry 1150 on integrated circuit 1100 supports and provides wireless interfaces for any one or more of GSM, GPRS, EDGE, UMTS, and OFDMA/MIMO (Global System for Mobile communications, General Packet Radio Service, Enhanced Data Rates for Global Evolution, Universal Mobile Telecommunications System, Orthogonal Frequency Division Multiple Access and Multiple Input Multiple Output Antennas) wireless, with or without high speed digital data service, via an analog baseband chip 1200 and GSM/CDMA transmit/receive chip 1300. Digital circuitry 1150 includes a ciphering processor CRYPT for GSM ciphering and/or other encryption/decryption purposes. Blocks TPU (Time Processing Unit real-time sequencer), TSP (Time Serial Port), GEA (GPRS Encryption Algorithm block for ciphering at the LLC logical link layer), RIF (Radio Interface), and SPI (Serial Port Interface) are included in digital circuitry 1150.
Digital circuitry 1160 provides codec for CDMA (Code Division Multiple Access), CDMA2000, and/or WCDMA (wideband CDMA or UMTS) wireless, suitably with HSDPA/HSUPA (High Speed Downlink Packet Access, High Speed Uplink Packet Access) (or 1xEV-DV, 1xEV-DO or 3xEV-DV) data feature via the analog baseband chip 1200 and RF GSM/CDMA chip 1300. Digital circuitry 1160 includes blocks MRC (maximal ratio combiner for multipath symbol combining), ENC (encryption/decryption), RX (downlink receive channel decoding, de-interleaving, Viterbi decoding and turbo decoding) and TX (uplink transmit convolutional encoding, turbo encoding, interleaving and channelizing). Block ENC has blocks for uplink and downlink supporting confidentiality processes of WCDMA.
Audio/voice block 1170 supports audio and voice functions and interfacing. Speech/voice codec(s) are suitably provided in memory space in audio/voice block 1170 for processing by processor(s) 1110. An applications interface block 1180 couples the digital baseband chip 1100 to an applications processor 1400. Also, a serial interface in block 1180 interfaces from parallel digital busses on chip 1100 to the USB (Universal Serial Bus) of PC (personal computer) 1070. The serial interface includes UARTs (universal asynchronous receiver/transmitter circuits) for performing the conversion of data between parallel and serial lines. Chip 1100 is coupled to location-determining circuitry 1190 for GPS (Global Positioning System). Chip 1100 is also coupled to a USIM (UMTS Subscriber Identity Module) 1195 or other SIM for user insertion of an identifying plastic card, or other storage element, or for sensing biometric information to identify the user and activate features.
In FIG. 2, a mixed-signal integrated circuit 1200 includes an analog baseband (ABB) block 1210 for GSM/GPRS/EDGE/UMTS/HSDPA/HSUPA which includes an SPI (Serial Port Interface), a digital-to-analog/analog-to-digital conversion DAC/ADC block, and RF (radio frequency) Control pertaining to GSM/GPRS/EDGE/UMTS/HSDPA/HSUPA and coupled to RF (GSM etc.) chip 1300. Block 1210 suitably provides an analogous ABB for CDMA wireless and any associated 1xEV-DV, 1xEV-DO or 3xEV-DV data and/or voice with its respective SPI (Serial Port Interface), digital-to-analog conversion DAC/ADC block, and RF Control pertaining to CDMA and coupled to RF (CDMA) chip 1300.
An audio block 1220 has audio I/O (input/output) circuits to a speaker 1222, a microphone 1224, and headphones (not shown). Audio block 1220 has an analog-to-digital converter (ADC) coupled to the voice codec and a stereo DAC (digital-to-analog converter) for a signal path to the baseband block 1210 including audio/voice block 1170, and with suitable encryption/decryption activated.
A control interface 1230 has a primary host interface (I/F) and a secondary host interface to DBB-related integrated circuit 1100 of FIG. 2 for the respective GSM and CDMA paths. The integrated circuit 1200 is also interfaced to an I2C port of applications processor chip 1400 of FIG. 2. Control interface 1230 is also coupled via access arbitration circuitry to the interfaces in circuits 1250 and the baseband 1210.
A power conversion block 1240 includes buck voltage conversion circuitry for DC-to-DC conversion, and low-dropout (LDO) voltage regulators for power management/sleep mode of respective parts of the chip regulated by the LDOs. Power conversion block 1240 provides information to and is responsive to a power control state machine between the power conversion block 1240 and circuits 1250.
Circuits 1250 provide oscillator circuitry for clocking chip 1200. The oscillators have frequencies determined by one or more crystals. Circuits 1250 include a RTC real time clock (time/date functions), general purpose I/O, a vibrator drive (supplement to cell phone ringing features), and a USB On-The-Go (OTG) transceiver. A touch screen interface 1260 is coupled to a touch screen XY 1266 off-chip.
Batteries such as a lithium-ion battery 1280 and a backup battery provide power to the system and battery data to circuit 1250 on suitably provided separate lines from the battery pack. When needed, the battery 1280 also receives charging current from a Battery Charge Controller in analog circuit 1250 which includes MADC (Monitoring ADC and analog input multiplexer, such as for on-chip charging voltage and current, and battery voltage lines, and off-chip battery voltage, current, temperature) under control of the power control state machine.
In FIG. 2, an RF integrated circuit 1300 includes a GSM/GPRS/EDGE/UMTS/CDMA RF transmitter block 1310 supported by oscillator circuitry with off-chip crystal (not shown). Transmitter block 1310 is fed by baseband block 1210 of chip 1200. Transmitter block 1310 drives a dual band RF power amplifier (PA) 1330. On-chip voltage regulators maintain appropriate voltage under conditions of varying power usage. Off-chip switchplexer 1350 couples wireless antenna and switch circuitry to both the transmit portion 1310, 1330 and the receive portion next described. Switchplexer 1350 is coupled via band-pass filters 1360 to receiving LNAs (low noise amplifiers) for 850/900 MHz, 1800 MHz, 1900 MHz and other frequency bands as appropriate.
Depending on the band in use, the output of the LNAs couples to GSM/GPRS/EDGE/UMTS/CDMA demodulator 1370 to produce the I/Q or other outputs thereof (in-phase, quadrature) to the GSM/GPRS/EDGE/UMTS/CDMA baseband block 1210.
Further in FIG. 2, an integrated circuit chip or core 1400 is provided for applications processing and more off-chip peripherals. Chip (or core) 1400 has an interface circuit 1410 including a high-speed WLAN 802.11a/b/g interface coupled to a WLAN chip 1500. Further provided on chip 1400 is an applications processing section 1420 which includes a RISC processor (such as a MIPS core, ARM processor, or other suitable processor), a digital signal processor (DSP) such as one from the TMS320C55x(TM) DSP generation from Texas Instruments Incorporated or other digital signal processor, a shared memory controller MEM CTRL with DMA (direct memory access), and a 2D (two-dimensional display) graphic accelerator. Speech/voice codec functionality is suitably processed in chip 1400, in chip 1100, or in both chips 1400 and 1100.
The RISC processor and the DSP in section 1420 have access via an on-chip extended memory interface (EMIF/CF) to off-chip memory resources 1435 including, as appropriate, mobile DDR (double data rate) DRAM, and flash memory of any of NAND Flash, NOR Flash, and Compact Flash. On chip 1400, the shared memory controller in circuitry 1420 interfaces the RISC processor and the DSP via an on-chip bus to on-chip memory 1440 with RAM and ROM. A 2D graphic accelerator is coupled to frame buffer internal SRAM (static random access memory) in block 1440.
A security block 1450 in security logic 1038 of FIG. 1 includes secure hardware accelerators having security features, provided for secure demand paging 1040 as further described herein and for accelerating encryption and decryption. A random number generator RNG is provided in security block 1450. Among the hash approaches are SHA-1 (Secure Hash Algorithm), MD2 and MD5 (Message Digest version #). Among the symmetric approaches are DES (Data Encryption Standard), 3DES (Triple DES), RC4 (Rivest Cipher), ARC4 (related to RC4), TKIP (Temporal Key Integrity Protocol, uses RC4), and AES (Advanced Encryption Standard). Among the asymmetric approaches are RSA, DSA, DH, NTRU, and ECC (elliptic curve cryptography). The security features contemplated include any of the foregoing hardware and processes and/or any other known or yet to be devised security and/or hardware and encryption/decryption processes implemented in hardware or software.
Security logic 1038 of FIG. 1 and FIG. 2 (1038, 1450) includes hardware-based protection circuitry, also called security monitoring logic or a secure state machine 2060 of FIG. 3. Security logic 1038 is coupled to and monitors busses and other parts of the chip for security violations and protects and isolates the protected areas. Security logic 1038 makes secure ROM space inaccessible, makes secure RAM and register space inaccessible, and establishes any other appropriate protections to additionally foster security. In one embodiment, a software jump from Flash memory to secure ROM, for instance, causes a security violation wherein, for example, the security logic 1038 produces an automatic immediate reset of the chip. In another embodiment, such a jump causes the security monitoring logic to produce an error message and a re-vectoring of the jump away from secure ROM. Other security violations would include attempted access to secure register or RAM space.
On-chip peripherals and additional interfaces 1410 include a UART data interface and MCSI (Multi-Channel Serial Interface) voice wireless interface for an off-chip IEEE 802.15 ("Bluetooth" and high and low rate piconet and personal network communications) wireless circuit 1430. Debug messaging and serial interfacing are also available through the UART. A JTAG emulation interface couples to an off-chip emulator Debugger for test and debug. Further in peripherals 1410 are an I2C interface to analog baseband ABB chip 1200 and an interface to applications interface 1180 of integrated circuit chip 1100 having digital baseband DBB.
Interface 1410 includes a MCSI voice interface, a UART interface for controls, and a multi-channel buffered serial port (McBSP) for data. Timers, an interrupt controller, and RTC (real time clock) circuitry are provided in chip 1400. Further in peripherals 1410 are a Micro Wire (u-wire 4 channel serial port) and multi-channel buffered serial port (McBSP) to an audio codec, a touch-screen controller, and an audio amplifier 1480 to stereo speakers. External audio content and touch screen (in/out) and LCD (liquid crystal display) are suitably provided. Additionally, an on-chip USB OTG interface couples to off-chip Host and Client devices. These USB communications are suitably directed outside handset 1010, such as to PC 1070 (personal computer) and/or from PC 1070 to update the handset 1010.
An on-chip UART/IrDA (infrared data) interface in interfaces 1410 couples to an off-chip GPS (global positioning system block cooperating with or instead of GPS 1190) and a Fast IrDA infrared wireless communications device. An interface provides EMT9 and Camera interfacing to one or more off-chip still cameras or video cameras 1490, and/or to a CMOS sensor of radiant energy. Such cameras and other apparatus all have additional processing performed with greater speed and efficiency in the cameras and apparatus and in mobile devices coupled to them with improvements as described herein. Further in FIG. 2, an on-chip LCD controller and associated PWL (Pulse-Width Light) block in interfaces 1410 are coupled to a color LCD display and its LCD light controller off-chip.
Further, on-chip interfaces 1410 are respectively provided for off-chip keypad and GPIO (general purpose input/output). On-chip LPG (LED Pulse Generator) and PWT (Pulse-Width Tone) interfaces are respectively provided for off-chip LED and buzzer peripherals. On-chip MMC/SD multimedia and flash interfaces are provided for off-chip MMC Flash card, SD flash card and SDIO peripherals.
In FIG. 2, a WLAN integrated circuit 1500 includes MAC (media access controller) 1510, PHY (physical layer) 1520 and AFE (analog front end) 1530 for use in various WLAN and UMA (Unlicensed Mobile Access) modem applications. PHY 1520 includes blocks for Barker coding, CCK, and OFDM. PHY 1520 receives PHY Clocks from a clock generation block supplied with a suitable off-chip host clock, such as at 13, 16.8, 19.2, 26, or 38.4 MHz. These clocks are compatible with cell phone systems, and the host application is suitably a cell phone or any other end-application. AFE 1530 is coupled by receive (Rx), transmit (Tx) and CONTROL lines to WLAN RF circuitry 1540. WLAN RF 1540 includes a 2.4 GHz (and/or 5 GHz) direct conversion transceiver, or otherwise, and a power amplifier, and has a low noise amplifier LNA in the receive path. Bandpass filtering couples WLAN RF 1540 to a WLAN antenna.
In MAC 1510, Security circuitry supports any one or more of various encryption/decryption processes such as WEP (Wired Equivalent Privacy), RC4, TKIP, CKIP, WPA, AES (advanced encryption standard), 802.11i and others. Further in WLAN 1500, a processor comprised of an embedded CPU (central processing unit) is connected to internal RAM and ROM and coupled to provide QoS (Quality of Service) IEEE 802.11e operations WME, WSM, and PCF (packet control function). A security block in WLAN 1500 has busing for data in, data out, and controls interconnected with the CPU. Interface hardware and internal RAM in WLAN 1500 couple the CPU with interface 1410 of applications processor integrated circuit 1400, thereby providing an additional wireless interface for the system of FIG. 2. Still other additional wireless interfaces, such as for wideband wireless IEEE 802.16 "WiMAX" mesh networking and other standards, are suitably provided and coupled to the applications processor integrated circuit 1400 and other processors in the system.
Described next are improved secure circuits, structures and processes that improve the systems and devices of FIGS. 1 and 2.
FIG. 3 illustrates an advantageous form of software modes and architecture 2000 for the integrated circuits 1100 and 1400. Encrypted secure storage 2010 and a file system 2020 provide storage for this arrangement. Selected contents or all contents of encrypted secure storage 2010 are further stored in a secure storage area 2025. Next, a secure mode area of the architecture is described. In a ROM area of the architecture 2000, secure ROM code 2040, together with secure data such as cryptographic key data, is manufactured into an integrated circuit such as 1100 or 1400 including processor circuitry. Also a secure RAM 2045 is provided. Secret data such as key data is copied or provided into secure RAM 2045 as a result of processing of the Secure ROM Code 2040. Further in the secure mode area are modules suitably provided for RNG (Random Number Generator), SHA-1/MD5 hashing software and processes, DES/3DES (Data Encryption Standard single and triple-DES) software and processes, AES (Advanced Encryption Standard) software and processes, and PKA (Public Key Acceleration) software and processes.
Further in FIG. 3, secure demand paging SDP 1040 hardware and/or software effectively increases Secure RAM 2045 by demand paging from secure storage 2010.
A hardware-implemented secure state machine (SSM) 2060 monitors the buses, registers, circuitry and operations of the secure mode area of the architecture 2000. In this way, addresses, bits, circuitry inputs and outputs, and operations and sequences of operations that violate predetermined criteria of secure operation of the secure mode area are detected. SSM 2060 then provides any or all of warning, denial of access to a space, forcing of reset, and other protective measures. Use of independent on-chip hardware for SSM 2060 advantageously isolates its operations from software-based attacks. SSM 2060 is addressable and configurable to enable a Hashing module, enable an Encryption/Decryption module, and lock Flash and DRAM spaces. SSM 2060 monitors busses and other hardware blocks, pin boundary and other parts of the chip for security violations and protects and isolates the protected areas. SSM 2060 makes secure ROM and register space inaccessible, and secure RAM space inaccessible, and establishes any other appropriate protections to additionally foster security.
In one embodiment, a software jump from flash to secure ROM, for instance, causes a security violation wherein, for example, SSM 2060 produces an automatic immediate reset of the chip. In another embodiment, such a jump causes the security monitoring logic to produce an error message and a re-vectoring of the jump away from secure ROM. Other security violations would include attempted access to reconfigure the SSM 2060 or attempted access to secure RAM space.
In FIG. 3, a kernel mode part of the software architecture includes one or more secure environment device drivers 2070, suitably provided in kernel mode.
Further in FIG. 3, a user application 2080 communicates to and through a secure environment API (application programming interface) software module 2085 to the secure environment device driver 2070. Both the user app 2080 and API 2085 are in a user mode part of the software architecture.
A protected application 2090 provides an interface, as security may permit, to information in file system 2020, secure storage 2025, and a trusted library 2095 such as an authenticated library of software for the system.
Turning to FIG. 4, a Secure Demand Paging (SDP) 1040 secure hardware and software mechanism desirably has efficient page wiping for replacement, in internal Secure RAM 1034, of physical pages not currently or often used by the software application, such as protected application 2090. Such pages include pages that may or may not need to be written back to external or other memory.
An SDP 1040 hardware and software process efficiently governs the finding of appropriate pages to wipe, and various embodiments confer different mixes of low complexity, low memory space and chip real-estate space occupancy, and low time consumption, low power consumption and low processing burden. The quality of the choice of the page to wipe out for replacement is advantageously high. "Wipe" includes various alternatives: overwriting, erasing, simply changing the state of a page-bit that tags or earmarks a page, and other methods to free or make available a page space or slot for a new page.
A hardware-based embodiment efficiently identifies the appropriate page to wipe and applies further efficient SDP swap and other structures and operations. In this embodiment, a hardware mechanism monitors the different internal RAM pages used by the SDP software mechanism. The hardware mechanism also detects and flags, via registers accessible by software, which page is Dirty (modified) or Clean (unmodified). (A Write access to a page makes it become Dirty.)
This embodiment also computes, according to the ordered Read and Write accesses that occurred on the different pages, statistical information about the internal RAM page Usage Level. Usage Level is divided into Very Low usage, Low usage, Medium usage, and High usage, for instance.
SDP 1040 then computes from all the information, according to an embedded sorting process, which pages are the more suitable pages to be wiped. SDP 1040 variously considers, for example, the impact of each page on the current application and the time required for a page to be wiped out. Wiping a low usage page impacts the application slightly, but a higher usage page is needed by the application more. A Dirty page consumes writeback time to external memory, and a Clean page does not need to be written back.
SDP 1040, in one example, prioritizes the pages that are more suitable to be wiped out for less time consumption and application impact in the following priority order:
CODE page tagged as VERY LOW usage
CODE page tagged as LOW usage
DATA READ page tagged as VERY LOW usage
DATA READ page tagged as LOW usage
DATA WRITE page tagged as VERY LOW usage
DATA WRITE page tagged as LOW usage
CODE page tagged as MEDIUM usage
DATA READ page tagged as MEDIUM usage
DATA WRITE page tagged as MEDIUM usage
Then the process logs the results to prognostic registers such as page counters described hereinbelow. Subsequently, the SDP software mechanism just reads the prognostic registers to find the best pages to wipe.
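The counter behavior and priority order just described can be sketched in C. The following is a minimal illustrative sketch only, not the actual register layout or sorting hardware of the embodiment: the names (sdp_page_t, sdp_note_access, sdp_wipe_rank), the counter width, and the usage-level thresholds are assumptions introduced here for clarity.

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_PAGES    6      /* e.g., six Secure RAM page slots as in FIG. 4 */
    #define COUNTER_INIT 0xFF   /* assumed initial/reset value of a page access counter */

    typedef enum { USAGE_VERY_LOW, USAGE_LOW, USAGE_MEDIUM, USAGE_HIGH } usage_t;
    typedef enum { PAGE_CODE, PAGE_DATA_READ, PAGE_DATA_WRITE } page_type_t;

    typedef struct {
        uint8_t     counter;    /* departs from COUNTER_INIT as other pages are accessed */
        bool        dirty;      /* set by any Write access to the page */
        page_type_t type;
    } sdp_page_t;

    static sdp_page_t pages[NUM_PAGES];

    /* Called on every monitored Read or Write access to page slot n. */
    void sdp_note_access(unsigned n, bool is_write)
    {
        for (unsigned i = 0; i < NUM_PAGES; i++) {
            if (i == n)
                pages[i].counter = COUNTER_INIT;   /* reset toward the initial value */
            else if (pages[i].counter > 0)
                pages[i].counter--;                /* progressive departure from it */
        }
        if (is_write)
            pages[n].dirty = true;                 /* Dirty flag per the registers above */
    }

    /* Usage level encoder: assumed thresholds mapping the counter to four levels. */
    usage_t sdp_usage_level(unsigned n)
    {
        uint8_t c = pages[n].counter;
        if (c >= 0xC0) return USAGE_HIGH;
        if (c >= 0x80) return USAGE_MEDIUM;
        if (c >= 0x40) return USAGE_LOW;
        return USAGE_VERY_LOW;
    }

    /* Rank table condensing the nine-entry priority order listed above;
     * lower rank = more suitable to wipe. HIGH-usage pages are never advised. */
    int sdp_wipe_rank(unsigned n)
    {
        static const int rank[4][3] = {
            /*             CODE  DATA_READ  DATA_WRITE */
            /* VERY_LOW */ {  1,     3,         5 },
            /* LOW      */ {  2,     4,         6 },
            /* MEDIUM   */ {  7,     8,         9 },
            /* HIGH     */ { 99,    99,        99 },
        };
        return rank[sdp_usage_level(n)][pages[n].type];
    }

A page selection loop would then scan sdp_wipe_rank over all slots and pick the minimum, which corresponds to the SDP software mechanism simply reading the prognostic registers described above.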
In the case of a strong security embodiment, the SDP 1040 hardware and/or software just described is configured and accessed by the main processing unit in Secure Mode, or in highly privileged modes, without impact on the main processing unit functionality. Restrictions on Secure Mode and privilege are removed in whole or in part for less secure embodiments. Some embodiments make demand paging itself more efficient without an SSM 2060. Other embodiments provide security features that, together with the improved demand paging, provide a Secure Demand Pager or SDP. Some embodiments very significantly improve the page selection mechanism with regard to the competing demands of time and power consumption and the quality of the choice of the page to wipe out for replacement.
Some embodiments generate automatically, and with little or no time overhead, the dirty page status and the best page to wipe.
Hardware-based embodiments are often more resistant to tampering by software running in other processor modes besides Secure or Privileged Modes. That is, such embodiments are less sensitive to a Denial of Service (DoS) attack on an internal mechanism which might force a software application not to run properly. Some embodiments having Dirty page status generating circuits further detect whether Code pages used in internal RAM have been modified by an attacker. This capability contributes to the security robustness of SDP paging methods.
Any demand paging system, whether secure or not, can be improved according to the teachings herein, with benefits depending on the relative system Swap Out and Swap In times, on the access time mix of the various types of external storage devices from which even the Swap In times to on-chip RAM vary, and on other factors. The improvements taught herein are of benefit in a Secure Demand Paging system with Swaps between on-chip RAM and off-chip DRAM, for instance, because Swap Out is used for modified pages and not for unmodified pages, and in some systems the Swap Out time that encryption and/or hashing adds, relative to the Swap In time, is greater than the Swap Out time would be in a less-secure system lacking the encryption and/or hashing.
Various embodiments are implemented in any integrated circuit manufacturing process such as different types of CMOS (complementary metal oxide semiconductor), SOI (silicon on insulator), SiGe (silicon germanium), and with various types of transistors such as single-gate and multiple-gate (MUGFET) field effect transistors, and with single-electron transistors and other structures. Embodiments are easily adapted to any targeted computing hardware platform supporting or not supporting a secure execution mode, such as UNIX workstations and PC-desktop platforms.
FIGS. 4 and 8 depict external storage SDRAM 1024 and a secure Swapper of 4K pages being Swapped In and Swapped Out of the secure environment. A process of the structure and flow diagram of FIG. 4 suitably executes inside the secure environment as an integral part of the SDP manager code. Note that many pages illustrated in the SDP 1040 are held or stored in the external SDRAM 1024 and greatly increase the effective size of on-chip secure memory 1034.
The SDP 1040 has a pool of pages that are physically loaded with data and instructions taken from a storage memory that is suitably encrypted (or not) external to the secure mode. SDP 1040 creates virtual memory in secure mode and thus confers the advantages of execution of software that far exceeds (e.g., up to 4 Megabytes or more in one example) the storage space in on-chip Secure RAM.
In FIG. 4, Secure RAM 1034 stores a pool of 4K pages, shown as a circular data structure in the illustration. The pool of pages in Secure RAM 1034 is updated by the SDP according to Memory Management Unit (MMU) page faults resulting from execution of secure software currently running on the system.
In FIG. 4, a processor such as a RISC processor has a Memory Management Unit MMU with Data Abort and Prefetch Abort outputs. The processor runs SDP Manager code designated Secure Demand Paging Code in FIG. 4. The SDP Manager is suitably fixed in a secure storage of the processor and need not be swapped out to an insecure area. See coassigned, co-filed application U.S. non-provisional patent application TI-38213 "Methods, Apparatus, and Systems for Secure Demand Paging and Other Paging Operations for Processor Devices" U.S. Ser. No. I , which is incorporated herein by reference.
At left, Protected Applications (PAs) occupy a Secure Virtual Address Space 2110 having Virtual Page Slots of illustratively 4K each. In this way, a Secure Virtual Memory (SVM) is established. Secure Virtual Address Space 2110 has Code pages I, J, K; Data pages E, F, G; and a Stack C. The Secure Virtual Address Space as illustrated has a Code page K and a Data page G which are respectively mapped to physical page numbers 6 and 2 in MMU Mapping Tables 2120, also designated PA2VA (physical address to virtual address). In some embodiments, the PA has its code secured by PKA (public key acceleration).
Some embodiments have an MMU Mapping Table 2120 in block MMU of FIG. 4 that has Page Table Entries (PTEs) of 32 bits each, for instance. In operation, the PA (Protected Application) and the MMU Mapping Table 2120 are maintained secure on-chip. In other embodiments, a Physical-Address-to-Virtual-Address table PA2VA 2120 provided for SDP 1040 has PTEs pertaining specifically to pages stored in Secure RAM, as illustrated in FIG. 4.
One of the bits in a PTE is a Valid/Invalid bit (also called an Active bit ACT[N] herein), illustrated with zero or one for Invalid (I) or Valid (V) entries respectively. An Invalid (I) bit state in ACT[N] or in the MMU Mapping Table for a given page causes an MMU page fault or interrupt when a virtual address is accessed corresponding to a physical address in that page which is absent from Secure RAM.
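As an illustration of the Valid/Invalid (ACT) bit behavior just described, the sketch below shows a 4K-page translation that raises a page fault when the Active bit is clear. The PTE field positions and the fault hook are assumptions for illustration, not the actual 32-bit PTE format of the embodiment.

    #include <stdint.h>

    #define PTE_ACT 0x1u   /* assumed position of the ACT (Valid/Invalid) bit */

    extern void sdp_page_fault(uint32_t vaddr);  /* hypothetical SDP manager fault entry */

    /* Translate a virtual address through one 32-bit PTE; 4K page granularity. */
    uint32_t mmu_translate(uint32_t pte, uint32_t vaddr)
    {
        if (!(pte & PTE_ACT)) {
            /* Invalid entry: the page is absent from Secure RAM, so an MMU
             * page fault is raised and the SDP manager swaps the page in. */
            sdp_page_fault(vaddr);
        }
        return (pte & 0xFFFFF000u) | (vaddr & 0x00000FFFu);
    }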
Further in FIG. 4, a hardware arrangement is located in, associated with, or under control of a RISC processor. The RISC processor has an MMU (memory management unit) that has data abort and/or prefetch abort operations. The hardware supports the secure VAS (virtual address space) and includes a Secure static RAM.
The Secure RAM is illustrated as a circular data structure, or revolving scavengeable store, with physical pages 1, 2, 3, 4, 5, 6. Stack C is swapped into physical page 5 of Secure SRAM, corresponding with the previously-mentioned Page Table Entry 5 for Stack C in the MMU Mapping Tables. Similarly, Code K is swapped into physical page 6 of Secure SRAM, corresponding with the previously-mentioned Page Table Entry 6 for Code K in the MMU Mapping Tables.
Associated with the Secure RAM is a Secure Swapper 2160. Secure Swapper 2160 is illustrated in FIGS. 5-8 and has secure Direct Memory Access (DMA) that feeds AES (encryption) and SHA (hashing) hardware accelerators. The secure swapping process and hardware protect the PA information at all times.
In FIG. 4, coupled to the Secure Swapper DMA is a non-secure DRAM 1024 holding encrypted and authenticated pages provided by SDP secure swapper 2160. The DRAM pages are labeled pages A, B, C (mapped to physical page 5), D, E, F, G (mapped to physical page 2), H, I, J, K (mapped to physical page 6), and L.
SDP hardware provides secure page swapping, and the virtual address mapping process is securely provided under Secure Mode. Code and Data for SDP Manager software are situated in Secure RAM in a fixed PPA (primary protected application) memory address space from which swapping is not performed. Execution of code sequences 2150 of the SDP Code controls Secure Swapper 2160. For example, a High Level Operating System (HLOS) calls code to operate Public Key Acceleration (PKA) or a secure applet. The PKA is a secure-state application (PA) that is swapped into Secure RAM as several pages of PKA Code, Data and Stack.
In FIG. 4, a number N-1 of Valid Bits exist in the page entries of the MMU Mapping Tables 2120 at any one time because of the number N (e.g., six in the illustration) of available Secure RAM 1034 pages. In some embodiments, one spare page is suitably kept or maintained for performance reasons. Page Data is copied, swapped, or ciphered securely to and from the DRAM 1024 to allow the most efficient utilization of expensive Secure RAM space. Secure RAM pages are positioned exactly at the virtual address positions where they are needed, dynamically and transparently in the background to PAs.
In FIG. 4, SDP software coherency with the hardware is maintained by the MMU so that part of the software application is virtually mapped in a Secure OS (Operating System) virtual machine context VMC according to Virtual Mapping 2110. In this example, the VMC is designated by entries "2" in a column of PA2VA. If a context switch is performed, then the VMC entries in PA2VA are changed to a new VMC identification number. The part of the software application is that part physically located in the Secure RAM and has a Physical Mapping 2120 according to a correspondence of Virtual Pages of Virtual Mapping 2110 to respective physical pages of the Physical Mapping 2120.
The information representing this correspondence of Virtual Mapping to Physical Mapping is generated by the MMU and stored in internal buffers of the MMU. The virtual space is configured by the MMU, and the DRAM 1024 is physically addressed.
Some embodiments use a single translation vector or mapping PA2VA from the virtual address space to the physical address space according to a specific mapping function, such as by addition (+) by itself or concatenated with more significant bits (MSB), given as

virtual_address_space = phy_address_space + x + y,

where x is an MSB offset in an example 4-GByte memory range [0 : 4GB], and y is an LSB offset between the virtual address and the physical address in Secure RAM.
In FIG. 4, the scavenging process puts a new page in a location in physical Secure RAM 1034 space depending on where a previous page is swapped out. Accordingly, in Secure RAM space, the additional translation table PA2VA 2120 is provided to supply an LSB address offset value to map between the virtual address and the physical address in Secure RAM. MSB offsets x are stored in a VMC_MMU_TABLE in Secure RAM.
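The mapping function just given reduces to a one-line helper. In this sketch the table names mirror the PA2VA 2120 and VMC_MMU_TABLE structures described above, but the word widths, the per-slot indexing, and the function name are assumptions made for illustration.

    #include <stdint.h>

    extern uint32_t vmc_mmu_table_x;        /* MSB offset x for the active VMC */
    extern uint32_t pa2va_lsb_offset_y[6];  /* LSB offset y per Secure RAM page slot */

    /* virtual_address = physical_address + x + y, per the mapping function above. */
    uint32_t sdp_phys_to_virt(uint32_t phys_addr, unsigned page_slot)
    {
        return phys_addr + vmc_mmu_table_x + pa2va_lsb_offset_y[page_slot];
    }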
In some mixed-memory embodiments, DRAM 1024 has sufficiently shorter access time or lower power usage than Flash memory to justify loading and using DRAM 1024 with pages that originally reside in Flash memory. In other embodiments, SDP swaps in the PA from Flash memory for read-only pages like PA code pages, and the PA is not copied to DRAM. In still other embodiments, parts of the PA are in Flash memory and other parts of the PA are copied into DRAM 1024 and accessed from DRAM 1024. Accordingly, a number of embodiments accommodate various tradeoffs that depend on, among other things, the relative economics and technology features of various types of storage.
In establishing Mappings 2110 and 2120 and the correspondence therebetween, the following coherency matters are handled by SDP.
When loading a new page into Page Slot N in Secure RAM as described in FIG. 5, the previous Virtual to Physical mapping is no longer coherent. The new page corresponds to another part of the source application. The Virtual Mapping 2110 regarding the Swapped Out previous page N is obsolete regarding Page N. Entries in the MMU internal buffers representing the previous Virtual to Physical Mapping correspondence are now invalidated. An access to that Swapped Out page generates a Page Fault signal. Also, entries in an instruction cache hierarchy at all levels (e.g., L1 and L2) and in a data cache hierarchy at all levels are invalidated to the extent they pertain to the previous Virtual to Physical Mapping correspondence. Accordingly, a Swapped Out code page is handled for coherency purposes by an instruction cache range invalidation relative to the address range of the Code page. A Data page is analogously handled by a data cache range invalidation operation relative to the address range of the Data page. Additionally, for loading Code pages, a BTAC (Branch Target Address Cache or Branch Target Buffer BTB) flush is executed at least in respect of the address tags in the page range of a wiped Code page, in order to avoid taking a predicted branch to an invalidated address.
When wiping out a page from Secure RAM, some embodiments assume Code pages that are always read-only. Various of these embodiments distinguish between Data (Read/Write) pages and Code (Read Only) pages. If the page to wipe out is a Data page, then to maintain coherency, two precautions are executed. First, the Data cache range is made clean (dirty bit reset) in the range of addresses of the Data Page. Second, the Write Buffer is drained so that any data retained in the data caches (L1/L2) are written and posted writes are completed. If the page to wipe out is a Code page, the wiping process does not need to execute the just-named precautions because read-only Code pages were assumed in this example. If Code pages are not read-only, then the precautions suitably are followed.
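The coherency precautions above reduce to a short maintenance routine. The cache and branch-predictor primitives in this sketch are hypothetical stand-ins for the target processor's actual cache-maintenance operations, which are processor-specific.

    #include <stdint.h>
    #include <stdbool.h>

    #define PAGE_SIZE 4096u

    /* Hypothetical cache-maintenance primitives (processor-specific in practice). */
    extern void icache_invalidate_range(uint32_t start, uint32_t end);
    extern void dcache_invalidate_range(uint32_t start, uint32_t end);
    extern void dcache_clean_range(uint32_t start, uint32_t end);
    extern void btac_flush_range(uint32_t start, uint32_t end);
    extern void drain_write_buffer(void);

    /* Before wiping a page: Data pages are cleaned and posted writes drained;
     * read-only Code pages need no writeback precautions. */
    void sdp_wipe_precautions(uint32_t base, bool is_data_page)
    {
        if (is_data_page) {
            dcache_clean_range(base, base + PAGE_SIZE);  /* first precaution */
            drain_write_buffer();                        /* second precaution */
        }
    }

    /* After loading a new page into a slot whose old mapping was invalidated. */
    void sdp_load_invalidate(uint32_t base, bool is_code_page)
    {
        if (is_code_page) {
            icache_invalidate_range(base, base + PAGE_SIZE);
            btac_flush_range(base, base + PAGE_SIZE);    /* avoid stale predicted branches */
        } else {
            dcache_invalidate_range(base, base + PAGE_SIZE);
        }
    }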
If the page to wipe out is a Code page, the wiping process does not need to execute the just-named precautions because read-only Code pages were assumed in this example. If Code pages are not read-only, then the precautions suitably are followed.
The SDP paging process desirably executes as fast as possible when wiping pages. Intelligent page choice reduces or minimizes the frequency of unnecessary page wipes or Swaps, since an intelligent page choice procedure as disclosed herein leaves pages in Secure RAM that are likely to be soon used again. Put another way, if a page were wiped from Secure RAM that software is soon going to use again, then SDP would consume valuable time and power to import the same page again.
An additional consideration in the SDP paging process is that the time consumption for wiping pages varies with the Type of page. For example, suppose a Code Page is not required to be written back to the external memory because the Code Page is read-only and thus has not been modified. Also, a Data Page that has not been modified does not need to be written back to the external memory. By contrast, a Data Page that has been modified is encrypted and hashed and written back to the external memory as described in connection with FIG. 6.
FIG. 5 depicts SDP hardware and an SDP process 2200 when importing a new page from SDRAM 1024. Consider an encrypted application in the SDRAM 1024. The description here equally applies to Code pages and Data pages. A step 2210 operates so that when a new page is needed by a processor and that page is missing from Secure RAM 1034, then that page is read from an application source location in the SDRAM 1024. Next a step 2220 performs a Secure DMA (Direct Memory Access) operation to take the new page and transfer the new page to a decryption block 2230. In a step and structure 2240, the decryption block 2230 executes decryption of the page by AES (Advanced Encryption Standard) or 3DES (Triple Data Encryption Standard) or other suitable decryption process. As the AES/3DES accelerator 2230 is decrypting the content, the output of the AES/3DES accelerator 2230 is taken by another Secure DMA operation in a step 2250. Then, in FIG. 5, Secure DMA overwrites a wiped Secure RAM page with the new page, e.g., at page position Page4 in the Secure RAM 1034. Further, Secure DMA in a step 2260 takes the new page from Secure RAM 1034 and transfers the new page in a step 2270 to a hashing accelerator 2280 in process embodiments that authenticate pages. The hashing accelerator 2280 calculates the hash of the new page by SHA-1 hashing or other suitable hashing process to authenticate the page. A comparison structure and step 2285 compares the page hash with a predetermined hash value. If the page hash fails to match the predetermined hash value, the page is wiped from Secure RAM in a step 2290, or alternatively is not written to Secure RAM 1034 in step 2250 until the hash authentication is successful. If the page hash matches the predetermined hash value for that page, the page remains in Secure RAM, or alternatively is written to Secure RAM by step 2250, and the page is regarded as successfully authenticated. A suitable authentication process is used with a degree of sophistication commensurate with the importance of the application.
FIG. 6 depicts an SDP process 2300 of wiping out and Swapping Out a page. The SDRAM, Secure RAM, Secure DMA, encryption/decryption accelerator 2330, and hashing accelerator 2390 are the same as in FIG. 5, or are provided as additional structures analogous to those in FIG. 5.
The process steps are specific to the distinct SDP process of wiping out a page such as Page4. In a version of the wiping-out process 2300, a step 2310 operates Secure DMA to take a page to wipe and Swap Out, e.g., Page4 from Secure RAM 1034. A step 2320 transfers the page by Secure DMA to the AES/3DES encryption accelerator 2330. Then in a step 2340 the AES/3DES encryption accelerator encrypts the content of the page. Secure DMA takes the encrypted page from the AES/3DES encryption accelerator in a succeeding step 2350 and transfers and writes the page into the external SDRAM memory, overwriting the previous page therein. In the process, the wiped-out Page4 information may be destroyed in the internal Secure RAM 1034, such as by erasing or by replacement with a new page according to the process of FIG. 5. Alternatively, Page4 may be wiped out by setting a page-specific bit indicating that Page4 is wiped.
In FIG. 6 a further SDP process portion 2360 substitutes the following steps for step 2310. Secure DMA in a step 2370 takes the page from Secure RAM and transfers the page in a step 2385 to the hashing accelerator 2390 in process embodiments involving authenticated pages. The hashing accelerator 2390 calculates and determines the hash value of the page by SHA-1 hashing or other suitable hashing process. In this way, accelerator 2390 provides the hash value that constitutes the predetermined hash value for use by step 2285 of FIG. 5 in looking for a match (or not) to authenticate a page hash of a received Swapped In page. The page content of Page4 and the thus-calculated hash value are then obtained by Secure DMA in a step 2395, whereupon the process continues through previously-described steps 2320, 2330, 2340, 2350 to write the page and hash value to the external memory SDRAM 1024.
In FIG. 7, AES/xDES block encryption/decryption functional architecture includes a System DMA block 2410 coupling Secure RAM 2415 to encryption HWA 2420. A RISC processor 2425 operates Secure Software (S/W) in Secure Mode. On Swap Out, an encrypted data block is supplied to Memory 2430 such as a DRAM, Flash memory or GPIOs (General Purpose Input/Outputs). The decryption process on Swap In is the same as the one described in FIG. 7 but with memory 2430 as data block source and Secure RAM 2415 as data block destination.
Now consider the flow of an encrypted Swap Out process executed in FIG. 7. In a step 2450, RISC processor 2425 in Secure Mode configures the DMA channels defined by Internal registers of System DMA 2410 for data transfer to cryptographic block 2420. Upon completion of the configuration, RISC processor 2425 can go out of Secure Mode and execute normal tasks. Next, in a step 2460 Data blocks are automatically transferred from Secure RAM via System DMA 2410 and transferred in step 2470 to encryption block 2420 for execution of AES or xDES encryption of each data block. Then in a step 2480, Data blocks are computed by the chosen HWA (hardware accelerator) crypto-processor 2420 and transmitted as encrypted data to System DMA 2410. The process is completed in a step 2490 wherein encrypted Data blocks are transferred by DMA 2410 to memory 2430.
In FIG. 8, SHA1/MD5 Hashing architecture includes the System DMA block 2410 coupling Secure RAM 2415 to Hash HWA 2520. RISC processor 2425 operates Secure Software (S/W) in Secure Mode. System DMA 2410 has Internal Registers fed from the RISC processor.
Hash block 2520 has Result registers coupled to the RISC processor. An Interrupt Handler 2510 couples the Hash block 2520 interrupt request IRQ to the RISC processor 2425.
The flow of a Hash process executed in FIG. 8 is described next. In a step 2550, RISC processor 2425 in Secure Mode configures the DMA channels defined by Internal registers of System DMA 2410 for data transfer to Hash block 2520. Upon completion of the configuration, RISC processor 2425 can go out of Secure Mode and execute normal tasks. Next, in a step 2560 a Data block is automatically transferred from Secure RAM 2415 via System DMA 2410 and transmitted in step 2570 to Hash block 2520. A hash of the data block is generated by the chosen HWA crypto-processor 2520 by SHA-1 or MD5 or other suitable Hash. In a succeeding step 2580, HWA 2520 signals completion of the Hash by generating and supplying interrupt IRQ to Interrupt Handler 2510. Interrupt Handler 2510 suitably handles and supplies the hash interrupt in a step 2590 to RISC processor 2425. When the interrupt is received, if RISC processor 2425 is not in Secure Mode, then RISC processor 2425 re-enters Secure Mode. The process is completed in a step 2595 wherein RISC processor 2425 operating in Secure Mode gets Hash bytes from the Result registers of HWA 2520.
The description now turns to FIGS. 9, 10 and 11. FIG. 9 shows details of a processor 1030 and SDP 1040. The processor 1030 includes a RISC processor with functional ports. Secure RAM 1034 is coupled via interconnect 2705 to an on-chip Instruction INST Bus and an on-chip DATA Bus. A bus interface 2707 couples the functional ports of the RISC Processor 1030 to the INST and DATA buses. RISC Processor 1030 also has a RISC CPU (central processing unit) coupled to a Peripheral Port block which is coupled in turn via a bus interface 2709 to an on-chip bus 2745.
Further in FIG. 9, SDP circuitry 1040 is coupled to the INST bus, DATA bus, and on-chip bus 2745. SDP circuitry 1040 is under hardware protection of a Secure State Machine (SSM). SDP circuitry 1040 has a Dirty Bits Checker and Write Access Finder circuit 2710 to detect modifications to pages, a Usage Level Builder circuit 2720, a Page Wiping Advisor circuit 2730, and a secure register group 2740.
Register group 2740 has added secure registers for SDP. These registers ACT, TYPE, WR, STAT, and ADV have bit entries respective to each of the pages 0 to N and are accessible by Secure Supervisor software of SDP. Register group 2740 is coupled to on-chip bus 2745.
Dirty Bits Checker and Write Access Finder circuit 2710 is coupled to the DATA bus and coupled to register group 2740. Usage Level Builder circuit 2720 has a first block, an Instruction (I) and Read (RD) Access Finder, coupled to the INST bus and DATA bus. This circuit detects each instance of RD access to a Code page via the INST bus, or RD access to any page via the DATA bus.
Usage Level Builder 2720 has a second Usage Level Builder block coupled to receive information from Dirty Bits Checker and Write Access Finder 2710 and from the I and RD Access Finder block in circuit 2720. This second block receives page activation bits from the ACT register in register group 2740 and generates Usage Level data. Next, the Usage Level data is coupled to a Usage Level Encoder block in circuit 2720. Codes for tiers of Usage Level are fed to the STAT register and to the Priority Sorting block of Page Wiping Advisor 2730.
In Page Wiping Advisor 2730, the Priority Sorting block is coupled to receive page-specific Type data from the TYPE register.
Also, the Priority Sorting block is suitably coupled, depending on embodiment, to receive Usage Level information from the middle Usage Level Builder block in circuit 2720. Further, the Priority Sorting block is suitably coupled to feed back sorting information to that middle Usage Level Builder block.
Further in Page Wiping Advisor 2730, the Priority Sorting block feeds sorting information as described in FIGS. 10 and 11 to a Priority Result block. The Priority Result block determines which page(s) has highest priority for wiping and writes this information to the Advice register ADV in register group 2740. The wiping Advice information in Advice register ADV is accessed by RISC Processor 1030 via bus 2745, such as by interface 2709 and the Peripheral Port. Based on the information in register group 2740, RISC Processor 1030 executes SDP software to swap out a Dirty Page N identified by a one bit in the ADV[N] register and WR[N] register and swap in a new page, or simply swap in a new page if the wiped page N identified by register bits ADV[N]=1 and WR[N]=0 (zero bit) was a Clean (unmodified) page.
Four Process and structure areas are performed in one or more exemplary SDP paging processes and structures. First, the Code Pages are differentiated from the Data Pages by identifying and entering an entry in a page-specific field in the TYPE register respective to each such Code or Data page. Second, write access activity is monitored in a register WR[N] to determine each page most likely to be a READ page in a pool of Data Pages. Register WR[N], in other words, has bits indicating which pages are Clean (unmodified) or Dirty (modified). Third, the ACT register is loaded with page activate entries and statistical information is built up to populate the STAT register for the activated pages represented in the ACT register. Fourth, the foregoing three types of information are then utilized according to a four-STEP process described next to produce wiping Advice for one or more pages in an ADV register.
In FIG. 9, a process called the Wiping Advisor herein operates in one of various alternative ways, and two examples are described in FIGS. 10 and 11. "&" stands for concatenation and AND means Boolean-AND in the text that follows. Registers for use in FIGS. 9, 10 and 11 are as follows. A TYPE Page Type register has entries of zero (0) for each Data Page and one (1) for each Code page. The WR register has a respective dirty bit for signs of modification of each Secure RAM page. The ACT register has a respective entry to activate or de-activate the Usage Level of a page. The STAT register holds a respective entry representing one of four levels of activity (Usage Levels) for an activated page. The ADV register of FIG. 9 is the Wiping Advisor register 2880 of FIG. 10 and has a respective entry for each page wherein one (1) means the recommendation is to wipe the page and zero (0) means not to wipe the page. Sixteen page counters or registers with a subtractor are also provided.
The Wiping Advisor has a process with four STEPs ONE, TWO, THREE and FOUR. STEP ONE handles the First, Second and Third Process and structure areas and sets up priority encodings for each page for the Fourth Process and structure area. STEPS TWO, THREE and FOUR complete the Fourth Process and structure area above.
STEP ONE
First Process and structure area: The Code Pages are differentiated from the Data Pages by TYPE[N] according to TABLE 1.
Code Pages come from Instruction Cache accesses and Data Pages come from Data Cache accesses, so the access signals to the caches are used to derive and enter the TYPE[N] register bits. Also, some software applications explicitly identify which pages are code pages and which pages are data pages. Kernel and/or SDP software may define data stack pages and data heap pages, and such available information is used by some embodiments according to the teachings herein.
Suppose the Code or Data page type information is not directly available, because the architecture does not have separate Instruction Cache and Data Cache and the application does not identify the pages. Then the Write access activity is suitably monitored regarding each page in order to determine in a proactive or preemptive way which page is most likely not a Code Page in the pool of Data Pages. If a page is written, then it is probably a Data Page and not a Code Page. The default configuration is that a page is a Data Page so that both read and write access tabulations are encompassed.
When Code Pages can be unambiguously identified, then differentiating Code from Data pages also confers control. When Code Pages are identified, security circuitry is suitably provided to automatically prevent Code pages from being modified or hacked on the fly while each Code Page is in Secure RAM. In cases where a Code Page is obtained from the Data Cache, the page is tabulated as a Data Page unless the application explicitly identifies it as a Code Page. Since Code Pages take less time to wipe in systems wherein Code Pages are read-only (Clean by definition), the Code Pages are assigned somewhat higher priority to wipe in STEP FOUR than pages of similar Usage Level that are modified, for example.
TABLE 1: PAGE TYPE BITS TYPE[N]
BITS   MEANING
0      Data Page
1      Code Page
Second Process and structure area: In FIGS. 9-11, a register WR codes a field WR[N] where one (1) signifies at least one write to page N and zero (0) signifies that the page has not been written. This register field WR[N] implements the time-consumption consideration that a page that has not been written takes less time to wipe, since the page need not be, and suitably is not, written back to external memory. The register field WR[N] is suitably reset to zero (0) by having zero (0) written into it by the Peripheral Port. TABLE 2 describes the meaning of different values of register field WR[N].
TABLE 2: CODES SIGNIFYING WRITE OR NOT TO PAGE N
WR[N]   DESCRIPTION
0       No write to Page N, called a Read Page
1       One or more actual writes have occurred to Page N, called a Write Page or Dirty Page
In some embodiments the Second Process and structure area considers the characterization of a page as a Read Page to be a matter of initial assumption that needs to be checked by the circuitry. In this approach, when a page is detected to be potentially a Read Page, then a drain of the Write Buffer and a Clean Data Cache Range (applied only to the respective 4K page being processed) is used to determine whether the Read Page assumption was correct. If the Read Page assumption is confirmed, then when the page is selected for wiping, the page is wiped out simply by the ADV register bit entry and/or subsequent overwriting. There is no need to execute a FIG. 6 Swap Out in the meantime by write-back to the external memory. If the Read Page assumption is disconfirmed, then the page is written back to the external memory as described in connection with FIG. 6.
In FIG. 9, an SSM Dirty Bits Checker 2710 monitors each 4K page N in the Secure RAM and detects any write access to each respective page N. The status of each page N is flagged in the WR[N] bit of register WR. The status of each page N is cleared by the secure demand pager SDP circuitry by writing zeroes into that register WR either from circuit 2710 or from processor 1030 over bus 2745.
Some embodiments have write-back cache and other embodiments have write-through cache. Write-through cache may work somewhat more efficiently since an L2 (Level 2) cache can retain substantial amounts of data before a random cache line eviction happens in Write-back mode.
Next, various signal designators are used in connection with the SDP coupling to busses. The signal designators are composites built up from abbreviations and interpreted according to the following Glossary Table.
GLOSSARY TABLE
ABBREVIATION   REMARKS (also subject to explanation in text)
A              Address
ADDR           Address
CLK            Clock
EN             Enable
I              Instruction (bus)
N              Page Number
PROT           Protected, Secure
R              Read
READY          Ready
RW             Read/Write
SEC            Secure
VALID          Valid
W              Write
WR             Write
In FIG. 9, processor buses INST bus, DATA bus, and bus 2745 have READ_CHANNEL (data and instruction fetch load) and WRITE_CHANNEL (data write) signals. These signals are useful to SDP 1040, such as those signals listed below.
READ CHANNEL and WRITE CHANNEL:
ACLK: Main Clock
ACLKEN: Used to divide the Main clock to create the bus clock (generally the core is at 400 MHz and the bus clock at 200 MHz)
READ_CHANNEL:
ARVALID: When High, the address on the bus is valid
ARPROT: Indicates if this transaction is Secure/Public; User/Supervisor; Data/Opcode
ARADDR: Address requested
WRITE_CHANNEL:
AWVALID: When High, the address and data on the bus are valid
AWPROT: Indicates if this transaction is Secure/Public; User/Supervisor; Data/Opcode
AWADDR: Address requested
Some processor architectures, such as a true Harvard architecture, have separate busses for Data Read, Data Write, and Instructions (Opcode Fetch). READY signals AWREADYRW, ARREADYRW, ARREADYI pertain to data-valid signals on different buses. ARREADYI is HIGH when a slave has answered or hand-shaked the read data, indicating data valid, on an Instruction bus to the RISC processor. ARREADYRW is HIGH when a slave has answered or hand-shaked the read data, indicating data valid, on a Data Read bus to the RISC processor. AWREADYI is HIGH when the write data on a Data Write bus is valid, indicating data valid, to a Slave. In various architectures, one bus may carry one, some or all of these types of data, and the appropriate ready signal(s) is provided.
The pages are aligned on a 4K-byte boundary. One embodiment example repeatedly operates on all of a set of pages N from 0 (0000 binary) to 15 (1111 binary) and concatenates the four bits representing a page number N (0 to 15) so that all sixteen page addresses PAGE_0_BASE_ADDR, PAGE_1_BASE_ADDR, ... PAGE_15_BASE_ADDR are loaded with a respective base address value START_SECRAM[31:16] concatenated with the four binary bits of index N as the offset from that base address and identifying each respective page N = 0, 1, ...15. Each of these base addresses is respectively designated PAGE_N_BASE_ADDR. Next, the process generates the truth value of the following expression:
(AWVALIDRW=1 and AWREADYRW=1 and ACLKENIRW=1 and AWPROTRW[2]=0).
If the expression is not true, then for N=0 to 15, a temporary register for holding information pertaining to each attempted page access has a respective bit zeroed, PAGE_N_WR=0, for every page N.
If the expression is true, then for the accessed page number N, both the temporary register bit is set, PAGE_N_WR=1, and the Dirty/Clean register bit for the accessed page is set, WR[N]=1, provided PAGE_N_BASE_ADDR = AWADDRRW[31:12]. The temporary register bits PAGE_N_WR for all the other fifteen pages are zeroed.
In words, the SDP hardware 1040 monitors for the instance when not only the high 16 bits of PAGE_N_BASE_ADDR are equal to the high 16 bits of AWADDRRW[31:16], but also the next 4 page-specific bits of PAGE_N_BASE_ADDR are equal to the next 4 page-specific bits AWADDRRW[15:12] signifying the page to which PAGE_N_BASE_ADDR pertains. On Swap In, the high 16 bits are written to PA2VA of FIG. 4 and indexed by the next 4 page-specific bits. A match indicates a Write to Page N, which makes Page N Dirty. When a match happens, the Dirty/Clean register bit WR[N] pertaining to page N is set to one (1) (Dirty). The Dirty/Clean register WR is not modified at any other bit position at this time since Dirty indications for any other page should be remembered and not disturbed.
Third Process and structure area: Statistics on frequency of use of each page N are kept as 2-bit values designated STAT in registers 2740 of FIG. 9 and as depicted in FIGS. 10 and 11. These statistics are examples of prognostic registers or register values. STAT identifies which pages are used more frequently than others so that paging will be less likely to wipe out a more frequently used page. For example, when the coding represents a VERY LOW usage condition for a page, that page is a good candidate for wiping.
To conserve real estate, modulo-2 counters are used in an embodiment, as follows. In FIG. 9, a Usage Level Encoder in Usage Level Builder 2720 encodes Page Counter values according to TABLE 3 so that each of the values is tiered and loaded into a modulo-2 two-bit counter called STAT. TABLE 3 shows how the page counter values are compressed into STAT 2-bit values. In this example, and without limitation, sixteen 4K pages N have their two-bit statistics recorded in sixteen two-bit entries in a 32-bit STAT register STAT[31:0]. Each two-bit entry is designated as STAT[2N+1:2N] for page number N running from 0 to 15.
TABLE 3: STATISTICS CONVERSION
PAGE N ACCESS COUNTER RANGE   STAT[2N+1:2N] ENCODED VALUE   USAGE LEVEL MEANING
0                             00                            VERY LOW
1 to 47                       01                            LOW
48 to 95                      10                            MEDIUM
96 to 127                     11                            HIGH
Note that variation of the boundaries like 48 and 96 is readily made in various embodiments. Ranges of variation for the low boundary (0, 1, 48, 96 in TABLE 3) of each Usage Level are essentially as low or as high as the counter range, depending on the selection by the skilled worker. The high boundary (0, 47, 95) is set one less than the low boundary of the next higher Usage Level, so that all possible counter values are assigned to some Usage Level. The number of Usage Levels, when used, is suitably at least two, without an upper limitation on the number of Usage Levels, and is ordinarily less than nine for inexpensive circuitry. In this example, four Usage Levels were adopted.
The highest counter value (e.g., 127 here) suitably has no upper limit, but most applications will empirically have some counter value below which a predetermined percentage (e.g., 90% for an upper counter value being one less than a power-of-two) of processor runs lie. The highest counter value can be increased or decreased according to the number of pages available for SDP in the internal Secure RAM.
Then, the counter is suitably established to have that counting capacity. If the counter does reach its hardware upper limit, the counter suitably is made to saturate (remain at the upper limit) rather than rolling over to zero, to avoid confusing Usage Levels with each other. For most implementations, a counter capacity between 31 and 1023 appears practical and/or sufficient.
The Usage Levels in this example have lower count boundaries (e.g., 0, 1, 48, 96) chosen so that all but the lowest Usage Level divide the counter range into ranges some of which are approximately equal (here, two ranges are 48 counts wide). Other embodiments set the ranges differently. One way is setting some of the upper or lower range boundaries logarithmically, such as approximately 1, 4, 16, 64. Another approach uses 0, 32, 64, 96, or some of those values, and directly loads the two MSB bits of the counter as a Usage Level to register STAT. Another embodiment determines the ranges to improve execution of known software applications by empirical testing beforehand and then configures the Usage Level Encoder with the empirically determined ranges prior to use. Still another embodiment in effect does the empirical testing in the field and dynamically learns as the applications actually execute in use in the field, and then adjusts the boundaries to cause the wiping Advice to keep the execution time and power dissipation very low.
The counting operations help avoid prematurely swapping out a newly swapped-in page. Swap is executed when a page fault occurs, which means an attempted access is made to a missing page. SDP software uses Page Wiping Advisor 2730, in the hardware of FIG. 9 added to the Secure State Machine SSM, to identify a page slot when the space in Secure RAM for physical pages is full, and in some embodiments under other conditions as well. If the old page in the identified page slot has been modified, SDP securely swaps out the old page as shown in FIGS. 4, 6, 7 and 8. Then SDP software swaps in a new page in secure mode as shown in FIGS. 4, 5, 7 and 8, and thereby replaces the old page in the page slot.
Some embodiments have control software to activate, access and respond to the SDP hardware of FIG. 9. In some embodiments, that control software is suitably provided as a space-efficient software component in Secure ROM that is patch-updatable via a signed Primary Protected Application. See coassigned, co-filed U.S. non-provisional patent application TI-38213 "Methods, Apparatus, and Systems for Secure Demand Paging and Other Paging Operations for Processor Devices" U.S. Ser. No. I, which is incorporated herein by reference.
Register group 2740 has the WR register, used in conjunction with Page Access Counters 2845 of FIGS. 10 and 11, to determine when Page slots with a page currently mapped are Dirty (modified in Secure RAM after Swap In). Often, the Swap In occurs, but with Swap Out of the old page from the slot being omitted. Swap Out is omitted, for instance, for old code pages when self-modifying code is not used. The virtual slots (where potential physical pages can be mapped) might change one time for code, and that is when the code is loaded.
For a Clean page, the Dirty/Clean WR register information provides a preventive signal to bypass Swap Out and thereby avoid the cost of a Swap Out to wipe or steal that page. Then a Swap In is performed into the Page slot occupied by the no-longer-needed Clean page.
In other words, Swap In from DRAM of a new page to be accessed writes over an old Clean page residing in a page slot identified by Page Wiping Advisor 2730. Time and processing power are efficiently used in this process embodiment by Swapping Out a page selected for wiping specifically when that page is Dirty, prior to the subsequent Swap In of a new page.
The data for a secure environment is found to be much smaller in space occupancy than code for many secure applications. Some secure applications like DRM (Digital Rights Management) do use substantial amounts not only of secure code but also of secure data, such as encrypted data received into non-secure DRAM. The secure environment decrypts encrypted data and puts the resulting decrypted data back into DRAM while keeping the much smaller amount of data represented by key(s) and/or base certificate structure for the DRM in the secure environment. Selective control to Swap Out a page selected for wiping specifically when that page is Dirty, and to apply a bypass control around Swap Out when the wiped page was Clean, still saves time and processing power.
Then the page fault error status is released and an access on the previously missing page occurs and is completed, since that page is now swapped in. The SDP hardware of FIG. 9, when receiving an access, updates its internal counter corresponding to this page with the highest value (e.g., 127), which ranks the page HIGH, and consequently this page is given the last or lowest priority to be wiped out at this point.
The Third Process and structure area builds statistical information on the usage of each page in order to help the SDP Page Wiping Advisor 2730 choose the right page to wipe out. The statistical information helps avoid wiping out pages that are currently in use or being used more than the appropriate page to wipe out, all other things being equal.
The Usage Level Builder 2720 builds a Usage Level for each page by detecting any read or write access occurring on each page. Some embodiments do not differentiate a burst access from a single access for this detection operation. The SSM sends the access detection to Usage Level Builder 2720. Usage Level Builder 2720 outputs, for example, two (2) bits of statistical information that is encoded based on TABLE 3 to statistics register STAT. Statistics register STAT is accessible to the SDP (secure demand pager) software executing on RISC processor 1030 or other processor. Note that the statistical information provided by the STAT register may not always be precisely accurate due to a high amount of cache hits that might occur in an L2 cache in some cache architectures. However, the information is indicative and sufficiently accurate for the SDP.
In FIG. 9, the SSM Usage Level Builder 2720 has a set of Page Access Counters 2845 of FIGS. 10 and 11. Those counters include, for example, a seven (7) bit counter (0-127) for each 4K-byte page N. When page N is accessed, the respective Nth page counter is set to 127, and all other page counters are decremented by one. In operation, the counters of the currently accessed pages have higher count values closer or nearer to 127, and no-longer-used pages have counter values close to zero. In other words, the Page Access Counters 2845, in effect, keep a reverse count of non-uses of each page by decrementing down so that a more-unused page has a lower counter value, in one example, than a less-unused page.
In this way both recency of access and frequency of use work together, for a page that should not be wiped, to keep the counter value high.
The counters are reset by resetting the particular Page Access Counter in counters 2845 that pertains to the particular page slot that is wiped at any given time. That particular Page Access Counter is reset to 127 (all ones), for example, and the corresponding STAT register bit pair is reset to "11" (HIGH) for the particular physical page slot that is wiped. The counts for other pages need not be reset since those counts still are meaningful. For example, another little-used page that has a count that has been repeatedly decremented down to a low value, and wherein that page has not yet been wiped, need not have its Page Access Counter value reset to 127 when some other little-used page has just been wiped. Other embodiments suitably use the opposite end of the range and/or other policies to efficiently differentiate pages to wipe from pages not to wipe.
An alternative embodiment operates oppositely to that of TABLE 3, and sets the counter for page N to zero each time page N is accessed. The counters for all the other pages are incremented by one. The values of TABLE 3 are reversed in their meaning, and the results of operation are similar. Operations thus establish count values for recently accessed or more frequently used pages nearer to one end of the count range than count values for hardly-used pages. Another alternative embodiment initializes the counters to zero and increments a counter pertaining to a page when that page is accessed to keep a statistic. Counters for pages that were not accessed have their values unchanged in this alternative. Elapsed time from a time-stamp time pertaining to the page is combined with the counter information to indicate frequency of use. Numbers representing tiers of elapsed time and tiers of counter values are suitably combined by logic to indicate frequency of use. A page that has recently been swapped into Secure RAM and therefore has few accesses is thus not automatically given a high priority for wiping just because its usage happens to be low. In other words, recency of entry of a new page is taken into account.
In some embodiments, when Secure RAM has empty page slots, the empty page slots are used as Swap In destinations before wiping and/or Swapping Out any currently resident pages. The Page Access Counters 2845 are utilized for tracking code and data pages separately in some embodiments, to bump (i.e., increment or decrement) a respective counter for each page and page type. Some embodiments keep statistics in several counters, e.g., three (3) counters designated NEW, OLD, and OLDER. On each page fault, the three aged counters are updated on a per-virtual-address-slot basis. Counter NEW holds the latest count. Counter OLD gets an old copy of counter NEW. Counter OLDER gets an older copy of counter NEW. In another embodiment, a weighting system is applied to the three aged counters for dynamically adjusting operations of the page wiping adviser. Some embodiments provide such counters representing separate ranges of age of page since last access. Some embodiments provide additional bits to signify saturation and/or roll-over of the counter, and an interrupt is suitably supplied to RISC processor 1030 to signal such a condition.
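As an illustrative summary of the baseline counter policy and the TABLE 3 compression described above, a minimal C sketch follows. The array and function names are illustrative only and do not name actual SDP hardware.

/* Baseline policy: the accessed page's counter is set to 127 and all other
 * counters decrement toward zero (maintained at zero if already zero). */
#include <stdint.h>

#define NUM_PAGES 16
#define COUNT_MAX 127

static uint8_t page_count[NUM_PAGES];

void note_page_access(unsigned accessed)
{
    for (unsigned n = 0; n < NUM_PAGES; n++) {
        if (n == accessed)
            page_count[n] = COUNT_MAX;
        else if (page_count[n] > 0)
            page_count[n]--;
    }
}

/* TABLE 3 compression: 0 -> VERY LOW (00), 1..47 -> LOW (01),
 * 48..95 -> MEDIUM (10), 96..127 -> HIGH (11). */
uint8_t stat_encode(uint8_t count)
{
    if (count == 0)  return 0x0;
    if (count < 48)  return 0x1;
    if (count < 96)  return 0x2;
    return 0x3;
}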
The SDP hardware 1040 generates an interrupt each time a page access counter reaches zero (0), and does not wait for the application program running in SDP to generate a page fault and then service the page fault. A process determines which new page to import, such as by loading a page that is adjacent in virtual address space to a currently-loaded high-usage page, by pre-decoding from an instruction queue, or by another appropriate new-page identification mechanism. Interrupt architecture for SDP hardware 1040 thereby obviates continual statistics-management monitoring or polling by RISC processor 1030 (also called tight coupling). A still further variant of the aged-counters approach utilizes a secure timer interrupt, in secure mode when SDP is in use, to vary the frequency of reading the three aged counters. Thus, a variety of interrupt-based SDP embodiments of hardware 1040 are contemplated as well as polling embodiments.
A statistics register for each virtual page slot can be provided in some embodiments because the virtual slots (in secure virtual memory) are the relevant address space to operations of application code. In other embodiments, Page Access Counters 2845 are kept low in number by having them correspond to the Secure RAM physical pages, which map to any virtual slot in general. Also, a single counter circuit here coupled to several count value registers helps keep real estate small. Even though the virtual slots are the relevant address space to the application, the physical page hardware statistics are nevertheless efficiently maintained in some embodiments on a physical page-slot by page-slot basis. This approach represents and handles a lot of data when the virtual address space is very large, without need of a statistics register for each virtual page slot. In this physical page statistics approach, physical pages are accessed if mapped into some page slot of the virtual address space. The status registers need only track the much smaller number of physical pages.
The Page Access Counters 2845 pertain to each physical page in FIG. 10. The SSM monitors the physical bus, and the SSM need not be coupled to the MMU mapping. Page Access Counters 2845 rank the usage of the physical page slot in Secure RAM in order to determine the type (Clean, read; or Dirty, write) and Usage Level of the page by the SSM tracking each access going from the MPU to Secure RAM.
In FIG. 4, SDP Manager uses virtual memory contexts VMC to context-associate the physical pages to the much larger number of virtual slots and maintain their relevance and association on a per page-slot basis in the virtual address space. Put another way, the physical page statistics data is instanced to a context (VMC) which is associated to each virtual slot where a physical page, at some point in time, is or has been mapped; the statistics track physically what occurred to that physical page, but only while it was mapped into that slot. When the physical page is wiped and/or Swapped Out, such as to free physical space for a new page, the physical statistics counters are cleared, as described hereinabove, because they are no longer relevant to where the page is newly mapped into the virtual address space. New counts are then added to statistics maintained on a virtual slot basis.
Suppose the page wiping adviser circuitry makes decisions based upon what the application does in the virtual address space, and not the physical address space.
The physical address space is irrelevant to operations of the application over a longer period of time wherein the physical pages have been dynamically re-assigned to many virtual address slots over that longer period of time. The application program's micro-operational reads/writes to memory are thus tracked by physical page of Secure RAM. Suppose the scavenging decision to wipe a page is based upon virtual page slots (4K each) comprising the entire virtual memory space. Therefore, in the aged-counters approach, each page slot (4K) is associated with three different aged counters. Each DRAM backing page, in effect, is supported by these counters, because a linear one-to-one mapping (an array) relates DRAM backing pages to slots in the virtual address space and thus reduces complexity. Those counters are suitably kept and maintained encrypted in the non-secure DRAM, as part of the other DRAM backing page statistics.
Another category of embodiments considers, in a greater SDP context, the history of what an application program accesses in the larger virtual address space. For these embodiments, the history in virtual address space is regarded as more important than what the application does in the physical address space (pages mapped into slots of the virtual space) when relating to scavenging operations. A page that has not been dirtied (modified) since the last Swap Out of data in the virtual slot where that page is currently mapped is a more efficient candidate for stealing than a dirty page. Statistics maintained on access to a given physical page are important when related to the context of where the physical page is mapped into the virtual address space. That is because the application's actions/accesses with its underlying memory are in the virtual address space, not the physical space, which is being used dynamically in a time-sliced manner to create the larger virtual address space. Therefore, if hardware like that of FIG. 9 monitors accesses to physical pages to produce its statistics, a further variation keeps and relates the statistics on physical pages in the larger context of information relating to the virtual address space. Accordingly, before the page is wiped or stolen and moved to a different supporting virtual address slot, the information is retrieved from hardware statistics register STAT and saved into a software-maintained statistics data structure that is indexed based on the related and corresponding virtual address slot that produced the statistics data.
Returning to the physical page approach of FIG. 9, in order to ensure that the counters are not corrupted by the SDP software which may be resident in Secure RAM, the secure registers 2740 include a page activity register ACT. This page activity register ACT allows disabling of usage level monitoring for any page of the page pool as described next.
TABLE 4: CODES SIGNIFYING ACTIVATION OF USAGE LEVEL MONITORING
ACT[N]   DESCRIPTION
0        Usage Level Monitoring not activated
1        Usage Level Monitoring activated
When ACT[N] is High (1), page N is taken into account in updating Usage Level register STAT and Page Wiping Advisor register ADV. In FIG. 9, the SSM Usage Level Builder 2720 handles the 7-bit page counters as follows.
The process generates the truth value for the following expression, pertaining to a Read bus, analogous to an expression from hereinabove:
(ARVALIDRW=1 and ARREADYRW=1 and ACLKENIRW=1 and ARPROTRW[2]=0).
If the expression is not true, then for N=0 to 15, each respective read bit is zeroed in another temporary register, PAGE_N_RD=0. If the expression is true, then for each page number N, those temporary register bits are respectively zeroed except for setting PAGE_N_RD=1, meaning the temporary register bit pertaining to the page N that was read. Which page N was read is determined by finding the N that produces a match PAGE_N_BASE_ADDR = ARADDRRW[31:12]. On Swap In, the high 16 bits are written to PA2VA of FIG. 4 and indexed by the next 4 page-specific bits. This indication PAGE_N_RD=1 is useful for adjusting the corresponding Page Access Counter in counters 2845 by indicating that the page N has this additional instance of being used. Notice that the process is repeated analogously to the process from hereinabove except that a Read bus is involved instead of a Write bus. "AR" instead of "AW" is the letter pair prefixed to additional variables involving Valid, Ready, and Protected.
Next, for each page N from 0 to the top page, the process generates the truth value for the following expression, for an Instruction bus, analogous to an expression from hereinabove:
(ARVALIDI=1 and ARREADYI=1 and ACLKENIRW=1 and ARPROTI[2]=0).
If the expression is not true, then for N=0 to 15, in another temporary register each respective page bit PAGE_N_I is zeroed (PAGE_N_I=0). If the expression is true, then for the accessed page number N, both the temporary register bit is set to one, PAGE_N_I=1, and the TYPE register bit for the accessed page is set, TYPE[N]=1, provided PAGE_N_BASE_ADDR = ARADDRI[31:12]. On Swap In, the high 16 bits are written to PA2VA of FIG. 4 and indexed by the next 4 page-specific bits. The temporary register bits PAGE_N_I for all the other fifteen pages are zeroed.
In words, the SDP hardware 1040 monitors for the instance when not only the high 16 bits of PAGE_N_BASE_ADDR are equal to the high 16 bits of ARADDRI[31:16] on the Instruction bus, but also the next 4 page-specific bits of PAGE_N_BASE_ADDR are equal to the next 4 page-specific bits ARADDRI[15:12] signifying the page to which PAGE_N_BASE_ADDR pertains. A match indicates an instruction access to Page N, which makes Page N a Code page. When a match happens, the TYPE register bit TYPE[N] pertaining to page N is set to one (1) (Code Page). The TYPE register is not modified at any other bit position at this time since current Type indications for any other page should be remembered and not disturbed. The process just above is repeated analogously to the process from hereinabove except that PAGE_N_I and the TYPE register are involved instead of WR[N], and "I" for Instruction bus instead of "RW" is the suffix in further additional variables involving Valid, Ready, and Protected.
The activated pages are prevented from being corrupted by accesses to not-activated pages by virtue of a respective access-valid register bit PAGE_N_ACCESS_VALID for each page N.
Specifically, for each respective page N, that access-valid register bit is generated as follows:
PAGE_N_ACCESS_VALID = (PAGE_N_WR OR PAGE_N_RD OR PAGE_N_I) AND ACT[N].
Note that the letter "N" represents a page index in each of the five register bits of the above equation. Next, if PAGE_N_ACCESS_VALID is valid for any page (determined by OR-ing the access-valid register bits for all pages), then the page counter for the page N for which access is valid is set to 127, and all other page counters are decremented by one or maintained at zero if already zero. In this way, even a single isolated instance of a page access is sufficient to confer a HIGH Usage Level on the page for a while.
Some other embodiments instead add a predetermined value to the page counter for page N. For example, the predetermined value can be equal to half the counter range or some other value. The page counter is structured to saturate at 127 and not roll over if the result of the addition exceeds the high-end value (e.g., 127). In this way, more than a single isolated instance of a page access is required to regain the HIGH Usage Level. The SSM Usage Level Builder 2720 of FIG. 9 encodes the page counters according to TABLE 3. The page counters for all pages are updated as described each time an access occurs on one of the activated pages. Also, the statistics value STAT for a respective page is updated each time an access occurs on one of the activated pages.
Fourth Process and structure area: The fourth process and structure area computes from the first, second, and third process and structure area results to determine which page N to wipe out. The result of the fourth process and structure area is accessible by the secure demand pager SDP via the secure register ADV according to TABLE 5.
TABLE 5: CODES SIGNIFYING RECOMMENDATION TO WIPE A PAGE
ADV[N]   DESCRIPTION
0        Page N must not be wiped out
1        Page N should be wiped out
When only one single bit ADV[N] is High, then page N has been identified to be the best choice for wiping. Note that the ADV register can have more than one bit high at a time, as described next hereinbelow. Also, when no ADV bits are high, such as when every page has a HIGH Usage Level and low priority for wiping, then the SDP randomly or otherwise appropriately selects a page for wiping. When the various registers ACT, TYPE, WR, STAT, and ADV are reset, such as on power up, warm reset, or new Virtual Machine Context (VMC), all the bits in those five registers are suitably reset to zero. Those five registers are suitably provided as secure registers protected by the SSM, and a Secure Supervisor program is used to access registers ACT, TYPE, WR, STAT and ADV in Secure Mode.
In FIG. 10, the coding concatenates STAT[2N+1:2N] & TYPE[N] & WR[N] to create a Concatenation Case Table 2850 having row entries for each page N. A design code case statement involving this concatenation is also represented by the expression
CASE STAT[2N+1:2N] & TYPE[N] & WR[N]
A row entry in this example has four bits entered therein. In other embodiments, more or fewer bits with various selected meanings are suitably used in an analogous way. For example, if a Page has a row 4 entry "1001" in Concatenation Case Table 2850, it means that Page 4 is characterized by {"10" - MEDIUM usage, "0" - Data Page, "1" - Dirty Page}. For a second example, if a Page has a row 5 entry "0110" in Concatenation Case Table 2850, it means that Page 5 is characterized by {"01" - LOW usage, "1" - Code Page, "0" - Clean Page}.
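A minimal C sketch of forming one Concatenation Case Table row from the STAT, TYPE and WR register bits, consistent with the two examples just given, follows. The register-passing convention is illustrative only.

/* Form the 4-bit case value STAT[2N+1:2N] & TYPE[N] & WR[N] for page n,
 * e.g., STAT=10, TYPE=0, WR=1 yields "1001" (MEDIUM, Data, Dirty). */
#include <stdint.h>

uint8_t case_entry(uint32_t stat_reg, uint32_t type_reg, uint32_t wr_reg,
                   unsigned n)
{
    uint8_t stat = (stat_reg >> (2 * n)) & 0x3;  /* 2-bit Usage Level */
    uint8_t type = (type_reg >> n) & 0x1;        /* 1 = Code, 0 = Data */
    uint8_t wr   = (wr_reg   >> n) & 0x1;        /* 1 = Dirty, 0 = Clean */
    return (uint8_t)((stat << 2) | (type << 1) | wr);
}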
The Concatenation Case Table 2850 is suitably regarded as a collective designation for the STAT, TYPE, and WR registers in some embodiments; it can be a separate physical table in other embodiments. The Page Access Counter(s) 2845 supplies an identification of an applicable one of a plurality of usage ranges called Usage Levels (VERY LOW, LOW, MEDIUM, HIGH) in which the usage from the Page Access Counter lies, to form the Usage Level bits according to TABLE 3 for the STAT register in Concatenation Case Table 2850.
In FIG. 10, the Concatenation Case Table 2850 is then converted to a 9-bit field by table lookup from TABLE 6, or by conversion logic implementing TABLE 6 directly, to supply each entry to a Priority Sorting Page 2860.
TABLE 6: ENCODINGS FOR PRIORITY 9-BIT FIELD
CONCATENATION   PRIORITY 9-BIT VARIABLE    MEANING
STAT/TYPE/WR    PRIORITY_SORTING_PAGE
0010            000000001                  CODE page, VERY LOW usage
0110            000000010                  CODE page, LOW usage
0000            000000100                  DATA READ page, VERY LOW usage
0100            000001000                  DATA READ page, LOW usage
0001            000010000                  DATA WRITE page, VERY LOW usage
0101            000100000                  DATA WRITE page, LOW usage
1010            001000000                  CODE page, MEDIUM usage
1000            010000000                  DATA READ page, MEDIUM usage
1001            100000000                  DATA WRITE page, MEDIUM usage
11xx            000000000                  Page should not be wiped out
xx11            N/A                        N/A, not used where code page is read-only
For example, a Code page with VERY LOW usage has the highest priority for wiping, and all other items have decreasing priority in order of their listing. A Data Write page is a Dirty page, which is lower in priority than other pages, other things equal. This is because the Dirty page, if selected for wiping, is Swapped Out, and SDP Swap Out involves overhead of Encryption and Hash for security, which can sometimes be avoided by setting the priority lower.
The TABLE 6 conversion assigns a higher page priority for wiping to a page that is unmodified (Clean) than to a page that has been written (Dirty), other things equal. A higher page priority for wiping is assigned to a code page than to a data page, other things equal. A higher page priority for wiping is assigned to a lower-usage page than to a higher Usage Level page (see STAT register TABLE 3), other things equal.
A 9-bit page priority for wiping is assigned in operations according to TABLE 6. The TABLE 6 priorities represent that, for at least one Usage Level of TABLE 3, a code page in that Usage Level has a higher priority than an unmodified data page in the next lower Usage Level. For example, a CODE page with LOW usage has a higher priority than a DATA READ page with VERY LOW usage. In the example of this paragraph, this prioritization is established mainly because there is an uncertainty whether a DATA READ page may become a DATA WRITE page after a write-back drain and cache flush. By contrast, pages identified CODE are sure not to become a DATA WRITE page in this example wherein an assumption of no modification to code pages is established. Similarly, a page priority for wiping is assigned wherein, for at least one Usage Level, an unmodified data page in that Usage Level has a higher priority than a written data page in the next lower Usage Level. For example, a DATA READ page with LOW usage has a higher priority than a DATA WRITE page with VERY LOW usage.
Different embodiments suitably provide other types of priority codings or representations, such as binary, binary-coded-decimal, etc., than the 9-bit singleton-one position-coded priority (or its complement) shown in the example of TABLE 6, even if the listed hierarchy of tabulated MEANINGs remains the same.
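Purely as an illustration of the TABLE 6 conversion just described, the following C sketch maps the 4-bit STAT/TYPE/WR case value to the 9-bit singleton-one priority field; the function name is hypothetical.

/* TABLE 6 lookup: bit 0 is the highest wiping priority. HIGH-usage pages
 * (11xx) and the unused Code/Dirty combination (xx11) map to all zeroes. */
#include <stdint.h>

uint16_t priority_9bit(uint8_t case4)
{
    switch (case4) {
    case 0x2: return 1u << 0;  /* 0010: CODE page, VERY LOW usage  */
    case 0x6: return 1u << 1;  /* 0110: CODE page, LOW usage       */
    case 0x0: return 1u << 2;  /* 0000: DATA READ page, VERY LOW   */
    case 0x4: return 1u << 3;  /* 0100: DATA READ page, LOW        */
    case 0x1: return 1u << 4;  /* 0001: DATA WRITE page, VERY LOW  */
    case 0x5: return 1u << 5;  /* 0101: DATA WRITE page, LOW       */
    case 0xA: return 1u << 6;  /* 1010: CODE page, MEDIUM usage    */
    case 0x8: return 1u << 7;  /* 1000: DATA READ page, MEDIUM     */
    case 0x9: return 1u << 8;  /* 1001: DATA WRITE page, MEDIUM    */
    default:  return 0;        /* 11xx or xx11: do not wipe        */
    }
}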
In still other contemplated embodiments, the hierarchy of MEANING is revised to establish other practically useful priority orderings and more or fewer priority codes depending on experience with different numbers of Usage Levels, Types, Dirty/Clean conditions, and fewer or additional such variables.
In FIGS. 9 and 10, the ADV register has a respective bit set high corresponding to each of the pages that have hit the highest priority (000000001) in the sorting scheme of TABLE 6. Thus, the ADV register may have more than one bit set high for pages that all have the same priority level (i.e., all zero or all four). The SDP mechanism consequently chooses the page that is the most suitable for its internal processing without any added distinction required. In such a case SDP suitably is arranged to randomly select one Page, for instance, or to perform a predetermined selection (e.g., choose the highest 4-bit physical page number N) implemented in an inexpensive circuit structure.
In FIG. 9 a Page Wiping Advisor 2730 is suitably provided as represented by hardware and operations on a Priority Sorting Page 2860 of FIG. 10 as follows. For each page N, enter a respective 9-bit value of PRIORITY_SORTING_PAGE_N[8:0] as follows. If page activity ACT[N] is zero, then set PRIORITY_SORTING_PAGE_N[8:0] to zero for that page N. If page activity ACT[N] is one, then set PRIORITY_SORTING_PAGE_N[8:0] to the nine-bit value from TABLE 6 representing the page Type TYPE[N], Dirty status WR[N], and its Usage Level STAT[N].
Next, the process considers each of the bit-columns of Priority Sorting Page 2860 in FIG. 10. For example, nine such bit-columns are indexed from column zero (0), high wiping priority, on the right to column 8, low wiping priority, on the left. In other words, column zero (0) represents "CODE page, VERY LOW usage," the highest priority for wiping in TABLE 6, if a one (1) entry is in column zero (0). Columns 1, 2, 3, ...8 in Priority Sorting Page 2860 respectively have the successively lower priority Meanings tabulated vertically in TABLE 6 down to low priority 8, "DATA WRITE page, MEDIUM usage." A singleton one (1) is entered in page-specific rows of Priority Sorting Page 2860 to represent the priority assigned to each active physical page that has less than HIGH Usage Level in Secure RAM 1034 governed by SDP.
Priority Result 2870 is loaded with OR-bits in the following manner. The bits entered in a given column of Priority Sorting Page 2860 are fed to an OR-gate or OR-process. These bits in a given column of Priority Sorting Page 2860 correspond to the pages from 0 to total number N. Priority Sorting Page 2860 is an N-by-9 array or data structure in this example. The OR operation is performed on each of the nine columns of Priority Sorting Page 2860 to supply nine (9) bits to Priority Result 2870 of FIG. 10 as represented by design pseudocode here:
PRIORITY_RESULT[0] = PRIORITY_SORTING_PAGE_0[0] OR ... OR PRIORITY_SORTING_PAGE_N[0]
...
PRIORITY_RESULT[8] = PRIORITY_SORTING_PAGE_0[8] OR ... OR PRIORITY_SORTING_PAGE_N[8]
Next, for each page N, the process successively looks right-to-left, by an IF...ELSE IF...ELSE IF successively-conditional sifting structure and procedure (in CAPS hereinbelow), for the highest priority value (right-most one) for which PRIORITY_RESULT[] 2870 is a one. This successively-conditional procedure moves column-wise in Priority Result 2870 from right to left.
As soon as the first one (1) is found, which is the right-most one, the process pseudocode hereinbelow loads register ADV and falls through to completion. This right-most one in Priority Result 2870 identifies the corresponding column 2875 of Priority Sorting Page 2860. Column 2875 is significant for determining which page to wipe. The process loads the entire column 2875 of Priority Sorting Page 2860 into the Page Wiping Advice register ADV 2880 to establish the wiping advice bit entries in register ADV.
The Page Wiping Advice register ADV is thus loaded by the design pseudocode hereinbelow by concatenation of PRIORITY_SORTING_PAGE 2860 bits (pertaining to all the physical pages) in column 2875 based on that highest-priority right-most one position detected in Priority Result 2870. If the successive procedure fails to find a PRIORITY_RESULT bit active (1) for any of the priorities from 0 to 8, then operations load register ADV with all zeroes, and also zero a default bit named OTHERS.
IF PRIORITY_RESULT[0] = 1 THEN
  ADV = PRIORITY_SORTING_PAGE_0[0] & ... & PRIORITY_SORTING_PAGE_N[0]
ELSE IF PRIORITY_RESULT[1] = 1 THEN
  ADV = PRIORITY_SORTING_PAGE_0[1] & ... & PRIORITY_SORTING_PAGE_N[1]
ELSE IF PRIORITY_RESULT[2] = 1 THEN
  ADV = PRIORITY_SORTING_PAGE_0[2] & ... & PRIORITY_SORTING_PAGE_N[2]
...
ELSE IF PRIORITY_RESULT[8] = 1 THEN
  ADV = PRIORITY_SORTING_PAGE_0[8] & ... & PRIORITY_SORTING_PAGE_N[8]
ELSE
  ADV <= 0;
  OTHERS <= 0;
ENDIF;
In TABLE 6 and FIG. 10, the nine-bit Priority field has nine bits for singleton-one positions in this example because, conceptually, multiplying three times three is nine. The first conceptual "three" pertains to the number of types T of pages for TYPE[N] concatenated with WR[N]. In this example, the types of pages are 1) 10-Code Page, 2) 00-Data Read Page, and 3) 01-Data Write Page. Data Read does not necessarily mean a read-only limitation, just a page that has not been written (Clean) while in Secure RAM. The second "three" pertains to the three levels of Statistics STAT other than HIGH. Those three lower STAT levels are 1-Very Low, 2-Low, and 3-Medium.
In general, "L-1" (L minus one) is the number of Usage Levels represented in the Statistics register STAT less one for the highest Usage Level. High-usage pages get all-zeroes. Thus, the number of bits in each row of the Priority Sorting Page 2860 is the product (L-1)[(2^(T+W))-1], where T is the number of bits in the Type register TYPE per page, W is the number of bits in the Write or Dirty register WR per page, and "^" means raised-to-the-power. In this example wherein L is 4, T is one, and W is one, the number of bits in each row of Priority Sorting Page 2860 is nine.
In FIG. 10, each row of the Priority Sorting Page 2860 has nine bits among which is a singleton one, except all-zeroes for HIGH Usage Level pages and inactivated pages (ACT[N]=0). The singleton one can occupy any one of the nine bit positions, depending on the result from the Concatenation Case Table 2850. Nine zeroes (all-zeroes) in a row of the Priority Sorting Page 2860 means that a page is present (valid) and should not be wiped because usage is HIGH, or that a particular page N is not present (not valid, ACT[N]=0) in Secure RAM page space.
STEP TWO processes the Priority Sorting Page 2860 by doing a Boolean-OR on the bits in each column of Priority Sorting Page 2860. All the bits are ORed from the first column and the result is put in a corresponding first cell of a Priority Result vector 2870.
Similarly, all the bits are ORed from the second column of Priority Sorting Page 2860 and the result is put in the second cell of Priority Result vector 2870, and so on until all nine cells of Priority Result vector 2870 are determined. STEP THREE detects the position R of the cell having the right-most one in the Priority Result vector 2870. STEP FOUR then outputs a column 2875 of the Priority Sorting Page 2860 that has the same position R as the right-most one cell position detected in STEP THREE. Column 2875 of Priority Sorting Page 2860 is muxed out by a Mux 2878 and supplied to the Page Wiping Advice register ADV 2880. The bits in Priority Result 2870 constitute selector controls for the Mux 2878. The Mux 2878 has Nx9 inputs for the respective nine columns of Priority Sorting Page 2860 and an Nx1 output to couple the selected column to register ADV. In an alternative embodiment, the selected column (e.g., 2875) effectively acts as the Page Wiping Advice and is muxed directly to Page Selection Logic 2885.

In FIG. 10, Page Selection Logic 2885 has an output that actually wipes a page from Secure RAM and/or loads a page to Secure RAM. Page Selection Logic 2885 loads pages to Secure RAM as needed by the application until Secure RAM is full. Regardless of priority of existing pages in Secure RAM, in this example, if Secure RAM is not yet full, no existing pages are wiped since space remains for new pages to be loaded. When Secure RAM becomes full, then the Page Wiping Advice register 2880 contents are used. The Page Wiping Advice register feeds Page Selection Logic 2885. Each "one" in the Page Wiping Advice ADV 2880 signifies a page that can be wiped from a set of Pages 2890 currently residing in Secure RAM. Typically, there is just a single one (1) in the Page Wiping Advice ADV 2880. When Secure RAM is already full, then a single corresponding page is wiped from Pages 2890 in response to the "one" in the Page Wiping Advice ADV 2880.

Consider each interval during which the application executes by making read and write accesses to pages in Secure RAM without needing to load any new page from external memory. During each such interval, the Page Access Counters are continually updated with running counts of accesses respective to each Page[N]. Any instance of a write access to a page is used to update WR[N].

After Page Activity 2890 has been updated with each given instance of a page N being either loaded or wiped according to the Page Wiping Advice ADV 2880, then the STAT, TYPE, and WR registers (collectively, a Data register 2840) are updated. Page Access Counters 2845 is set to 127 in the particular counter corresponding to the new page. Register STAT is updated for each such new page from the Page Access Counters 2845 according to TABLE 3. Register TYPE is updated for the new page as Code or Data. Register WR is initially set to the Clean value respective to the new page. In some embodiments, the Concatenation Case Table 2850 is a separate structure and correspondingly updated, and in other embodiments, the registers STAT, TYPE and WR collectively constitute the Concatenation Case Table 2850 itself.

The Priority Sorting Page 2860 is also correspondingly updated, and the process of generating Priority Result 2870 and Page Wiping Advice ADV 2880 is repeated in this way continually to keep the Page Selection Logic 2885 fed with wiping advice from register ADV. A compact software model of this sorting flow is sketched next below.
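By way of a non-limiting illustration only, the sorting flow of STEPs ONE through FOUR may be modeled in software as follows. The Python listing below is a behavioral sketch with hypothetical names (page_wiping_advice, sorting_page) chosen purely for exposition; the embodiment itself is the register, OR-gate, and mux hardware described above.

# Behavioral sketch of the FIG. 10 priority sorting flow (hypothetical names).
# Each row of sorting_page is the 9-bit priority code for one physical page;
# column 0 is the highest wiping priority ("right-most"), column 8 the lowest.
# All-zero rows are HIGH-usage or inactive pages that must not be wiped.
def page_wiping_advice(sorting_page):
    # STEP TWO: OR the bits of each column into the Priority Result vector.
    priority_result = [int(any(row[col] for row in sorting_page)) for col in range(9)]
    # STEP THREE: find position R of the right-most one (highest priority first).
    for r in range(9):
        if priority_result[r]:
            # STEP FOUR: mux out column R as the ADV register contents.
            return [row[r] for row in sorting_page]
    return [0] * len(sorting_page)  # no candidate found: ADV all zeroes (OTHERS = 0)

# Example: with three pages where only page 2 holds the highest-priority code,
# page_wiping_advice(...) returns [0, 0, 1], i.e., ADV flags page 2 for wiping.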
With register ADV thus maintained, upon a Load Page active input due to a page fault occurrence, Page Selection Logic 2885 readily identifies which page to wipe. Page Selection Logic 2885 keeps the wiped-page status current, such as by updating the Page Active register 2890 with a zero or by making an entry in an additional bit field in the Page Active register 2890 constituting a Page Wipe control register.

Then the SDP Swap Manager responds to the updated Page Wipe information. If the WR[N] bit is Dirty for the particular page N that is wiped, then a Swap Out operation is performed; otherwise, if Clean, no Swap Out operation is performed. Then a Swap In operation gets the new page and loads it into the particular physical page slot N, overwriting the page N that was wiped. Then the Page Activity register 2890 is updated to indicate that the new page is active in page slot N.

Initialization is performed at the beginning of the process by clearing all of the data structures 2840, 2850, 2860, 2870, ADV 2880, and 2890. Also, when the Secure RAM is being loaded in early phases of execution, as-yet unused page spaces in Secure RAM are available for incoming new pages being loaded according to FIG. 10. Prioritization from Page Wiping Advice is suitably ignored by Page Selection Logic 2885 until all the physical page slots of Secure RAM for data and code pages governed by SDP are full of physical pages.

When Secure RAM is full and a new page needs to be loaded, the secure demand paging mechanism chooses an existing page in Secure RAM for wiping that is the most suitable for its internal processing without any added distinction required. When the Page Wiping Advice ADV register 2880 has two or more bits HIGH, the SDP mechanism of Page Selection Logic 2885 and/or SDP software in different embodiments takes the first page it sees set or takes a page randomly from among the pages with an ADV bit set to one. If all ADV bits are set to zero, the SDP mechanism of Page Selection Logic 2885 and/or SDP software in different embodiments takes the first page it sees or takes a page randomly from among all the pages for which wiping is permitted. The SDP mechanism also benefits from information indicating that several pages can be replaced and can thus replace more than one page even if only one was requested.

Alternative embodiments use the following processes for multiple ones in ADV register 2880: 1) take the first one, 2) take randomly, 3) resolve the tie by taking the page with the lowest Page Access Counter 2845 value, 4) replace more than one page, 5) reserve one slot for Data pages and a second slot for Code pages, and 6) reserve respective slots for respective applications in a multi-threaded SDP (several of these alternatives are sketched in software following this passage). If the usage level is HIGH for all pages in Secure RAM, it is also possible for all-zeroes to appear in ADV register 2880. A similar set of the just-listed process alternatives is used in various embodiments when all-zeroes appear in ADV register 2880.

To replace more than one page even if only one was requested, a program flow anticipation process suitably determines what page(s) to swap in, since the request only identifies one such page. In this program flow anticipation process, when the SDP Page Selection Logic 2885 reads from the SSM ADV register that three pages can be wiped, the SDP Page Selection Logic 2885 and/or SDP software replaces the first page with the page requested and the two remaining pages with the two pages adjacent to the page requested.
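The tie-resolution alternatives enumerated above may likewise be illustrated by a hedged software sketch. The listing below is hypothetical (the names and the policy argument are assumptions for exposition, not the SDP implementation itself) and shows alternatives 1) through 3):

import random

def select_page_to_wipe(adv_bits, access_counters, policy="first"):
    # Hypothetical tie-break among pages flagged by ADV (bit value 1).
    candidates = [n for n, bit in enumerate(adv_bits) if bit]
    if not candidates:
        # All-zero ADV (e.g., all pages at HIGH usage): choose from any page
        # for which wiping is permitted, per the alternatives listed above.
        candidates = list(range(len(adv_bits)))
    if policy == "first":       # alternative 1): take the first flagged page
        return candidates[0]
    if policy == "random":      # alternative 2): take a flagged page randomly
        return random.choice(candidates)
    if policy == "coldest":     # alternative 3): lowest Page Access Counter value
        return min(candidates, key=lambda n: access_counters[n])
    raise ValueError("unknown policy")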
Other rules for such program flow anticipation are suitably instantiated in the practice of various embodiments by modeling the software behavior so as to leverage information identifying which data or code page is linked with another data or code page. In embodiments that swap out more than one page at a time when appropriate, CPU bandwidth is advantageously saved by avoidance of a future page fault when a future page fault is otherwise likely, such as in the case of a secure process context switch.

Page Access Counters 2845 is an example of a page access table arrangement that has page-specific entries. Each page-specific entry is set to an initial value upon entry of a new page corresponding to that entry in the internal memory. That initial value can be 0 or top-of-range (e.g., 127) or some other value chosen for the purpose. The page-specific entry is reset to a value substantially approximating the initial value in response to a memory access to that page. In some embodiments, the entry is reset to the initial value itself, but some variation is permissible, suitable, and useful for performance. Further, the page-specific entry is changed in value by some amount in response to a memory access to a page other than the page corresponding to that entry. The change may be by a positive or negative amount, or by incrementing or decrementing, or by some random number within a range, with other variations for performance in various applications.

The Concatenation Case Table 2850 also has various forms in different embodiments. In one type of embodiment, the concatenation case table is suitably a compact array of storage elements having a layout much as shown in FIG. 10. In another embodiment, storage elements for Usage Level STAT, page Type TYPE, and page-modified WR are scattered physically. In another embodiment, the page-specific Usage Level range for register STAT is simply derived by a coupling or a few logic gates coupled to the Page Access Counter to determine from high-order bits in what range or tier a given value of page access statistic lies. Thus, a separate storage element for the usage level may be absent, even though a given usage level is formed from the statistic.

The conversion circuit 2855 responds to the concatenation case table to generate a page priority code for each page. In some embodiments, conversion circuit 2855 generates a page priority code field having a singleton bit value accompanied by complement bit values, the position of the singleton bit value across the page priority code field representing page priority. Other priority codes are used in other embodiments.

Some embodiments include a priority sorting table 2860 as a physical structure accessible by the priority sorting circuit and holding the page priority code for each page generated by the conversion circuit 2855. The priority sorting circuit searches for at least one page priority code in the priority sorting table whose singleton bit value has a position across the page priority field representing a highest page priority, thereby identifying a page having that page priority code. When more than one page has the highest page priority, one of them is selected by some predetermined criterion such as first, last, highest or lowest, or randomly selected as the page to wipe from among the pages having the highest page priority.

A right-most ones detector is but one example of a priority sorting circuit for identifying at least one page having a highest page priority.
Depending on the arrangement, an extreme-ones detector, such as either a right-most ones detector or a left-most ones detector, is suitable, and yet other alternative approaches are used depending on the manner of representing the priority from the conversion circuit 2855.

In FIG. 11, another embodiment has a set of Page Access Counters 2845 and a Concatenation Case Table 2850 provided with STAT, TYPE, and WR bits for each physical page in Secure RAM. A Page Identification Counter 2910 cycles through physical page identifying bits (e.g., 4 bits from 0000 through 1111 binary). Page Identification Counter 2910 provides these bits as selector controls to a 16:1 Mux 2920. Mux 2920 supplies Concatenation Case codes row-by-row, such as 10 0 1 from the row for page 0 in the top row of Concatenation Case Table 2850.

A Priority Conversion circuit 2955 responds to each Concatenation Case code supplied by Mux 2920 and converts it to a Priority Code, such as the nine-bit codes of TABLE 6, or four-bit binary codes representing numbers from zero (0) to nine (9) decimal, or otherwise as described herein.

Further, a Priority Maximum Detector 2970 is fed directly by the Priority Conversion circuit 2955. The Priority Maximum Detector 2970 finds the maximum Priority Code among the Priority Codes (e.g., 16 of them) fed to Detector 2970 on the fly. Detector 2970 is any appropriate maximum detector circuit. One example of a Detector 2970 is an arithmetic subtractor circuit fed with successive Priority Codes, the subtractor output conditionally updating a temporary holding register when a new Priority Code arrives that is greater than any previous Priority Code fed to it in the cycle. Concurrently and conditionally, an associated temporary holding register is updated with the 4-bit page identification (Page ID) bits supplied by Page Identification Counter 2910 when each greatest new Priority Code arrives. The temporary holding register for Priority Code is fed back to the subtractor for comparison with succeeding incoming Priority Codes. Comparing FIG. 10 with FIG. 11, note that the right-most one detection in FIG. 10 acts as a type of maximum detector of different structure.

When Page Identification Counter 2910 rolls over from 1111 to 0000 to begin a new cycle, the associated temporary holding register for Page ID is clocked to an output PAGE_ID_TO_WIPE, which remains valid during the entire new cycle until updated. Also, occurrence of roll-over to 0000 by the Page Identification Counter 2910 is fed to circuitry in Priority Maximum Detector 2970 to reset the temporary register for Priority Code so that the maximum is re-calculated on the new cycle. Thus, Priority Maximum Detector 2970 is cycled and reset by Page Identification Counter 2910, and Detector 2970 has a storage element to store the latest page identification having the highest page priority as conversion by Priority Conversion circuit 2955 proceeds through the various pages. The output PAGE_ID_TO_WIPE is suitably fed directly to the ACT register or to analogous control registers. Note that Priority Maximum Detector 2970 automatically operates, in the particular subtractor example, to pick the first or last of multiple pages having the maximum Priority Code for wiping, if that circumstance occurs; a behavioral sketch of one such cycle is given next below.
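Purely as an illustrative behavioral model (hypothetical names; the embodiment itself is the subtractor and holding-register hardware of FIG. 11), one cycle of the Priority Maximum Detector may be sketched as:

def priority_maximum_detect(priority_codes, pick_last_tie=False):
    # One full cycle of the Page Identification Counter (e.g., pages 0..15).
    # Returns PAGE_ID_TO_WIPE, the page holding the maximum Priority Code.
    best_code, best_page = -1, 0
    for page_id, code in enumerate(priority_codes):
        # Subtractor comparison: update on greater-than (picks the first tied
        # page) or greater-than-or-equal (picks the last tied page).
        if (code > best_code) or (pick_last_tie and code == best_code):
            best_code, best_page = code, page_id
    return best_page

# Example: priority_maximum_detect([3, 7, 7, 1]) returns 1 (first tied page);
# with pick_last_tie=True it returns 2 (last tied page).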
When more than one page has the same highest priority value among all the pages, the Priority Maximum Detector 2970 simply stores the identification of the first or last of the pages tied at that highest priority value, depending on whether the conditional output of the subtractor is arranged for greater-than-zero (picks the first tied page) or greater-than-or-equal-to-zero (picks the last tied page). Detector 2970 then returns that page identification as the page wiping advice PAGE_ID_TO_WIPE.

In FIG. 11, supporting Page Selection Logic analogous to 2885 of FIG. 10 is provided as appropriate. Various types of page selection logic 2885 are suitably fed by the Priority Sorting Page 2860 for selecting a page in the memory to wipe. Page selection logic suitably has an input coupled to the activity register to override the page selection logic when there is an empty page slot in the memory. Another embodiment couples the page activity register 2890 to the conversion circuit 2855, and the conversion circuit 2855 assigns higher page priority for wiping to an empty page than to an occupied page.

In FIGS. 12A and 12B, an operational flow embodiment 3000 for aspects of FIGS. 9, 10, 11 and 13 commences operations with a BEGIN 3005 in FIG. 12A and proceeds to a decision step 3010 to determine if a New Page has been Swapped In to page slot [N]. By "New Page" is meant a page that has just previously been absent from Secure RAM 1034, whether or not that page was present in Secure RAM at some time in the past or has never been in Secure RAM at any time in the past. If so, then a step 3015 updates TYPE[N] for Code or Data page Type, resets counter N to its initialization value (e.g., 127) in Page Access Counters 2845, and goes to a step 3020.

In decision step 3010, if no Swap In of a New Page is found, then operations go to decision step 3020. Decision step 3020 determines whether a Read or Write access has just been made to a page in Secure RAM 1034. If so, then operations proceed to a decision step 3025 to determine, for each given page N, whether the access was made to that page N. If the access was made to the given page N, then a step 3030 resets counter CTR[N] to (or increments approximately to) the initialization value (e.g., 127), and a succeeding step 3035 updates the register WR[N] to a Dirty state for page N in case of a Write access.

In step 3025, if the access was made to a page other than a given page N, then a step 3040 adjusts the counter CTR[N], such as by decrementing CTR[N]. Thus, page N has a corresponding counter N adjustment indicative of access, which is a sign of usage, and all the other pages have their respective counter values adjusted to indicate the non-access at this moment. In this way, counter statistics are progressively developed in Page Access Counters 2845. After either of steps 3035 and 3040, a step 3050 converts and stores the statistics in all the counters CTR[0], CTR[1], ... CTR[N] as corresponding updated Usage Levels in register STAT in register 2740 of FIG. 9.

After step 3050, or upon a No from page-access decision step 3020, operations proceed to a decision step 3060 to determine whether a Page Fault has occurred by an attempted access to a page that is absent from Secure RAM 1034. If not, operations loop back and ordinarily reach decision step 3010 to proceed with a new updating cycle. (The per-access statistics maintenance of steps 3030 through 3050 is sketched in software next below.)
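As a hedged sketch only, the counter maintenance of steps 3030 through 3050 may be modeled as follows. The names, the initialization value of 127, and the tier thresholds shown are illustrative assumptions, since the actual TABLE 3 thresholds are defined elsewhere herein:

INIT = 127  # illustrative counter initialization value (per steps 3015, 3030)

def on_page_access(ctr, wr, page_n, is_write):
    # Step 3030: reset the accessed page's counter to the initialization value.
    ctr[page_n] = INIT
    # Step 3035: a write access marks the page Dirty.
    if is_write:
        wr[page_n] = 1
    # Step 3040: every other page's counter is adjusted (here, decremented,
    # saturating at zero) to record the non-access at this moment.
    for n in range(len(ctr)):
        if n != page_n and ctr[n] > 0:
            ctr[n] -= 1

def usage_level(count):
    # Step 3050: map a counter value into one of four Usage Level tiers.
    # The thresholds below are assumptions for illustration, not TABLE 3 itself.
    if count >= 96: return "HIGH"
    if count >= 64: return "MEDIUM"
    if count >= 32: return "LOW"
    return "VERY LOW"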
Continuing the flow description in FIG. 12B, if a Page Fault has occurred (Yes) in step 3060 of FIG. 12A, then a decision step 3065 determines whether Secure RAM 1034 has any empty page slot into which a new page could be put. If no empty slot is detected in decision step 3065, then operations in a step 3070 prioritize for wiping the pages currently resident in Secure RAM 1034, find the page(s) having greatest priority for wiping, and load the Page Wiping Advice register ADV of FIG. 9. Then a step 3075 selects a particular page N to wipe based on the wiping advice bit in the register ADV, or by selection from two or more wiping advice bits that might have been set in register ADV, or by selection from all the pages if no wiping advice bit was set in register ADV.

Next, a decision step 3080 determines if the page N selected in step 3075 is a modified page (WR[N]=1). If Yes in step 3080, then a step 3085 performs a cryptographic operation (such as encryption, hashing, or both) and Swaps Out the page N that is wiped. If page N is found to be a not-modified page in step 3080, or once step 3085 has been reached and completed, then operations Swap In a New Page in a step 3090.

After the Swap In of step 3090 of FIG. 12B, or if there was no Page Fault detected in step 3060 of FIG. 12A, then operations go to a decision step 3095 in FIG. 12A to determine whether there is a circuit reset or other termination of operations. If not, then operations go back to step 3010 to do a new cycle of operation. If Yes in step 3095, then operations go to RETURN 3098 and are thus complete.

STATIC AND DYNAMIC PHYSICAL PAGE ALLOCATION

In various embodiments, and in the same embodiment for different software applications, an optimum usage of pages can be established or selectively established by having an Allocation Ratio of Code pages to Data pages be within a predetermined range (statically established). In still other embodiments the Allocation Ratio of Code pages to Data pages is dynamically learned. The Allocation Ratio of Code pages to Data pages for allocating Secure RAM physical page space is suitably learned or determined given a limited number N of total secure memory pages. Determining the Allocation Ratio statically is either obviated or augmented in some embodiments that dynamically learn the ratio, in effect. Hybrid embodiments that determine the Allocation Ratio statically for some applications or purposes and dynamically for other applications or purposes are also contemplated.

The SDP mechanism generates the page selection from the behavior of the application software through an internally activated learning process of SDP that counts actual activity of Data pages and Code pages, to thereby generate statistics to stabilize on the appropriate allocation ratio. In terms of memory organization that re-groups or allocates Data or Code pages, the ratio is learned or implicitly results from the activity by SDP register ADV 2880. For example, suppose the allocation ratio of Code page slots to Data page slots in Secure RAM is initialized at unity (1 Code page per 1 Data page). Further suppose that the particular application is swapping in Code pages at the rate of 2:1 (two Code pages swapped in for every one Data page). Then the SDP mechanism in one embodiment increments the number of page slots in Secure RAM allocated to Code pages by one, and decrements the number of page slots allocated to Data pages by one.
Limits are imposed on these adjustments so that there is always a minimum of at least one or two Code pages and at least one or two Data pages, to bound the range of the allocation and always include both Code and Data pages in the SDP process.

Suppose the particular application continues to swap Code pages relative to Data pages at a Swapping Ratio higher than the Allocation Ratio of Code Slots divided by Data Slots established by SDP for Secure RAM. Then the Allocation Ratio would be continually altered upward by increasing the number of page slots allocated for Code pages and decrementing the number of page slots allocated for Data pages. At some point, the process would settle at a point wherein the Swapping Ratio equals the Allocation Ratio. In this embodiment, after S swaps updating the Code and Data swap statistics, the pseudocode might provide, for instance:

IF SWAP_RATIO - ALLOCATION_RATIO > EPSILON THEN
  CODE_SLOTS <= CODE_SLOTS + 1;
  DATA_SLOTS <= DATA_SLOTS - 1;
ELSE IF ALLOCATION_RATIO - SWAP_RATIO > EPSILON THEN
  CODE_SLOTS <= CODE_SLOTS - 1;
  DATA_SLOTS <= DATA_SLOTS + 1;
ENDIF;

Feedback in the above process drives the Allocation Ratio to be approximately equal to the Swap Ratio. The Allocation Ratio is changed by changing the number of CODE_SLOTS and DATA_SLOTS, which always sum to the available number of physical page slots in Secure RAM. Then the Swap Ratio changes in a complex way, partly in response to the change in Allocation Ratio and partly in response to the structure of the area of current execution in the application program. Even though the behavior and dependencies are complex, the dynamic learning feedback process accommodates this complexity. The value EPSILON is set at a predetermined amount, such as 0.2, to reduce hunting by the learning feedback loop near a settling point where Swap Ratio equals Allocation Ratio. In actual execution of an application program, continual adaptation by the dynamic learning feedback process is provided whether a settling point exists or not. Thus, the SDP register ADV 2880, and the process that drives it, not only chooses page locations to wipe but also dynamically evolves a ratio of Code pages to Data pages in Secure RAM. Limits are placed on the increments and decrements so that at least one slot for a Code page and at least one slot for a Data page are allocated in Secure RAM. In this way, a Swap is always possible for either type of page.

Pre-existing application software is suitably used as is, or prepared for use, with the SDP-governed Secure RAM space. For instance, software-generating methods used to prepare the application program suitably size-control the program loops to reduce the number of repetitions of loops that cycle through more virtual pages than are likely to be allocated in Secure RAM space for the loops. Big multi-page loops with embedded subroutine calls are scrutinized for impact on Swap overhead and thrashing given a particular Allocation Ratio and allocable number of pages in Secure RAM. Various SDP embodiments can perform to the best extent possible given the actual application programs that they service, and some application programs as noted permit SDP efficiency to display itself to an even fuller extent.

Minimizing hunting in the dynamic learning process is here explained using some scenarios of operation.
In a first scenario, suppose execution of an application program is page-linear in the sense that execution occurs in a virtual page, then proceeds to another virtual page and executes some more there, and then proceeds similarly to a third and subsequent pages until execution completes. With a page-linear application, a single Code page could suffice, since each new Code page needs to be swapped in once to Secure RAM because the application is a secure application and is to be executed from Secure RAM. Since execution does not return to a previously executed page, there is no need to Swap In any Code page twice. There is no thrashing, and there is no need for even a second Code page slot in Secure RAM in this best-case example.

In a second scenario, suppose execution of an application program is page-cyclic in the sense that somewhere in the application, execution occurs in one virtual page, then proceeds directly or by intervening page(s) to another virtual page and executes some more there, and then loops back to the first-mentioned virtual page. In this case, Swapping In the first-mentioned page could have been avoided if there were at least one additional Code slot as a physical page slot in Secure RAM. Where loops cycle many times between pages, repeated Swapping is avoided by providing enough physical Code page slots in Secure RAM so that the repeated Swapping is unnecessary, since the needed pages are still resident in the Secure RAM.

The subject of hunting enters the picture as follows. Suppose allocating a given number M of Code page slots in Secure RAM produces very little thrashing. Then suppose decrementing the allocation from M to just one less number of pages M-1 produces a lot of thrashing because the application has a loop that cycles through M number of pages. There may be a stair-step non-linearity, so to speak, in the efficiency. Accordingly, some dynamic learning embodiments herein keep the two most recent previous statistics on Swap Ratio prior to a given decrement operation. If those statistics indicate a large gap between the last two previous Swap Ratio values, the decrement operation is in some embodiments omitted, because the next re-allocation might start a cycle of hunting and increase the amount of Swapping and thrashing. Because a settling point might not in fact exist due to the dynamics of an application program, other dynamic learning embodiments that might not have this extra precaution are regarded as quite useful too.

A second dynamic learning embodiment recognizes that Data pages include time-consuming Dirty page Swap Out as well as Data page Swap In, whereas Code pages in this example are always clean. Accordingly, the Swap Ratio in this embodiment should settle at a point that takes a Dirty-Swap-Out factor into account, such as by allocating somewhat more space to Data pages than otherwise would happen by equalizing the Allocation Ratio to the Swap Ratio. This second embodiment keeps statistics on the number of Code pages, the number of Data dirty pages, and the number of Data not-dirty (clean) pages. The time required for SDP to service these pages is either known by pre-testing or measured in clock cycles on the fly.
For this second embodiment, define symbols as follows:

C    Number of Code page wipes plus new Code page Swap Ins per second
Tc   Code page wipe plus new Code page Swap In time (milliseconds)
Dn   Number of Data not-dirty page wipes with new Data page Swap Ins per second
TDn  Data not-dirty page wipe plus Swap In time (milliseconds)
Dd   Number of Data dirty page Swap Outs with new Data page Swap In per second
TDd  Data dirty page Swap Out plus Swap In time (milliseconds)

Then the time-based ratio of Code page time to Data page time is written down and introduced to direct the process ahead of the testing step on the difference between Swap Ratio and Allocation Ratio. A pseudocode example for this second embodiment is provided next below:

SWAP_RATIO <= C*Tc / (Dn*TDn + Dd*TDd);
DELTA <= 1;
ADJUST <= 1;
ALLOCATION_RATIO <= CODE_SLOTS / DATA_SLOTS;
IF SWAP_RATIO * DATA_SLOTS - CODE_SLOTS > DELTA THEN
  CODE_SLOTS <= CODE_SLOTS + ADJUST;
  DATA_SLOTS <= DATA_SLOTS - ADJUST;
ELSE IF CODE_SLOTS - SWAP_RATIO * DATA_SLOTS > EPSILON THEN
  CODE_SLOTS <= CODE_SLOTS - ADJUST;
  DATA_SLOTS <= DATA_SLOTS + ADJUST;
ENDIF;

In words, if the time it takes for SDP to service Data pages on average is much higher than the time SDP takes to service Code pages, then the redefined Swap Ratio decreases compared to a ratio C/D of Code to Data page rates in the first embodiment, in which the relative computational complexity of SDP servicing different types of pages is not taken into account. DELTA is a threshold of adjustment (e.g., unity or some other number of page slots). EPSILON in the first embodiment might change for a criterion based on a difference between Allocation Ratio and Swap Ratio values. The second embodiment, in effect, multiplies that difference by the number of Data Slots and compares it to EPSILON, which is less likely to change over the range of allocation. In other words, a number EPSILON having the value of a page slot threshold (e.g., one (1)) is compared with the difference between the number of Code Slots allocated and the product of the Swap Ratio times the number of Data Slots allocated.

In both of the above dynamic learning pseudocode examples, the Allocation Ratio is effectively made to rise by the IF-THEN first part of the conditional pseudocode, and the Allocation Ratio is made to fall by the ELSE IF-THEN second part of the conditional pseudocode. The amount of adjustment ADJUST is given as plus-one (+1) page slot or minus-one (-1) page slot, and empirical testing can show the usefulness of other alternative increment values as well.

Initialization of the number of CODE_SLOTS and the number of DATA_SLOTS is suitably predetermined and loaded in Flash memory as CODE_SLOTS_START and DATA_SLOTS_START values. The initialization is then adjusted by SDP software based on actual operation and stored on an application-specific basis for use as the application is subsequently re-started in many instances of actual use in a given handset or other system.

A multi-threaded embodiment reserves respective slots for respective applications in a multi-threaded SDP. When one of the applications in the multi-threaded embodiment is added, the slot assignments are changed as follows. From a software point of view, if the SDP multi-threads several applications, the pages of an application that is not running will inevitably be evicted after a while. Thus the slot assignment suitably consists of deactivating the pages of the application not running in order to keep them out of the sorting machine or process. An illustrative software model of both allocation feedback embodiments is sketched next below.
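The following listing is a minimal software sketch of both allocation feedback embodiments. All names are hypothetical, the default thresholds are assumptions, and the listing is an illustrative model rather than the SDP implementation:

def time_weighted_swap_ratio(c, tc, dn, tdn, dd, tdd):
    # Second embodiment: SWAP_RATIO = C*Tc / (Dn*TDn + Dd*TDd), so slow Dirty
    # Swap Outs weigh the ratio toward allocating more Data slots.
    # (Assumes at least one Data page swap was observed in the interval.)
    return (c * tc) / (dn * tdn + dd * tdd)

def adjust_allocation(code_slots, data_slots, swap_ratio,
                      delta=1, epsilon=1, adjust=1, min_slots=1):
    # Feedback of the two pseudocode fragments above: move one slot at a time
    # so the Allocation Ratio tracks the (possibly time-weighted) Swap Ratio,
    # while always keeping at least min_slots of each page type.
    if swap_ratio * data_slots - code_slots > delta and data_slots - adjust >= min_slots:
        return code_slots + adjust, data_slots - adjust
    if code_slots - swap_ratio * data_slots > epsilon and code_slots - adjust >= min_slots:
        return code_slots - adjust, data_slots + adjust
    return code_slots, data_slots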
Deactivation in this manner also keeps the deactivated pages' statistics frozen.

Also, consider the initial Allocation Ratio and Swap Ratio for a dynamic-learning multi-threaded embodiment wherein a context switch in the middle of execution of a first application (Virtual Machine Context VMC1) may switch execution to the beginning or middle of a second application (Virtual Machine Context VMC2). There, the current Swap Ratio and Allocation Ratio for the first application at the time of the context switch are stored for use when the first application is Swapped back in to resume execution later. Upon the context switch to the second application, analogous earlier Swap Ratio and Allocation Ratio information is retrieved so that the second application efficiently executes by benefiting from its own previous experience.

In FIG. 13, another dynamic learning embodiment 3100 responds to a control signal CODE to selectively perform prioritization and wiping advice for Code pages or selectively perform prioritization and wiping advice for Data pages. Either of the embodiments of FIG. 10 and FIG. 11 is suitably arranged for dynamic learning. FIG. 13 shows some rearrangements based on FIG. 10; FIG. 11 is suitably rearranged analogously.

In FIG. 13, Page Access Counters 3145, Concatenation Case Table 3150, Conversion Table Lookup 3155 and Priority Sorting Page 3160 are illustratively analogous to the correspondingly-named structures 2845, 2850, 2855 and 2860 of FIG. 10.

Priority Result 3170 is loaded with OR-bits in the following manner. The bits entered in a given column of Priority Sorting Page 3160 are fed to an OR-gate or OR-process. These bits in a given column of Priority Sorting Page 3160 correspond to the pages from 0 to total number N. Priority Sorting Page 3160 is an N-by-9 array or data structure in this example. The OR operation is performed on each of the nine columns of Priority Sorting Page 3160 to supply nine (9) bits to Priority Result 3170 of FIG. 13 as represented by design pseudocode next below.

In this embodiment, pseudocode defines structure and process that selectively respond to the CODE signal and the TYPE information. For instance, suppose that control signal CODE is active (e.g., high or one), meaning that only Code pages in the page slots allocated to Code pages are allowed to be prioritized and used to generate wiping advice pertaining to a Code page and not a Data page. In that case, CODE being high agrees with the TYPE[N] of each page N that is actually a Code page by TYPE[N] being high (one). A set of XNOR (Exclusive-NOR) gates equal in number to the number of pages (e.g., 16) is collectively designated XNOR 3183. (An XNOR gate supplies an output high when its two inputs are both high or both low; the output is otherwise low.) When CODE and TYPE[N] are both high, each particular XNOR gate in XNOR 3183 in such case returns an active output (high, one). The XNOR high output qualifies an AND gate that passes through the state of PRIORITY_SORTING_PAGE_N[0] to PRIORITY_RESULT[0]. The just-described process is similarly performed for each column of Priority Sorting Page 3160 to load each corresponding bit of Priority Result 3170.

PRIORITY_RESULT[0] = (PRIORITY_SORTING_PAGE_0[0] AND (TYPE[0] XNOR CODE)) OR ... OR (PRIORITY_SORTING_PAGE_N[0] AND (TYPE[N] XNOR CODE))
...
PRIORITY_RESULT[8] = (PRIORITY_SORTING_PAGE_0[8] AND (TYPE[0] XNOR CODE)) OR ... OR (PRIORITY_SORTING_PAGE_N[8] AND (TYPE[N] XNOR CODE))

Next, for each page N, the process successively looks right-to-left by an IF ... ELSE IF ... ELSE IF successively-conditional sifting structure and procedure (in CAPS hereinbelow) for the highest priority value (right-most one) for which PRIORITY_RESULT[] 3170 is a one.
Because the whole process of loading Priority Result 3170 is conditioned by TYPE[N] XNOR CODE, the subsequent right-most ones detection in Priority Result 3170 makes this determination only for the Code pages in Secure RAM.

This successively-conditional procedure moves column-wise in Priority Result 3170 from right to left. As soon as the first "one" (1) is found, which is the right-most one, the process pseudocode hereinbelow loads register ADV and falls through to completion. This right-most one in Priority Result 3170 identifies corresponding column 3175 of Priority Sorting Page 3160. Column 3175 is significant for determining which page to wipe. The process loads the Code-page-related entries in column 3175 of Priority Sorting Page 3160 via Mux 3178 into the Page Wiping Advice register ADV 3180 to establish the wiping advice bit entries in register ADV. The Code-page-related entries fed to register ADV are qualified by action of XNOR 3183.

The Page Wiping Advice register ADV is thus loaded by design pseudocode hereinbelow by concatenation of PRIORITY_SORTING_PAGE 3160 bits (pertaining to all the physical pages) in column 3175 based on that highest priority right-most one position detected in Priority Result 3170. If the successive procedure fails to find a PRIORITY_RESULT bit active (1) for any of the priorities from 0 to 8, then operations load register ADV with all zeroes, and also zero a default bit named CODE_OTHERS.

IF PRIORITY_RESULT[0] = 1 THEN
  ADV = (PRIORITY_SORTING_PAGE_0[0] AND (TYPE[0] XNOR CODE)) & ... & (PRIORITY_SORTING_PAGE_N[0] AND (TYPE[N] XNOR CODE))
ELSE IF PRIORITY_RESULT[1] = 1 THEN
  ADV = (PRIORITY_SORTING_PAGE_0[1] AND (TYPE[0] XNOR CODE)) & ... & (PRIORITY_SORTING_PAGE_N[1] AND (TYPE[N] XNOR CODE))
ELSE IF PRIORITY_RESULT[2] = 1 THEN
  ADV = (PRIORITY_SORTING_PAGE_0[2] AND (TYPE[0] XNOR CODE)) & ... & (PRIORITY_SORTING_PAGE_N[2] AND (TYPE[N] XNOR CODE))
...
ELSE IF PRIORITY_RESULT[8] = 1 THEN
  ADV = (PRIORITY_SORTING_PAGE_0[8] AND (TYPE[0] XNOR CODE)) & ... & (PRIORITY_SORTING_PAGE_N[8] AND (TYPE[N] XNOR CODE))
ELSE
  ADV <= 0;
  OTHERS <= 0;
ENDIF;

In cases where the Data pages are prioritized, the control signal CODE goes low. As a result, the expression TYPE[N] XNOR CODE is active high for pages having TYPE[N] = 0, meaning Data pages. Then the Priority Result 3170 is generated only from the entries in Priority Sorting Page 3160 pertaining to Data pages. Further, the right-most ones detection on Priority Result 3170 thereby pertains only to the Data pages, and finds the highest priority column for them in Priority Sorting Page 3160. Then the pseudocode next above loads the Page Wiping Advice register ADV only with entries pertaining to Data pages from that highest priority column. Page Selection Logic 3185 is similarly qualified by the pages allocated for Code to produce the signal to wipe a particular Code page, or alternatively qualified by the pages allocated for Data to produce the signal to wipe a particular Data page. (A software sketch of this type-qualified masking is given next below.)

Suppose the dynamic learning process determines that a reallocation is needed to allocate a slot for another Code page. The process is driven by the decrement of DATA_SLOTS to wipe a Data page to make way for another Code page.
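A hedged illustrative model (hypothetical names) of the XNOR qualification follows, restricting the FIG. 10-style sort to Code pages when the CODE control is one, or to Data pages when CODE is zero:

def type_qualified_advice(sorting_page, page_types, code_select):
    # Mask each page's 9-bit priority row by (TYPE[N] XNOR CODE): for one-bit
    # values, XNOR is simple equality, so only pages whose type matches the
    # CODE control signal participate in the sort.
    masked = [row if t == code_select else [0] * 9
              for row, t in zip(sorting_page, page_types)]
    # Column-OR and right-most-one detection then proceed exactly as in FIG. 10.
    for r in range(9):
        if any(row[r] for row in masked):
            return [row[r] for row in masked]   # ADV for the selected page type
    return [0] * len(sorting_page)              # no candidate of this type: ADV all zeroes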
To do so, the process makes the control signal CODE go low, so that Priority Sorting Page 3160 and Priority Result 3170 are processed to supply Data page wiping advice via Mux 3178 into register ADV 3180. The Data-page-related entries fed to register ADV are qualified by action of XNOR 3183. In FIG. 13 the legend CODE/DATA is used to indicate the selective operation with respect to Code pages CODE PP. separate from Data pages DATA PP. Page Selection Logic 3185 responsively directs a wipe of a particular Data page, and signals SDP Swap Manager software to Swap Out that Data page if it is Dirty and, in any event, to Swap In a new Code page into the just-wiped page slot in Secure RAM. Operations resume executing the application and servicing it with SDP given the new allocation.

Conversely, suppose the dynamic learning process determines that a reallocation is needed to allocate a slot for another Data page. The process is driven by the decrement of CODE_SLOTS to wipe a Code page to make way for another Data page. Accordingly, the process makes the control signal CODE go high, so that Priority Sorting Page 3160 and Priority Result 3170 are processed to supply Code page wiping advice into register ADV 3180. Page Selection Logic 3185 responsively directs a wipe of a particular Code page, and signals SDP Swap Manager software to Swap In a new Data page into the just-wiped page slot in Secure RAM. Operations resume executing the application and servicing it with SDP given the new allocation.

Still other embodiments re-group pages. The SDP software mechanism in some embodiments allocates and organizes physical pages of Secure memory, such as Data pages into Secure RAM page slots 0 through 5 and Code pages into page slot 6 up to the highest-numbered page slot, for example. In some process embodiments that load multiple applications (multi-threaded SDP), some slots are suitably reserved for APP1 or APPn. The SDP mechanism suitably operates when possible to re-group pages that have a relationship or meaning in common.

An example of the usefulness of re-grouping application pages (app page 1, ..., app page N) can be seen by considering that in a larger system, the pages can have fragmentation problems like those a hard drive can experience. Re-grouping has particular value because it delivers automatic de-fragmentation. "Fragmentation" pertains to a condition of related pages becoming widely separated in a storage medium so that the access process is slowed down. The re-grouping mechanism herein is advantageously applied for paging network resources onto a hard drive, since the SSM machinery is enhanced to account not only for the number of accesses but also for the time to access the resources (hard drive, USB, network) to build a trusted sorting process.

SDP monitors internal RAM space defined by a range of addresses. One type of SDP embodiment defines a range of addresses for all spaces used by SDP, such as Flash memory, DRAM, hard drive, and other components. These components have fragmentation problems of their own, or have pages in all of them (e.g., ten pages in internal RAM, forty pages on Flash memory, forty pages on the hard drive, and all these pages used for the same application). In an embodiment, SDP is used to execute on a distant memory location that has small bandwidth, such as a network location. Cascading of several SDP processes is added to such a type of SDP embodiment and to other SDP embodiments. Fragmentation matters, for example, when switching from one resource to another introduces access timing latency.
Thus, re-grouping all pages of an application into one resource or contiguous space is performed to reduce or eliminate such access timing latency. A hit counter performs a count for SDP purposes herein by adding to the number of accesses the time to access the resources (hard drive, USB, network). The time to access resources is combined with the hit count, by ResourceHitCount = ResourceHitCount + 1. No running clocks are necessary, so frequency of accesses is used. The arithmetic suitably either increments a resource count or resets/clears it when the statistic is invalid, and then starts hit counts over again. Because SDP supports future code and extensions to the environment, the code behavior is unknown; therefore, this hardware provides more visibility into what the code does (with respect to the frequency of what it does).

Some embodiments regard a Code page and an unmodified Data page as similar enough to be given equivalent prioritization and thereby further reduce the relatively modest chip real estate of the prioritization circuitry. Since there are fewer priority levels, the chances of tied (equal) priorities are higher, and more random selection, or otherwise-subsequent page selection, to break a tie will be involved for a given number of Secure RAM physical page slots governed by the SDP. Modified and Unmodified are regarded as the relevant page Types, and the TYPE[N] and WR[N] registers are either merged or fed to logic that produces a third, merged page type variable MODIF[N]. An example prioritization schedule next below still uses a 4-tier Usage Level and reduces the number of priorities by three compared to the TABLE 6 encodings.

MODIF=0 page tagged as VERY LOW usage
MODIF=0 page tagged as LOW usage
MODIF=1 page tagged as VERY LOW usage
MODIF=1 page tagged as LOW usage
MODIF=0 page tagged as MEDIUM usage
MODIF=1 page tagged as MEDIUM usage

Another embodiment has a variable n representing the access timing of the memory. Depending on the value of n, the Statistics counter is decremented by one only each time n accesses occur. This variable n is configured in an SDP register (e.g., 2 bits per page; 00: very fast access timing; 01: fast; 10: medium; 11: slow). The Statistics counter is operated as follows. 00: one access, decrement by one. 01: two accesses, decrement by one. 10: four accesses, decrement by one. 11: eight accesses, decrement by one. (This timing-scaled decrement is sketched in software following this passage.)

Still another embodiment has two or more Priority Result registers 2870A and 2870B and Page Wiping Advice registers 2880A and 2880B muxed for fast interleaved prioritization operations for Code and Data pages, respectively.

When one of the applications in one type of SDP multi-threading embodiment is terminated, the slot assignments are suitably changed by deactivating the pages of the currently running application and re-activating the application that was frozen.
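As a minimal, hedged sketch (hypothetical names; the pending-count mechanism is an assumption about one way to realize the schedule above), the timing-scaled decrement may be modeled as:

def scaled_decrement(ctr, timing_code, pending, page_n):
    # Called in place of a simple decrement whenever a page other than page_n
    # is accessed. timing_code[page_n] in {0, 1, 2, 3} encodes 00 (very fast)
    # through 11 (slow); the counter is decremented once per 1, 2, 4, or 8
    # such accesses respectively, per the schedule above.
    pending[page_n] += 1
    if pending[page_n] >= (1 << timing_code[page_n]):
        pending[page_n] = 0
        if ctr[page_n] > 0:
            ctr[page_n] -= 1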
Upon such de-activation, for example, the Page Access Counters 2845 are frozen, and the statistics remain fully operational as if the switch to another application never occurred.

Various embodiments are used with one or more microprocessors, each microprocessor having a pipeline selected from the group consisting of 1) reduced instruction set computing (RISC), 2) digital signal processing (DSP), 3) complex instruction set computing (CISC), 4) superscalar, 5) skewed pipelines, 6) in-order, 7) out-of-order, 8) very long instruction word (VLIW), 9) single instruction multiple data (SIMD), 10) multiple instruction multiple data (MIMD), and 11) multiple-core using any one or more of the foregoing.

DESIGN, VERIFICATION AND FABRICATION

Various embodiments of an integrated circuit improved as described herein are manufactured according to a suitable process of manufacturing 3200 as illustrated in the flow of FIG. 14. The process begins at step 3205, and a step 3210 prepares RTL (register transfer language) code and a netlist for a particular design of a page processing circuit including a memory for pages, a processor coupled to the memory, and a hardware page wiping advisor circuit coupled to the processor and operable to prioritize pages based both on page type and usage statistics. The Figures of drawing show some examples, and the detailed description describes those examples and various other alternatives.

In a step 3215, the design of the page processing circuit is verified in simulation electronically on the RTL and netlist. In this way, the contents and timing of the memory, of the processor, and of the hardware page wiping advisor circuit are verified. The operations are verified pertaining to producing the ACT, WR, TYPE and STAT entries, generating the priority codes for the priority sorting table, sorting the priority codes, generating the page wiping advice ADV, and resolving tied-priority pages. Then a verification evaluation step 3220 determines whether the verification results are currently satisfactory. If not, operations loop back to step 3210.

If verification evaluation 3220 is satisfactory, the verified design is fabricated in a wafer fab and packaged to produce a resulting integrated circuit at step 3225 according to the verified design. Then a step 3230 verifies the operations directly on first-silicon and production samples by using scan chain methodology on the page processing circuit. An evaluation decision step 3235 determines whether the chips are satisfactory, and if not satisfactory, the operations loop back as early in the process, such as to step 3210, as needed to get satisfactory integrated circuits. Given satisfactory integrated circuits in step 3235, a telecommunications unit based on teachings herein is manufactured.
This part of the process first prepares, in a step 3240, a particular design and printed wiring board (PWB) of the telecommunications unit having a telecommunications modem, a microprocessor coupled to the telecommunications modem, secure demand paging processing circuitry coupled to the microprocessor and including a secure internal memory for pages, a less-secure external memory larger than the secure internal memory, a hardware secure page wiping advisor for prioritizing pages based both on page type and usage statistics and having at least one wiping advisor parameter loaded in a step 3245, and a user interface coupled to the microprocessor.

The particular design of the page processing circuit is tested in a step 3250 by electronic simulation and is prototyped and tested in actual application. The wiping advisor parameters include the usage level tier definitions and any application-specific or all-application static allocation of Secure RAM to Code pages and Data pages. Also, for dynamic learning embodiments, initial application-specific allocations and parameters like DELTA or EPSILON are suitably adjusted.

The wiping advisor parameter(s) are assessed for increased page wiping efficiency in step 3255, as reflected in fast application execution, decreased Swap Rate in executing the same application code, lower power dissipation, and other pertinent metrics. If further increased efficiency is called for in step 3255, then adjustment of the parameter(s) is performed in a step 3260, and operations loop back to reload the parameter(s) at step 3245 and do further testing. When the testing is satisfactory at step 3255, operations proceed to step 3270.

In manufacturing step 3270, the adjusted wiping advisor parameter(s) are loaded into the Flash memory. The components are assembled on a printed wiring board or otherwise, as the form factor of the design is arranged, to produce resulting telecommunications units according to the tested and adjusted design, whereupon operations are completed at END 3275.

It is emphasized here that while some embodiments may have an entire feature totally absent or totally present, other embodiments, such as those performing the blocks and steps of the Figures of drawing, have more or less complex arrangements that execute some process portions, selectively bypass others, and have some operations running concurrently and others sequentially. Accordingly, words such as "enable," "disable," "operative," and "inoperative" are to be interpreted relative to the code and circuitry they describe. For instance, disabling (or making inoperative) a second function by bypassing a first function can establish the first function and modify the second function. Conversely, making a first function inoperative includes embodiments where a portion of the first function is bypassed or modified, as well as embodiments where the second function is removed entirely. Bypassing or modifying code increases function in some embodiments and decreases function in other embodiments.

A few preferred embodiments have been described in detail hereinabove. It is to be understood that the scope of the invention comprehends embodiments different from those described yet within the inventive scope. Microprocessor and microcomputer are synonymous herein.
Processing circuitry comprehends digital, analog and mixed-signal (digital/analog) integrated circuits, ASIC circuits, PALs, PLAs, decoders, memories, non-software-based processors, and other circuitry, and digital computers including microprocessors and microcomputers of any architecture, or combinations thereof. Internal and external couplings and connections can be ohmic, capacitive, direct or indirect via intervening circuits, or otherwise as desirable. Implementation is contemplated in discrete components or fully integrated circuits in any materials family and combinations thereof. Various embodiments of the invention employ hardware, software or firmware. Process diagrams herein are representative of flow diagrams for operations of any embodiments whether of hardware, software, or firmware, and processes of manufacture thereof. While this invention has been described with reference to illustrative embodiments, this description is not to be construed in a limiting sense. Various modifications and combinations of the illustrative embodiments, as well as other embodiments of the invention, may be made. The terms "including", "includes", "having", "has", "with", or variants thereof are used in the detailed description and/or the claims to denote non-exhaustive inclusion in a manner similar to the term "comprising". It is therefore contemplated that the appended claims and their equivalents cover any such embodiments and modifications as fall within the true scope of the invention.
Techniques that may be utilized to reduce snoop accesses are described. In one embodiment, a method includes receiving a page snoop command that identifies a page address corresponding to a memory access request by an input/output (I/O) device. One or more cache lines that match the page address may be evicted. Furthermore, memory access by a processor core may be monitored to determine whether the processor core memory access is within the page address.
1. An apparatus capable of reducing snoop accesses, comprising: a processor core to receive a page snoop command that identifies a page address corresponding to a memory access request issued by an input/output device, and to evict one or more cache lines that match the page address; processor monitor logic to monitor a memory access made by the processor core to determine whether the processor core memory access is within the page address and, if so, to allow monitoring of the processor core memory access to be stopped; and a chipset comprising input/output monitor logic to monitor another memory access made by the input/output device to determine whether the other memory access of the input/output device is within the page address and, if so, to allow the other memory access of the input/output device to be performed without generating a snoop access to the processor core.

2. The apparatus of claim 1, wherein the one or more cache lines are located in a cache coupled with the processor core.

3. The apparatus of claim 2, wherein the cache is on a same integrated circuit die as the processor core.

4. The apparatus of claim 1, wherein the page address identifies a region of a memory coupled to the processor core through a chipset.

5. The apparatus of claim 4, wherein the chipset includes a memory controller, and the input/output monitor logic is coupled between the input/output device and the memory controller.

6. The apparatus of claim 5, wherein the input/output monitor logic is on a same integrated circuit die as the memory controller.

7. The apparatus of claim 1, further comprising a plurality of processor cores.

8. The apparatus of claim 7, wherein the plurality of processor cores are located on a single integrated circuit die.

9. A method of reducing snoop accesses, comprising: receiving a page snoop command that identifies a page address corresponding to a memory access request issued by an input/output device; evicting one or more cache lines that match the page address; monitoring a memory access made by the processor core to determine whether the processor core memory access is within the page address and, if so, stopping the monitoring of the processor core memory access; and monitoring another memory access made by the input/output device to determine whether the other memory access of the input/output device is within the page address and, if so, performing the other memory access of the input/output device without generating a snoop access to the processor core.

10. The method of claim 9, wherein the processor core memory access performs a read or write operation on a memory coupled to the processor core.

11. The method of claim 9, further comprising: receiving the memory access request from the input/output device, wherein the memory access request identifies a region within a memory coupled to the processor core.

12. The method of claim 9, further comprising: after receiving the memory access request, enabling processor monitor logic to monitor memory accesses made by the processor core.

13. A system capable of reducing snoop accesses, comprising: a volatile memory to store data; a processor core to receive a page snoop command that identifies a page address corresponding to an access request to the volatile memory issued by an input/output device, and to evict one or more cache lines that match the page address; processor monitor logic to monitor a memory access made by the processor core to the volatile memory to determine whether the processor core memory access is within the page address and, if so, to allow monitoring of the processor core memory access to be stopped; and a chipset coupled between the volatile memory and the processor core, wherein the chipset includes input/output monitor logic to monitor another memory access to the volatile memory by the input/output device to determine whether the other memory access of the input/output device is within the page address and, if so, to allow the other memory access of the input/output device to be performed without generating a snoop access to the processor core.

14. The system of claim 13, wherein the volatile memory is RAM, DRAM, SDRAM, or SRAM.
METHOD, DEVICE AND SYSTEM CAPABLE OF REDUCING SNOOP ACCESSES

TECHNICAL FIELD

The present invention relates generally to data processing systems and, more particularly, to methods, devices, and systems capable of reducing snoop accesses.

BACKGROUND

To improve performance, some computer systems may include one or more caches. A cache generally stores data corresponding to original data that is stored elsewhere or computed earlier. To reduce memory access latency, once data is stored in a cache, it may be used in the future by accessing the cached copy rather than re-fetching or re-computing the original data.

One type of cache used by computer systems is the central processing unit (CPU) cache. Because the CPU cache is closer to the CPU (for example, provided inside or near the CPU), it enables the CPU to access information such as recently used instructions and/or data more quickly. Using the CPU cache therefore reduces the latency associated with accessing main memory provided elsewhere in the computer system. The reduction in memory access latency in turn improves system performance. However, each time the CPU cache is accessed, the corresponding CPU enters a higher power-usage state to support the cache access, for example, to maintain coherency of the CPU cache.

The use of higher power increases heat generation, and overheating can damage computer system components. Moreover, the use of higher power increases battery consumption, for example, in mobile computing devices, which in turn reduces the time a mobile device can be used before it is recharged. The extra power consumption may additionally result in the use of a larger, heavier battery, and the heavier battery reduces the portability of mobile computing devices.

SUMMARY OF THE INVENTION

According to an aspect of the present invention, an apparatus capable of reducing snoop accesses is disclosed, including: a processor core to receive a page snoop command that identifies a page address corresponding to a memory access request issued by an input/output device, and to evict one or more cache lines that match the page address; processor monitor logic to monitor a memory access made by the processor core to determine whether the processor core memory access is within the page address and, if so, to allow monitoring of the processor core memory access to be stopped; and a chipset including input/output monitor logic to monitor another memory access made by the input/output device to determine whether the other memory access of the input/output device is within the page address and, if so, to allow the other memory access of the input/output device to be performed without generating a snoop access to the processor core.

According to another aspect of the present invention, a method capable of reducing snoop accesses is disclosed, including: receiving a page snoop command that identifies a page address corresponding to a memory access request issued by an input/output device; evicting one or more cache lines that match the page address; monitoring a memory access made by the processor core to determine whether the processor core memory access is within the page address and, if so, stopping the monitoring of the processor core memory access; and monitoring another memory access made by the input/output device to determine whether the other memory access of the input/output device is within
the page address and, if so, performing the other memory access of the input/output device without generating a snoop access to the processor core.

According to yet another aspect of the present invention, a system capable of reducing snoop accesses is disclosed, including: a volatile memory for storing data; a processor core for receiving a page snoop command, the page snoop command identifying a page address corresponding to an access request to the volatile memory issued by an input/output device, and for evicting one or more cache lines matching the page address; processor monitoring logic for monitoring memory accesses made by the processor core to the volatile memory to determine whether a memory access of the processor core is within the page address and, if so, for allowing the monitoring of the memory accesses of the processor core to stop; and a chipset coupled between the volatile memory and the processor core, wherein the chipset includes input/output monitoring logic for monitoring another memory access of the volatile memory by the input/output device to determine whether the other memory access of the input/output device is within the page address and, if so, for allowing the other memory access of the input/output device to be performed without generating a snoop access to the processor core.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is provided with reference to the accompanying drawings. In the drawings, the left-most digit of a reference number identifies the drawing in which that reference number first appears. The same reference numbers are used in different drawings to indicate similar or identical items.

FIGS. 1-3 illustrate block diagrams of computing systems according to some embodiments of the invention. FIG. 4 illustrates an embodiment of a method, performed by a processor, for reducing snoop accesses.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the embodiments. However, various embodiments of the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to obscure the particular embodiments of the present invention.

FIG. 1 illustrates a block diagram of a computing system 100 according to an embodiment of the present invention. The computing system 100 may include one or more central processing units (CPUs) 102, or processors, coupled to an interconnection network (or bus) 104. The processor 102 may be any suitable processor, such as a general-purpose processor, a network processor, and the like (including a reduced instruction set computer (RISC) processor or a complex instruction set computer (CISC) processor). In addition, the processor 102 may have a single-core or multi-core design. A processor 102 with a multi-core design may integrate different types of processor cores on the same integrated circuit (IC) die. Furthermore, a processor 102 with a multi-core design may be implemented as a symmetric or asymmetric multiprocessor.

The chipset 106 may also be coupled to the interconnection network 104. The chipset 106 may include a memory controller hub (MCH) 108. The MCH 108 may include a memory controller 110 coupled with a memory 112. The memory 112 may store data and sequences of instructions executed by the CPU 102 or any other device included in the computing system 100.
In one embodiment of the invention, the memory 112 may include one or more volatile storage (or memory) devices, such as random access memory (RAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), and the like. Non-volatile memory, such as a hard disk, may also be used. Additional devices, such as multiple CPUs and/or multiple system memories, may be coupled to the interconnection network 104.

The MCH 108 may also include a graphics interface 114 coupled to a graphics accelerator 116. In one embodiment of the invention, the graphics interface 114 may be coupled to the graphics accelerator 116 via an accelerated graphics port (AGP). In one embodiment of the invention, a display (e.g., a flat panel display) may be coupled to the graphics interface 114 through, for example, a signal converter that translates a digital representation of an image stored in a storage device, such as video memory or system memory, into display signals that are interpreted and displayed by the display. The display signals generated by the signal converter may pass through various control devices before being interpreted by, and subsequently displayed on, the display.

A hub interface 118 may couple the MCH 108 to an input/output controller hub (ICH) 120. The ICH 120 may provide an interface to input/output (I/O) devices coupled with the computing system 100. The ICH 120 may be coupled to a bus 122 through a peripheral bridge (or controller) 124, such as a peripheral component interconnect (PCI) bridge, a universal serial bus (USB) controller, and the like. The bridge 124 may provide a data path between the CPU 102 and peripheral devices. Other types of topologies may be used. Moreover, multiple buses may be coupled to the ICH 120, for example, through multiple bridges or controllers. In addition, in various embodiments of the present invention, other peripherals coupled to the ICH 120 may include integrated drive electronics (IDE) or small computer system interface (SCSI) hard drives, USB ports, a keyboard, a mouse, parallel ports, serial ports, floppy disk drives, digital output support (e.g., digital video interface (DVI)), and so on.

The bus 122 may be coupled to an audio device 126, one or more disk drives 128, and a network interface device 130. Other devices may be coupled to the bus 122. Moreover, in some embodiments of the invention, various components (e.g., the network interface device 130) may be coupled to the MCH 108. In addition, the CPU 102 and the MCH 108 may be combined to form a single chip. Furthermore, in other embodiments of the present invention, the graphics accelerator 116 may be included in the MCH 108.

In addition, the computing system 100 may include volatile and/or non-volatile memory (or storage devices). For example, non-volatile memory may include one or more of the following: read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), disk drives (e.g., 128), floppy disks, compact disc ROM (CD-ROM), digital versatile disks (DVD), flash memory, magneto-optical disks, or other types of non-volatile machine-readable media suitable for storing electronic instructions and/or data.

FIG. 2 illustrates a computing system 200 configured as a point-to-point (PtP) structure according to one embodiment of the present invention. In particular, FIG. 2 shows a system in which processors, memory, and input/output devices are interconnected through multiple point-to-point interfaces.

The system 200 of FIG. 2 may also include multiple processors.
For clarity, only two of the processors, namely processors 202 and 204, are shown. The processors 202 and 204 may each include a local memory controller hub (MCH) 206 and 208 to couple with memories 210 and 212. The processors 202 and 204 may be any suitable processors, such as those discussed with reference to the processor 102 of FIG. 1. The processors 202 and 204 may exchange data via a point-to-point (PtP) interface 214 using PtP interface circuits 216 and 218, respectively. Each of the processors 202 and 204 may exchange data with the chipset 220 via separate PtP interfaces 222 and 224 using point-to-point interface circuits 226, 228, 230, and 232. The chipset 220 may also use a PtP interface circuit 237 to exchange data with a high-performance graphics circuit 234 via a high-performance graphics interface 236.

At least one embodiment of the invention may be located within the processors 202 and 204. However, other embodiments of the invention may exist in other circuits, logic units, or devices in the system 200 of FIG. 2. In addition, other embodiments of the present invention may be distributed among the multiple circuits, logic units, or devices shown in FIG. 2.

The chipset 220 may be coupled to a bus 240 using a PtP interface circuit 241. The bus 240 may have one or more devices coupled to it, such as a bus bridge 242 and I/O devices 243. Via a bus 244, the bus bridge 242 may be coupled to other devices, such as a keyboard/mouse 245, communication devices 246 (e.g., modems, network interface devices, etc.), audio I/O devices 247, and/or a data storage device 248. The data storage device 248 may store code 249 executable by the processors 202 and/or 204.

FIG. 3 illustrates an embodiment of a computing system 300. The system 300 may include a CPU 302. In one embodiment, the CPU 302 may be any suitable processor, such as the processor 102 of FIG. 1 or the processors 202-204 of FIG. 2. The CPU 302 may be coupled to the chipset 304 via an interconnection network 305 (e.g., the interconnection network 104 of FIG. 1 or the PtP interfaces 222 and 224 of FIG. 2). In one embodiment, the chipset 304 is the same as or similar to the chipset 106 of FIG. 1 or the chipset 220 of FIG. 2.

The CPU 302 may include one or more processor cores 306 (such as those discussed with reference to the processor 102 of FIG. 1 or the processors 202-204 of FIG. 2). The CPU 302 may also include one or more caches 308 (which may be shared in one embodiment of the invention), such as a level 1 (L1) cache, a level 2 (L2) cache, or a level 3 (L3) cache, to store instructions and/or data used by one or more components of the system 300. The various components of the CPU 302 may be directly coupled to the cache 308 via a bus and/or a memory controller or hub (e.g., the memory controller 110 of FIG. 1, the MCH 108 of FIG. 1, or the MCHs 206-208 of FIG. 2). Moreover, one or more components that implement the memory monitoring functionality, which is discussed further with reference to FIG. 4, may be included in the CPU 302. For example, processor monitoring logic 310 may be included to monitor memory accesses made by the processor core 306. The various components of the CPU 302 may be provided on the same integrated circuit die.

As illustrated in FIG. 3, the chipset 304 may include an MCH 312 (e.g., the MCH 108 of FIG. 1 or the MCHs 206-208 of FIG. 2) that provides access to a memory 314 (e.g., the memory 112 of FIG. 1 or the memories 210-212 of FIG. 2).
Therefore, the processor monitoring logic 310 may monitor memory accesses to the memory 314 by the processor core 306. The chipset 304 may also include an ICH 316 to provide access to one or more I/O devices 318 (such as those discussed with reference to FIGS. 1 and 2). The ICH 316 may include a bridge to allow communication with the various I/O devices 318 through a bus 319, such as the ICH 120 of FIG. 1 or the PtP interface circuit 241 coupled with the bus bridge 242 of FIG. 2. In one embodiment, the I/O device 318 may be a block I/O device capable of transferring data to and from the memory 314.

Moreover, one or more components that implement the memory monitoring functionality, which is discussed further with reference to FIG. 4, may be included in the chipset 304. For example, I/O monitoring logic 320 may be included to provide page snoop commands that evict one or more cache lines in the cache 308. The I/O monitoring logic 320 may also enable the processor monitoring logic 310, for example, based on traffic from the I/O device 318. The I/O monitoring logic 320 may thus monitor traffic to and from the I/O device 318, such as memory accesses by the I/O device 318 to the memory 314. In one embodiment, the I/O monitoring logic 320 may be coupled between a memory controller (e.g., the memory controller 110 of FIG. 1) and a peripheral bridge (e.g., the bridge 124 of FIG. 1). Moreover, the I/O monitoring logic 320 may be located in the MCH 312. The various components of the chipset 304 may be arranged on the same integrated circuit die. For example, the I/O monitoring logic 320 and the memory controller (e.g., the memory controller 110 of FIG. 1) may be provided on the same integrated circuit die.

FIG. 4 illustrates an embodiment of a method 400 for reducing snoop accesses performed by a processor. Generally, when main memory (e.g., 314) is accessed, a snoop access may be issued to the processor core 306, for example, to maintain memory coherency. In one embodiment, the snoop access may result from traffic caused by the I/O device 318 of FIG. 3. For example, the controller of a block I/O device (e.g., a USB controller) may periodically access the memory 314. Each access made by the I/O device 318 may cause a snoop access (e.g., of the processor core 306) to determine whether the memory region being accessed (e.g., a portion of the memory 314) resides in, for example, the cache 308, in order to keep the cache 308 coherent with the memory 314.

In one embodiment, various components of the system 300 of FIG. 3 may be utilized to perform the operations discussed with reference to FIG. 4. For example, steps 402-404 and (optionally) 410 may be performed by the I/O monitoring logic 320. Steps 406 and 408 may be performed by the processor core 306. Step 416 may be performed by the MCH 312 and/or the I/O device 318. Steps 412-414 and 418-420 may be performed by the processor monitoring logic 310.

Referring to FIGS. 3 and 4, the I/O monitoring logic 320 may receive a memory access request from one or more block I/O devices 318 (402). The I/O monitoring logic 320 may analyze the received request (402) to determine a corresponding region of memory (e.g., in the memory 314). The I/O monitoring logic 320 may issue a page snoop command (404), which identifies the page address corresponding to the memory access by the block I/O device 318. For example, the page address may identify a region within the memory 314. In one embodiment, the I/O device 318 may access a contiguous memory region of 4K bytes or 8K bytes.
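To make the page-address derivation concrete, the following is a minimal sketch (illustrative only: the 4K-byte granularity follows the example just given, and the constant and function names are hypothetical and appear nowhere in the embodiments). The page address carried by the page snoop command (404) can be obtained by masking off the in-page offset bits of the requested address:

    PAGE_SIZE = 4096  # assumed: the 4K-byte contiguous region noted above

    def page_address(addr):
        # Mask off the in-page offset bits, leaving the page address that
        # the page snoop command (404) would carry.
        return addr & ~(PAGE_SIZE - 1)

For an 8K-byte region, PAGE_SIZE would simply be 8192.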
In one embodiment, the I / O device 318 can access a continuous memory area of 4K bytes or 8K bytes.I / O monitoring logic 320 may enable processor monitoring logic 310 (406). The processor core 306 may receive (e.g., generated at step 404) page snooping (408), and evict one or more cache lines (e.g., in cache 308) (410). At step 412, memory accesses can be monitored. For example, I / O monitoring logic 320 may monitor traffic to and from I / O device 318, for example, by monitoring transactions on a communication interface (eg, central interface 118 of FIG. 1 or bus 240 of FIG. 2). In addition, after being enabled (406), the processor monitoring logic 310 may monitor memory accesses made by the processor core 306 (412). For example, the processor monitoring logic 310 may monitor memory transactions on the interconnection network 305 that attempt to access the memory 314.At step 414, if the processor monitoring logic 310 determines that the memory access made by the processor core 306 is an access to the page address of step 404, then for example through the processor monitoring logic 310, the processor may be reset at step 416 and / or I / O monitoring logic (310 and 320). Therefore, monitoring of memory access can be stopped (412). After step 416, the method 400 may continue at step 402. Otherwise, if at step 414 the processor monitoring logic 310 determines that the memory access made by the processor core 306 is not an access to the page address of step 404, then the method 400 may continue to step 418.At step 418, if the I / O monitoring logic 320 determines that the memory access made by the block I / O device (318) is an access to the page address of step 404, it may, for example, not generate an In case of access to the memory (314) (420). Otherwise, the method 400 continues at step 404 to process the memory access request of the block I / O device (318) to the new area of the memory (314). Although FIG. 4 illustrates that step 414 may precede step 418, step 414 may also be performed after step 418. Moreover, in one embodiment, steps 414 and 418 may be performed asynchronously.In one embodiment, data to and from I / O device 318 may be loaded into cache 308 less frequently than other content accessed more frequently by processor core 306. Therefore, the method 400 may reduce the listening access performed by the processor (eg, processor core 306), where memory is generated by block I / O device traffic to the page address (404) that has been evicted from the cache 308 access. This implementation enables the processor (eg, processor core 306) to avoid leaving the low power state and perform listening access.For example, according to the implementation of the ACPI specification (Advanced Configuration and Power Interface specification, Revision 3.0, September 2, 2004), the processor (such as the processor core 306) can reduce the time spent in the C2 state, which is used more than the C3 state The power. For each USB device memory access (which occurs every 1 millisecond, regardless of whether the memory access requires monitor access), the processor (eg, processor core 306) can enter the C2 state to perform monitor access. For example, referring to FIGS. 3 and 4, the embodiments discussed herein may limit the generation of unnecessary listening accesses, for example, when a block I / O device is accessing a page address (404, 410) that was previously evicted. 
In this manner, a single page snoop command (404) and a single eviction of the corresponding cache lines (410) may suffice for a commonly accessed region of the memory (314). The reduced power consumption may result in a longer battery life and/or a smaller battery size in a mobile computing device.

In various embodiments, one or more of the operations discussed herein, for example, with reference to FIGS. 1-4, may be implemented as hardware (e.g., logic circuits), software, firmware, or a combination thereof, which may be provided as a computer program product, for example, a machine-readable or computer-readable medium having instructions stored thereon used to program a computer to perform the processing discussed herein. The machine-readable medium may include any suitable storage device, such as those discussed with reference to FIGS. 1-3.

In addition, such a computer-readable medium may be downloaded as a computer program product, where the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) via a communication link (e.g., a modem or network connection) by way of data signals embodied in a carrier wave or other propagation medium. Accordingly, herein, a carrier wave shall be regarded as comprising a machine-readable medium.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one implementation. The appearances of the phrase "in one embodiment" in various places in the specification may or may not all refer to the same embodiment.

Moreover, in the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. In some embodiments, "connected" may be used to indicate that two or more elements are in direct physical or electrical contact with each other. "Coupled" may mean that two or more elements are in direct physical or electrical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but may still cooperate or interact with each other.

Thus, although embodiments of the invention have been described in language specific to structural features and/or methodological acts, it is to be understood that the claimed subject matter may not be limited to the specific features or acts described. Rather, the specific features and acts are disclosed as sample forms of implementing the claimed subject matter.
Gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, and methods of fabricating gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, are described. For example, an integrated circuit structure includes an insulator fin on an insulator substrate. A vertical arrangement of horizontal semiconductor nanowires is over the insulator fin. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires, and the gate stack is overlying the insulator fin. A pair of epitaxial source or drain structures is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the insulator fin.
1. An integrated circuit structure, comprising:
an insulator fin on an insulator substrate;
a vertical arrangement of horizontal semiconductor nanowires over the insulator fin;
a gate stack surrounding a channel region of the vertical arrangement of horizontal semiconductor nanowires, the gate stack overlying the insulator fin; and
a pair of epitaxial source or drain structures at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the insulator fin.
2. The integrated circuit structure of claim 1, wherein the insulator fin has a vertical thickness approximately the same as a vertical thickness of each of the nanowires of the vertical arrangement of horizontal semiconductor nanowires.
3. The integrated circuit structure of claim 1, wherein the insulator fin has a vertical thickness greater than a vertical thickness of each of the nanowires of the vertical arrangement of horizontal semiconductor nanowires.
4. The integrated circuit structure of claim 1, wherein the insulator fin has a vertical thickness less than a vertical thickness of each of the nanowires of the vertical arrangement of horizontal semiconductor nanowires.
5. The integrated circuit structure of claim 1, 2, 3 or 4, wherein the insulator fin comprises silicon oxide, and the vertical arrangement of horizontal semiconductor nanowires comprises silicon.
6. The integrated circuit structure of claim 1, 2, 3 or 4, wherein the vertical arrangement of horizontal semiconductor nanowires comprises silicon germanium or a group III-V material.
7. The integrated circuit structure of claim 1, 2, 3, 4, 5 or 6, wherein the insulator substrate comprises a layer of silicon oxide, and the insulator fin is on the layer of silicon oxide.
8. The integrated circuit structure of claim 1, 2, 3, 4, 5 or 6, wherein the insulator substrate comprises a layer of silicon nitride, and the insulator fin is on the layer of silicon nitride.
9. The integrated circuit structure of claim 1, 2, 3, 4, 5, 6, 7 or 8, wherein a bottom of the pair of epitaxial source or drain structures is on the insulator substrate.
10. The integrated circuit structure of claim 9, wherein the bottom of the pair of epitaxial source or drain structures is co-planar with a bottom of the insulator fin.
11. The integrated circuit structure of claim 1, 2, 3, 4, 5, 6, 7, 8, 9 or 10, wherein the insulator substrate comprises a remnant catalyst material beneath the insulator fin.
12. The integrated circuit structure of claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or 11, wherein the pair of epitaxial source or drain structures is a pair of non-discrete epitaxial source or drain structures.
13. The integrated circuit structure of claim 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 or 12, wherein the gate stack comprises a high-k gate dielectric layer and a metal gate electrode.
14. A method of fabricating an integrated circuit structure, the method comprising:
forming a vertical arrangement of horizontal semiconductor nanowires above a semiconductor fin above an insulator substrate;
forming a dummy gate stack surrounding a channel region of the vertical arrangement of horizontal semiconductor nanowires, the dummy gate stack overlying the semiconductor fin;
forming a pair of epitaxial source or drain structures at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the semiconductor fin;
removing the dummy gate stack;
oxidizing the semiconductor fin to form an insulator fin; and
forming a permanent gate stack surrounding a channel region of the vertical arrangement of horizontal semiconductor nanowires, the permanent gate stack overlying the insulator fin.
15. The method of claim 14, wherein forming the pair of epitaxial source or drain structures comprises forming a non-discrete pair of epitaxial source or drain structures.
TECHNICAL FIELD

Embodiments of the disclosure are in the field of integrated circuit structures and processing and, in particular, gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, and methods of fabricating gate-all-around integrated circuit structures having an insulator fin on an insulator substrate.

BACKGROUND

For the past several decades, the scaling of features in integrated circuits has been a driving force behind an ever-growing semiconductor industry. Scaling to smaller and smaller features enables increased densities of functional units on the limited real estate of semiconductor chips. For example, shrinking transistor size allows for the incorporation of an increased number of memory or logic devices on a chip, lending to the fabrication of products with increased capacity. The drive for ever-more capacity, however, is not without issue. The necessity to optimize the performance of each device becomes increasingly significant.

In the manufacture of integrated circuit devices, multi-gate transistors, such as tri-gate transistors, have become more prevalent as device dimensions continue to scale down. In conventional processes, tri-gate transistors are generally fabricated on either bulk silicon substrates or silicon-on-insulator substrates. In some instances, bulk silicon substrates are preferred due to their lower cost and because they enable a less complicated tri-gate fabrication process. In another aspect, maintaining mobility improvement and short channel control as microelectronic device dimensions scale below the 10 nanometer (nm) node provides a challenge in device fabrication. Nanowires used to fabricate devices provide improved short channel control.

Scaling multi-gate and nanowire transistors has not been without consequence, however. As the dimensions of these fundamental building blocks of microelectronic circuitry are reduced and as the sheer number of fundamental building blocks fabricated in a given region is increased, the constraints on the lithographic processes used to pattern these building blocks have become overwhelming.
In particular, there may be a trade-off between the smallest dimension of a feature patterned in a semiconductor stack (the critical dimension) and the spacing between such features.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1A illustrates a cross-sectional view of a gate-all-around integrated circuit structure on an insulator substrate.

Figure 1B illustrates a cross-sectional view of a gate-all-around integrated circuit structure on a semiconductor substrate.

Figure 1C illustrates a cross-sectional view of a gate-all-around integrated circuit structure on a semiconductor body on an insulator substrate.

Figure 2A illustrates a cross-sectional view of a gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with an embodiment of the present disclosure.

Figure 2B illustrates a cross-sectional view of another gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with another embodiment of the present disclosure.

Figures 3A and 3B illustrate a gate cut cross-sectional view and a fin cut cross-sectional view, respectively, of a gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with an embodiment of the present disclosure.

Figure 4 illustrates a cross-sectional view of another gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with another embodiment of the present disclosure.

Figure 5 illustrates an angled cross-sectional view of another gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with another embodiment of the present disclosure.

Figure 6 illustrates a cross-sectional view of a non-planar integrated circuit structure as taken along a gate line, in accordance with an embodiment of the present disclosure.

Figure 7 illustrates cross-sectional views taken through nanowires and fins for a non-endcap architecture (left-hand side (a)) versus a self-aligned gate endcap (SAGE) architecture (right-hand side (b)), in accordance with an embodiment of the present disclosure.

Figure 8A illustrates a three-dimensional cross-sectional view of a nanowire-based integrated circuit structure, in accordance with an embodiment of the present disclosure.

Figure 8B illustrates a cross-sectional source or drain view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the a-a' axis, in accordance with an embodiment of the present disclosure.

Figure 8C illustrates a cross-sectional channel view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the b-b' axis, in accordance with an embodiment of the present disclosure.

Figure 9 illustrates a computing device in accordance with one implementation of an embodiment of the disclosure.

Figure 10 illustrates an interposer that includes one or more embodiments of the disclosure.

DESCRIPTION OF THE EMBODIMENTS

Gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, and methods of fabricating gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, are described. In the following description, numerous specific details are set forth, such as specific integration and material regimes, in order to provide a thorough understanding of embodiments of the present disclosure.
It will be apparent to one skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to not unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be appreciated that the various embodiments shown in the Figures are illustrative representations and are not necessarily drawn to scale.

Certain terminology may also be used in the following description for the purpose of reference only, and thus is not intended to be limiting. For example, terms such as "upper", "lower", "above", and "below" refer to directions in the drawings to which reference is made. Terms such as "front", "back", "rear", and "side" describe the orientation and/or location of portions of the component within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

Embodiments described herein may be directed to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in the semiconductor substrate or layer. FEOL generally covers everything up to (but not including) the deposition of metal interconnect layers. Following the last FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wires).

Embodiments described herein may be directed to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication where the individual devices (e.g., transistors, capacitors, resistors, etc.) are interconnected with wiring on the wafer, e.g., the metallization layer or layers. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL part of the fabrication stage, contacts (pads), interconnect wires, vias, and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

Embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

One or more embodiments described herein are directed to nanowire (NW) or nanoribbon (NR) devices formed on silicon-on-insulator (SOI) substrates with a removed (oxidized) body beneath a channel region and a remaining body beneath source and drain regions to enable high-quality bottom-seeded epitaxial growth and eliminate a need for subfin doping isolation. Embodiments may be directed to isolation schemes for nanowire (NW) and/or nanoribbon (NR) transistors using insulator fins on insulator substrates. Embodiments may be implemented to provide a nanowire/nanoribbon transistor having reduced leakage.
References herein to a nanowire may encompass nanowires sized as wires or as ribbons, unless nanowire-only dimensions are specifically stated.

To provide context, in state-of-the-art gate-all-around (GAA) technology, the source/drain (S/D) junction can connect to the substrate, leading to an undesired high-leakage path. State-of-the-art solutions for blocking or inhibiting source-to-drain leakage through semiconductor structures (such as subfin structures) beneath a nanowire device include subfin doping and/or physically increasing a gap between nanowires/nanoribbons and the underlying substrate structure. Both approaches, however, are associated with added process complexity.

Embodiments of the present disclosure may be implemented to provide for: (1) the use of a bottom-seeded epitaxial region in a source/drain of a NW/NR device formed on an SOI substrate or layer-transferred substrate (such as GeOI, SiGeOI, III-VOI, etc.), (2) a NW/NR process which does not require a subfin isolation scheme, yet provides high channel strain and good quality epi S/D regions, and/or (3) selective depopulation of an SOI body beneath a gated region.

To provide further context, there are several integration approaches for forming NW/NR devices: (1) forming a NW/NR device on an SOI or layer-transferred bulk substrate with an epitaxial region seeded laterally from channel stubs (e.g., as described below in association with Figure 1A), and (2) forming a NW/NR device on a bulk or SOI substrate with an epitaxial region seeded from the substrate beneath and from channel stubs (e.g., as described below in association with Figure 1B for a bulk substrate, and in association with Figure 1C for an SOI or XOI substrate).

As a comparative example, Figure 1A illustrates a cross-sectional view of a gate-all-around integrated circuit structure on an insulator substrate.

Referring to Figure 1A, an integrated circuit structure 100 is on an insulator substrate 104/102, such as a substrate having an insulator layer 104 (such as silicon oxide) on a bulk semiconductor material 102 (such as crystalline silicon). A vertical arrangement of horizontal semiconductor nanowires 106 is over the insulator substrate 104/102. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 106, the gate stack including a gate electrode 108 and a gate dielectric 110. The gate stack is on the insulator layer 104 of the insulator substrate 104/102. A gate spacer 112 is on either side of the gate stack. A pair of epitaxial source or drain structures 114 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 106 and on the insulator layer 104 of the insulator substrate 104/102. Source or drain contacts 116 are on the pair of epitaxial source or drain structures 114. In one embodiment, the pair of epitaxial source or drain structures 114 includes defects 118.

Referring again to Figure 1A, a NW/NR device can be fabricated with source/drain (S/D) epitaxial material (epi) seeded laterally from channel stubs (within circled regions 107). Seeding epi in this manner has been shown to produce defected/low-quality epi in the S/D. For simplicity, the epi shape in the S/D is shown generically, but it may be faceted, incompletely filled, voided, etc. Forming the device on an SOI substrate does, however, eliminate the need for a subfin doping solution to eliminate leakage current and to provide for CMOS isolation.
The poor-quality epi grown in this and similar devices, however, may not produce the high channel stress needed for optimal device performance.

As another comparative example, Figure 1B illustrates a cross-sectional view of a gate-all-around integrated circuit structure on a semiconductor substrate.

Referring to Figure 1B, an integrated circuit structure 120 is on a bulk semiconductor substrate 122 (such as a bulk crystalline silicon substrate). A vertical arrangement of horizontal semiconductor nanowires 126 is over the bulk semiconductor substrate 122. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 126, the gate stack including a gate electrode 128 and a gate dielectric 130. The gate stack is on the bulk semiconductor substrate 122. A gate spacer 132 is on either side of the gate stack. A pair of epitaxial source or drain structures 134 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 126 and on the bulk semiconductor substrate 122. Source or drain contacts 136 are on the pair of epitaxial source or drain structures 134.

Referring again to Figure 1B, a NW/NR structure is formed on a bulk substrate with S/D epi seeded largely from the exposed horizontal substrate 123 and less so from the channel stubs 127. This configuration of epi growth has been shown experimentally to produce a much higher-quality, less-defected epi region than the structure shown in Figure 1A. The structure of Figure 1B may, however, require a subfin isolation doping scheme to eliminate the subfin leakage paths (such as 138) and provide for CMOS isolation.

As another comparative example, Figure 1C illustrates a cross-sectional view of a gate-all-around integrated circuit structure on a semiconductor body on an insulator substrate.

Referring to Figure 1C, an integrated circuit structure 140 is on a semiconductor body 145 (such as a silicon body) on a buried oxide layer 144 (such as a silicon oxide layer) on a bulk semiconductor material 142 (such as crystalline silicon). A vertical arrangement of horizontal semiconductor nanowires 146 is over the semiconductor body 145. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 146, the gate stack including a gate electrode 148 and a gate dielectric 150. The gate stack is on the semiconductor body 145. A gate spacer 152 is on either side of the gate stack. A pair of epitaxial source or drain structures 154 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 146 and on the semiconductor body 145. Source or drain contacts 156 are on the pair of epitaxial source or drain structures 154.

Referring again to Figure 1C, in a structure similar to that of Figure 1B, a state-of-the-art NW/NR device is fabricated on an SOI or XOI substrate with bottom-seeded epi (e.g., seeded largely or entirely from the exposed horizontal substrate 143 and less so, or not at all, from the channel stubs 147). This structure also may require a subfin doping scheme to prevent leakage current (pathway 158) and provide for CMOS isolation.

Disadvantages of the structures of Figures 1A-1C include the trade-off between channel strain and a need for a complicated subfin isolation solution. In many regards, a subfin isolation doping scheme for a NW/NR device is more complicated than that required for a finFET.
Specifically, all of the NWs/NRs in the same device may need to have the same nominal doping (and ideally be undoped for optimal mobility) so as to have the same electrostatics (i.e., one wire should not conduct before or after the other wires, and the Vt should be the same).

Providing further context, to prevent subfin conduction, doping of approximately 3E18/cm3 may be required beneath the gate in the substrate (bulk device) or body region (XOI device). To provide for the highest mobility, the lowest NW/NR may be undoped (or doped effectively less than about 3E16/cm3). Such a doping gradient cannot be easily realized for a wide ribbon/wire via implant alone for greater than two wires spaced about 10 nm apart. Rather, a complicated implant/dose-loss process may be required, which will likely result in less optimal performance of the lower-most NW/NR. Embodiments described herein eliminate the need for such a complicated integration process and provide for high-quality epitaxial S/D growth.

Embodiments described herein may be implemented to include benefits or advantages from each of the approaches described in association with Figures 1A-1C to provide a device structure based on a bottom-seeded epi fabricated from an SOI/XOI body, with the channel region body material "removed" (oxidized) beneath the channel. The process may eliminate the need for a subfin isolation process. Advantages of implementing embodiments described herein include providing for high channel stress (through less-defected, higher-quality epi) and eliminating the need for a subfin isolation scheme on a NW/NR device. Value can be realized as a higher-performing device (higher channel strain) and a less costly, easier integration (no need for subfin isolation). Processes described herein integrate well with non-Si NW/NR structures (which are potentially much easier to integrate from an XOI or layer-transferred substrate).

In accordance with an embodiment of the present disclosure, regarding detectability in a final product, integrated circuit structures described herein can differ from state-of-the-art NW/NR structures in regard to the "depopulated" body region beneath the channel and the remnant body region beneath the S/D of the device. Structures can be formed on an SOI/XOI substrate with the body region selectively depopulated beneath the gate of the device.

As an exemplary device having an insulator fin on an insulator substrate, Figure 2A illustrates a cross-sectional view of a gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with an embodiment of the present disclosure.

Referring to Figure 2A, an integrated circuit structure 200 is in a dielectric layer 218 on an insulator fin 211 on an insulator substrate 204/202, such as a substrate having an insulator layer 204 (such as silicon oxide) on a bulk semiconductor material 202 (such as crystalline silicon). A vertical arrangement of horizontal semiconductor nanowires 206 is over the insulator fin 211. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 206, the gate stack including a gate electrode 208 and a gate dielectric 210. The gate stack also overlies the insulator fin 211. A gate spacer 212 is on either side of the gate stack. A pair of epitaxial source or drain structures 214 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 206.
In one embodiment, the pair of epitaxial source or drain structures 214 is on corresponding lower source or drain portions 205 at first and second ends of the insulator fin 211. Thus, source or drain structures for the integrated circuit structure 200 may be formed as epitaxial source or drain structures 214 together with lower source or drain portions 205. Source or drain contacts 216 are on the pair of epitaxial source or drain structures 214.

Referring again to Figure 2A, epi is seeded from a crystalline body of an SOI/XOI layer at surface 203 (as opposed to being seeded from channel stubs 207). In one embodiment, the body region is oxidized beneath the channel of the device, which eliminates the need for subfin isolation doping (i.e., the oxidized body provides for isolation).

As another exemplary device having an insulator fin on an insulator substrate, Figure 2B illustrates a cross-sectional view of another gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with another embodiment of the present disclosure.

Referring to Figure 2B, an integrated circuit structure 220 is in a dielectric layer 238 on an insulator fin 231 on an insulator substrate 224/222, such as a substrate having an insulator layer 224 (such as silicon oxide) on a bulk semiconductor material 222 (such as crystalline silicon). A vertical arrangement of horizontal semiconductor nanowires 226 is over the insulator fin 231. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 226, the gate stack including a gate electrode 228 and a gate dielectric 230. The gate stack also overlies the insulator fin 231. A gate spacer 232 is on either side of the gate stack. A pair of epitaxial source or drain structures 234 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 226. In one embodiment, the pair of epitaxial source or drain structures 234 is on corresponding lower source or drain portions 225 at first and second ends of the insulator fin 231. Thus, source or drain structures for the integrated circuit structure 220 may be formed as epitaxial source or drain structures 234 together with lower source or drain portions 225. Source or drain contacts 236 are on the pair of epitaxial source or drain structures 234.

Referring again to Figure 2B, epi is seeded from a crystalline body at surface 223 (as opposed to being seeded from channel stubs 237). A body region is oxidized with a catalytic oxidant (catox) material from both above and below to form the insulator fin 231. In one such embodiment, the process may also result in a region of the substrate being oxidized to form an oxidized substrate portion 242. A remnant region 240 of the catalytic oxidant source may also exist in this region. In one embodiment, the catalytic oxidant source is alumina (AlOx).

With reference to a process flow for fabricating the structures of Figures 2A and 2B, in accordance with an embodiment of the present disclosure, a method of fabricating an integrated circuit structure includes forming a vertical arrangement of horizontal semiconductor nanowires above a semiconductor fin above an insulator substrate. A dummy gate stack is then formed, the dummy gate stack surrounding a channel region of the vertical arrangement of horizontal semiconductor nanowires, and the dummy gate stack overlying the semiconductor fin.
A pair of epitaxial source or drain structures is formed at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the semiconductor fin. The dummy gate stack is then removed. The semiconductor fin is then oxidized to form an insulator fin. A permanent gate stack is then formed, the permanent gate stack surrounding the channel region of the vertical arrangement of horizontal semiconductor nanowires, and the permanent gate stack overlying the insulator fin.

In an embodiment, the semiconductor fin is oxidized to form the insulator fin using catalytic oxidation within a gate tub formed upon removal of the dummy gate stack. In one embodiment, alumina (AlOx) is used as a catalytic oxidant. The catalytic oxidant is deposited in the gate tub and then recessed to be confined to a subfin structure or lowest nanowire. The subfin structure or lowest nanowire is then oxidized using the catalytic oxidant, which may later be removed or retained. In an alternative embodiment, a helmeted process is used to enable a wet etch of an underlying semiconductor body.

Further advantages of implementing embodiments described herein include the ability to fabricate a robust transistor structure for a low power product or application. In some embodiments, a starting Si thickness in the SOI substrate is either the same as or different from the overlying nanowire or nanoribbon channel Si thickness. In some embodiments, the channel material is Si or is a material other than Si, such as SiGe or group III-V materials. In some embodiments, the insulator material is silicon oxide or is different from silicon oxide, such as silicon nitride. In some embodiments, a source or drain structure lands on the top surface of the insulator substrate.

As another exemplary device having an insulator fin on an insulator substrate, Figures 3A and 3B illustrate a gate cut cross-sectional view and a fin cut cross-sectional view, respectively, of a gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with an embodiment of the present disclosure.

Referring to Figures 3A and 3B, an integrated circuit structure 300 includes an insulator fin 305 on an insulator substrate 304/302, such as a substrate having an insulator layer 304 (such as silicon oxide) on a bulk semiconductor material 302 (such as crystalline silicon). A vertical arrangement of horizontal semiconductor nanowires 306 is over the insulator fin 305. A gate stack 308/308A surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 306. The gate stack 308/308A also overlies the insulator fin 305 (e.g., is along a top and sides of the fin 305). Gate spacers 314 may also be included. A pair of epitaxial source or drain structures 316 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 306 and at first and second ends of the insulator fin 305. In one embodiment, the pair of epitaxial source or drain structures 316 is formed as lower and upper portions, indicated by dashed line 399.

In an embodiment, the insulator fin 305 has a vertical thickness approximately the same as a vertical thickness of each of the horizontal semiconductor nanowires 306 of the vertical arrangement of horizontal semiconductor nanowires 306, as is depicted.
In another embodiment, the insulator fin 305 has a vertical thickness greater than a vertical thickness of each of the horizontal semiconductor nanowires 306 of the vertical arrangement of horizontal semiconductor nanowires 306. In another embodiment, the insulator fin 305 has a vertical thickness less than a vertical thickness of each of the horizontal semiconductor nanowires 306 of the vertical arrangement of horizontal semiconductor nanowires 306.

In an embodiment, the insulator fin 305 includes silicon oxide, and the vertical arrangement of horizontal semiconductor nanowires 306 includes silicon. In another embodiment, the vertical arrangement of horizontal semiconductor nanowires 306 includes silicon germanium. In another embodiment, the vertical arrangement of horizontal semiconductor nanowires 306 includes a group III-V material.

In an embodiment, the insulator substrate 304/302 includes a layer 304 of silicon oxide, and the insulator fin 305 is on the layer of silicon oxide. In another embodiment, the insulator substrate 304/302 includes a layer 304 of silicon nitride, and the insulator fin 305 is on the layer of silicon nitride.

In an embodiment, a bottom of the pair of epitaxial source or drain structures 316 is on the insulator substrate 304/302, as is depicted. In one such embodiment, the bottom of the pair of epitaxial source or drain structures 316 is co-planar with a bottom of the insulator fin 305, as is depicted. In an embodiment, the pair of epitaxial source or drain structures 316 is a pair of non-discrete epitaxial source or drain structures, as is depicted, and as is described in greater detail below.

For clarity of illustration, gate stacks 308 and 308A are depicted as separate structures. However, in an embodiment, the regions 308 and 308A are continuous structures. In one such embodiment, the gate stack includes a gate electrode 312 and a gate dielectric 310. It is to be appreciated that both the gate electrode 312 and the gate dielectric 310 may be continuous around and between the insulator fin 305 and the vertical arrangement of horizontal semiconductor nanowires 306.

As another exemplary device having an insulator fin on an insulator substrate, Figure 4 illustrates a cross-sectional view of another gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with another embodiment of the present disclosure.

Referring to Figure 4, an integrated circuit structure 400 includes a vertical arrangement of horizontal semiconductor nanowires 406 above an insulator fin 405. A gate stack 408A/408B (with gate electrode 408A and gate dielectric 408B) surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires 406 and overlies the insulator fin 405 (e.g., is along a top and sides of the insulator fin 405, although only the former is depicted in the view of Figure 4; side coverage by the gate stack 408A/408B along the sides of the insulator fin 405 is at locations into and out of the page from the perspective of Figure 4). A pair of non-discrete epitaxial source or drain structures 410 is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 406 and at first and second ends of the insulator fin 405.
In one embodiment, the pair of epitaxial source or drain structures 410 is formed as lower and upper portions, indicated by dashed line 499.

A pair of dielectric spacers 412 is between the pair of non-discrete epitaxial source or drain structures 410 and the gate stack 408A/408B. In one embodiment, the pair of dielectric spacers 412 and the gate stack 408A/408B have co-planar top surfaces, e.g., at surface 420, as is depicted. In one such embodiment, an etch stop layer or dielectric layer 416 is formed on the surface 420. In one embodiment, the pair of dielectric spacers 412, the insulator fin 405, and the pair of non-discrete epitaxial source or drain structures 410 have co-planar bottom surfaces, e.g., at surface 430, as is depicted. The surface 430 is on an insulator substrate 454/452, such as a substrate having an insulator layer 454 (such as silicon oxide) on a bulk semiconductor material 452 (such as crystalline silicon).

In an embodiment, one or both of the pair of non-discrete epitaxial source or drain structures has a dielectric material thereon (represented by 414 in one embodiment). In one such embodiment, the dielectric material 414, the pair of dielectric spacers 412, and the gate stack 408A/408B have co-planar top surfaces, as is depicted at surface 420. In an embodiment, one or both of the pair of non-discrete epitaxial source or drain structures has a top conductive contact thereon (represented by 414 in another embodiment). In one such embodiment, the top conductive contact 414, the pair of dielectric spacers 412, and the gate stack 408A/408B have co-planar top surfaces, as is depicted at surface 420. In an embodiment, the insulator fin 405 blocks or eliminates a parasitic conduction path (e.g., path 460 from source 410 to drain 410) for improved device performance.

In accordance with an embodiment of the present disclosure, with reference again to Figure 4, a method of fabricating an integrated circuit structure 400 includes forming a vertical arrangement of horizontal semiconductor nanowires 406 above an insulator fin 405 above a semiconductor substrate (not shown). A gate stack 408A/408B is then formed, the gate stack 408A/408B surrounding a channel region of the vertical arrangement of horizontal semiconductor nanowires 406, and the gate stack 408A/408B overlying the insulator fin 405. A pair of epitaxial source or drain structures 410 is formed at first and second ends of the vertical arrangement of horizontal semiconductor nanowires 406 and at first and second ends of the insulator fin 405. The semiconductor substrate is removed to expose a bottom of the insulator fin 405 and a bottom of the epitaxial source or drain structures 410. An insulator substrate 454/452 is bonded to the bottom of the insulator fin 405 and to the bottom of the epitaxial source or drain structures 410.

It is to be appreciated that, in a particular embodiment, channel layers of nanowires (or nanoribbons) and initial (pre-oxidation) underlying fins or subfins may be composed of silicon. As used throughout, a silicon layer may be used to describe a silicon material composed of a very substantial amount of, if not all, silicon. However, it is to be appreciated that, practically, 100% pure Si may be difficult to form and, hence, could include a tiny percentage of carbon, germanium, or tin. Such impurities may be included as an unavoidable impurity or component during deposition of Si or may "contaminate" the Si upon diffusion during post-deposition processing.
As such, embodiments described herein directed to a silicon layer may include a silicon layer that contains a relatively small amount, e.g., "impurity" level, of non-Si atoms or species, such as Ge, C or Sn. It is to be appreciated that a silicon layer as described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus or arsenic.
It is to be appreciated that, in a particular embodiment, release layers between channel layers of nanowires (or nanoribbons) and underlying fins or subfins may be composed of silicon germanium. As used throughout, a silicon germanium layer may be used to describe a silicon germanium material composed of substantial portions of both silicon and germanium, such as at least 5% of both. In some embodiments, the amount of germanium is greater than the amount of silicon. In particular embodiments, a silicon germanium layer includes approximately 60% germanium and approximately 40% silicon (Si40Ge60). In other embodiments, the amount of silicon is greater than the amount of germanium. In particular embodiments, a silicon germanium layer includes approximately 30% germanium and approximately 70% silicon (Si70Ge30). It is to be appreciated that, practically, 100% pure silicon germanium (referred to generally as SiGe) may be difficult to form and, hence, could include a tiny percentage of carbon or tin. Such impurities may be included as unavoidable impurities or components during deposition of SiGe or may "contaminate" the SiGe upon diffusion during post deposition processing. As such, embodiments described herein directed to a silicon germanium layer may include a silicon germanium layer that contains a relatively small amount, e.g., "impurity" level, of non-Ge and non-Si atoms or species, such as carbon or tin. It is to be appreciated that a silicon germanium layer as described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus or arsenic.
It is to be appreciated that the embodiments described herein can also include other implementations such as nanowires and/or nanoribbons with various widths, thicknesses and/or materials including but not limited to Si, Ge, SiGe and/or group III-V materials. Described below are various devices and processing schemes that may be used to fabricate a device with an insulator fin on an insulator substrate. It is to be appreciated that the exemplary embodiments need not necessarily require all features described, or may include more features than are described.
As another exemplary device having an insulator fin on an insulator substrate, Figure 5 illustrates an angled cross-sectional view of another gate-all-around integrated circuit structure having an insulator fin on an insulator substrate, in accordance with another embodiment of the present disclosure.
Referring to Figure 5, an integrated circuit structure includes a plurality of insulator fins 505 on an insulator substrate 595/597, such as a substrate having an insulator layer 595 (such as silicon oxide) on a bulk semiconductor material 597 (such as crystalline silicon). A corresponding vertical arrangement of horizontal semiconductor nanowires 540 is over each of the plurality of insulator fins 505. A corresponding gate stack surrounds a channel region of each of the vertical arrangements of horizontal semiconductor nanowires 540. The gate stack can include a gate electrode 570 and a gate dielectric 562. Each gate stack also overlies a corresponding insulator fin 505. A gate spacer 550 is on either side of the gate stack.
Epitaxial source or drain structures 544 are at first and second ends of each of the vertical arrangements of horizontal semiconductor nanowires 540 and at first and second ends of the insulator fins 505. In one embodiment, the epitaxial source or drain structures 544 are formed as lower and upper portions, indicated by dashed line 544A. An etch stop layer 599 may be formed over the epitaxial source or drain structures 544 and the gate stacks, as is depicted.
In another aspect, integrated circuit structures described herein may be fabricated using a back-side reveal of front-side structures fabrication approach. In some exemplary embodiments, reveal of the back-side of a transistor or other device structure entails wafer-level back-side processing. A reveal of the back-side of a transistor approach may be employed, for example, to remove at least a portion of a carrier layer and intervening layer of a donor-host substrate assembly.
The process flow begins with an input of a donor-host substrate assembly. A thickness of a carrier layer in the donor-host substrate is polished (e.g., CMP) and/or etched with a wet or dry (e.g., plasma) etch process. Any grind, polish, and/or wet/dry etch process known to be suitable for the composition of the carrier layer may be employed. For example, where the carrier layer is a group IV semiconductor (e.g., silicon), a CMP slurry known to be suitable for thinning the semiconductor may be employed. Likewise, any wet etchant or plasma etch process known to be suitable for thinning the group IV semiconductor may also be employed.
In some embodiments, the above is preceded by cleaving the carrier layer along a fracture plane substantially parallel to the intervening layer. The cleaving or fracture process may be utilized to remove a substantial portion of the carrier layer as a bulk mass, reducing the polish or etch time needed to remove the carrier layer. For example, where a carrier layer is 400-900 µm in thickness, 100-700 µm may be cleaved off by practicing any blanket implant known to promote a wafer-level fracture. In some exemplary embodiments, a light element (e.g., H, He, or Li) is implanted to a uniform target depth within the carrier layer where the fracture plane is desired. Following such a cleaving process, the thickness of the carrier layer remaining in the donor-host substrate assembly may then be polished or etched to complete removal. Alternatively, where the carrier layer is not fractured, the grind, polish and/or etch operation may be employed to remove a greater thickness of the carrier layer.
Next, exposure of an intervening layer is detected. Detection is used to identify a point when the back-side surface of the donor substrate has advanced to nearly the device layer. Any endpoint detection technique known to be suitable for detecting a transition between the materials employed for the carrier layer and the intervening layer may be practiced. In some embodiments, one or more endpoint criteria are based on detecting a change in optical absorbance or emission of the back-side surface of the donor substrate during the polishing or etching performed. In some other embodiments, the endpoint criteria are associated with a change in optical absorbance or emission of byproducts during the polishing or etching of the donor substrate back-side surface. For example, absorbance or emission wavelengths associated with the carrier layer etch byproducts may change as a function of the different compositions of the carrier layer and intervening layer.
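The optical endpoint criteria described above amount to watching a monitored signal for a sustained shift as the carrier layer clears. The following Python fragment is a minimal sketch of such a detector, assuming a periodically sampled emission-intensity trace; the trace, window size, and threshold are hypothetical and not part of the disclosure.

    # Minimal sketch of an optical-emission endpoint detector.
    # Assumption: the polish/etch tool samples a byproduct emission
    # intensity at a fixed rate; the carrier layer and the intervening
    # layer produce distinctly different mean intensities.

    def detect_endpoint(trace, window=5, rel_change=0.2):
        """Return the sample index where the windowed mean shifts by more
        than rel_change relative to the starting (carrier-layer) baseline,
        or None if no endpoint is observed."""
        if len(trace) < 2 * window:
            return None
        baseline = sum(trace[:window]) / window
        for i in range(window, len(trace) - window + 1):
            mean = sum(trace[i:i + window]) / window
            if abs(mean - baseline) / baseline > rel_change:
                return i
        return None

    # Hypothetical trace: carrier-layer byproducts near 1.0, dropping
    # toward 0.5 as the intervening layer is exposed.
    trace = [1.0, 1.02, 0.99, 1.01, 1.0, 0.98, 0.97, 0.9,
             0.8, 0.7, 0.55, 0.52, 0.5, 0.5, 0.5]
    print(detect_endpoint(trace))  # prints an index within the transition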
In other embodiments, the endpoint criteria are associated with a change in mass of species in byproducts of polishing or etching the back-side surface of the donor substrate. For example, the byproducts of processing may be sampled through a quadrupole mass analyzer, and a change in the species mass may be correlated to the different compositions of the carrier layer and intervening layer. In another exemplary embodiment, the endpoint criteria are associated with a change in friction between a back-side surface of the donor substrate and a polishing surface in contact with the back-side surface of the donor substrate.
Detection of the intervening layer may be enhanced where the removal process is selective to the carrier layer relative to the intervening layer, as non-uniformity in the carrier removal process may be mitigated by an etch rate delta between the carrier layer and intervening layer. Detection may even be skipped if the grind, polish and/or etch operation removes the intervening layer at a rate sufficiently below the rate at which the carrier layer is removed. If an endpoint criterion is not employed, a grind, polish and/or etch operation of a predetermined fixed duration may stop on the intervening layer material if the thickness of the intervening layer is sufficient for the selectivity of the etch. In some examples, the ratio of the carrier etch rate to the intervening layer etch rate is 3:1-10:1, or more.
Upon exposing the intervening layer, at least a portion of the intervening layer may be removed. For example, one or more component layers of the intervening layer may be removed. A thickness of the intervening layer may be removed uniformly by a polish, for example. Alternatively, a thickness of the intervening layer may be removed with a masked or blanket etch process. The process may employ the same polish or etch process as that employed to thin the carrier, or may be a distinct process with distinct process parameters. For example, where the intervening layer provides an etch stop for the carrier removal process, the latter operation may employ a different polish or etch process that favors removal of the intervening layer over removal of the device layer. Where less than a few hundred nanometers of intervening layer thickness is to be removed, the removal process may be relatively slow, optimized for across-wafer uniformity, and more precisely controlled than that employed for removal of the carrier layer. A CMP process employed may, for example, employ a slurry that offers very high selectivity (e.g., 100:1-300:1, or more) between the semiconductor (e.g., silicon) and the dielectric material (e.g., SiO2) surrounding the device layer and embedded within the intervening layer, for example, as electrical isolation between adjacent device regions.
For embodiments where the device layer is revealed through complete removal of the intervening layer, back-side processing may commence on an exposed back-side of the device layer or specific device regions therein.
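As a rough numeric illustration of the selectivity figures quoted above, the following sketch estimates how much intervening layer is consumed while clearing a non-uniform carrier remainder by a fixed-duration etch without endpoint detection. All values are hypothetical and not taken from the disclosure.

    # Sketch: overetch budget when stopping on the intervening layer by
    # selectivity alone (no endpoint detection). All values hypothetical.

    carrier_remaining_nm = 2000.0  # carrier thickness left after cleave/grind
    nonuniformity = 0.10           # +/-10% across-wafer removal variation
    selectivity = 5.0              # carrier : intervening etch-rate ratio (3:1-10:1 range)

    # Worst case: slow spots still hold carrier material after the nominal
    # clear time, so the etch is extended; fast spots then sit in overetch
    # and attack the intervening layer at 1/selectivity of the carrier rate.
    overetch_equivalent_nm = 2 * nonuniformity * carrier_remaining_nm
    intervening_loss_nm = overetch_equivalent_nm / selectivity

    print(f"overetch (carrier-equivalent): {overetch_equivalent_nm:.0f} nm")
    print(f"intervening layer consumed:    {intervening_loss_nm:.0f} nm")
    # With 3:1 selectivity the loss grows to about 133 nm; with 10:1 it falls
    # to about 40 nm, which is why a thicker intervening layer or a higher
    # selectivity lets the fixed-duration stop described above succeed.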
In some embodiments, the back-side device layer processing includes a further polish or wet/dry etch through a thickness of the device layer disposed between the intervening layer and a device region previously fabricated in the device layer, such as a source or drain region.
The above described processing scheme may result in a donor-host substrate assembly that includes IC devices that have a back-side of an intervening layer, a back-side of the device layer, and/or a back-side of one or more semiconductor regions within the device layer, and/or front-side metallization revealed. Additional back-side processing of any of these revealed regions may then be performed during downstream processing. In accordance with one or more embodiments of the present disclosure, following a back-side reveal process, an insulator substrate, such as a substrate having an insulator layer (such as silicon oxide) on a bulk semiconductor material (such as crystalline silicon), is bonded to exposed bottom surfaces of the bottommost wires (fins) and to exposed bottom surfaces of epitaxial source or drain structures.
It is to be appreciated that the structures resulting from the above exemplary processing schemes may be used in a same or similar form for subsequent processing operations to complete device fabrication, such as PMOS and/or NMOS device fabrication. As an example of a possible completed device, and as another exemplary device having an insulator fin on an insulator substrate, Figure 6 illustrates a cross-sectional view of a non-planar integrated circuit structure as taken along a gate line, in accordance with an embodiment of the present disclosure.
Referring to Figure 6, a semiconductor structure or device 600 includes a non-planar active region (e.g., a fin structure including protruding fin portion 604) on a dielectric layer 695 of an insulator substrate 697/695. In an embodiment, instead of a solid fin, the non-planar active region is separated between regions 604A and 604B to provide a semiconductor nanowire 604A and an insulator fin 604B (e.g., oxidized semiconductor fin) with the gate structure 608 therebetween. In either case, for ease of description for non-planar integrated circuit structure 600, a non-planar active region 604 is referenced below as a protruding fin portion.
A gate line 608 is disposed over the protruding portions 604 of the non-planar active region (including, if applicable, surrounding nanowire 604A and insulator fin 604B), as well as over a portion of the dielectric layer 695. As shown, gate line 608 includes a gate electrode 650 and a gate dielectric layer 652. In one embodiment, gate line 608 may also include a dielectric cap layer 654. A gate contact 614 and overlying gate contact via 616 are also seen from this perspective, along with an overlying metal interconnect 660, all of which are disposed in inter-layer dielectric stacks or layers 670. An etch stop layer 699 may be formed on the interconnect 660 and inter-layer dielectric stacks or layers 670, as is depicted. As also seen from the perspective of Figure 6, the gate contact 614 is, in one embodiment, disposed over dielectric layer 695, but not over the non-planar active regions. In another embodiment, however, the gate contact 614 is over the non-planar active regions.
In an embodiment, the semiconductor structure or device 600 is a non-planar device such as, but not limited to, a fin-FET device, a tri-gate device, a nanoribbon device, or a nanowire device.
In such an embodiment, a corresponding semiconducting channel region is composed of or is formed in a three-dimensional body. In one such embodiment, the gate electrode stacks of gate lines 608 surround at least a top surface and a pair of sidewalls of the three-dimensional body.
Although not depicted in Figure 6, it is to be appreciated that source or drain regions of or adjacent to the protruding fin portions 604 are on either side of the gate line 608, i.e., into and out of the page. In one embodiment, the material of the protruding fin portions 604 in the source or drain locations is removed and replaced with another semiconductor material, e.g., by epitaxial deposition to form epitaxial source or drain structures. The source or drain regions may extend to the top surface of the dielectric layer 695. In accordance with an embodiment of the present disclosure, the insulator fin 604B inhibits source-to-drain leakage.
With reference again to Figure 6, in an embodiment, nanowires 604A are composed of a crystalline silicon layer which may be doped with a charge carrier, such as but not limited to phosphorus, arsenic, boron, gallium or a combination thereof. In an embodiment, insulator fin 604B is composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.
Gate line 608 may be composed of a gate electrode stack which includes a gate dielectric layer 652 and a gate electrode layer 650. In an embodiment, the gate electrode layer 650 of the gate electrode stack is composed of a metal gate and the gate dielectric layer is composed of a high-k material. For example, in one embodiment, the gate dielectric layer 652 is composed of a material such as, but not limited to, hafnium oxide, hafnium oxy-nitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, strontium titanate, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, lead zinc niobate, or a combination thereof. Furthermore, a portion of gate dielectric layer 652 may include a layer of native oxide formed from the top few layers of the nanowires 604A. In an embodiment, the gate dielectric layer 652 is composed of a top high-k portion and a lower portion composed of an oxide of a semiconductor material. In one embodiment, the gate dielectric layer 652 is composed of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxy-nitride. In some implementations, a portion of the gate dielectric is a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate.
In one embodiment, the gate electrode layer 650 is composed of a metal layer such as, but not limited to, metal nitrides, metal carbides, metal silicides, metal aluminides, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium, platinum, cobalt, nickel or conductive metal oxides. In a specific embodiment, the gate electrode layer 650 is composed of a non-workfunction-setting fill material formed above a metal workfunction-setting layer. The gate electrode layer 650 may consist of a P-type workfunction metal or an N-type workfunction metal, depending on whether the transistor is to be a PMOS or an NMOS transistor.
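As a small numeric aside, the PMOS and NMOS workfunction windows quoted in the surrounding paragraphs (about 4.9-5.2 eV for a P-type metal and about 3.9-4.2 eV for an N-type metal) can be checked mechanically. The sketch below is illustrative only; the candidate workfunction values are hypothetical and not part of the disclosure.

    # Sketch: classify a gate-metal workfunction against the PMOS/NMOS
    # windows given in the text. Candidate values below are hypothetical.

    PMOS_WINDOW = (4.9, 5.2)  # eV, P-type workfunction metal
    NMOS_WINDOW = (3.9, 4.2)  # eV, N-type workfunction metal

    def classify_workfunction(phi_ev):
        if PMOS_WINDOW[0] <= phi_ev <= PMOS_WINDOW[1]:
            return "PMOS gate window"
        if NMOS_WINDOW[0] <= phi_ev <= NMOS_WINDOW[1]:
            return "NMOS gate window"
        return "outside both windows (mid-gap or needs tuning)"

    for phi in (4.0, 5.1, 4.6):  # hypothetical candidate workfunctions, eV
        print(phi, "->", classify_workfunction(phi))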
In some implementations, the gate electrode layer 650 may consist of a stack of two or more metal layers, where one or more metal layers are workfunction metal layers and at least one metal layer is a conductive fill layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a workfunction that is between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a workfunction that is between about 3.9 eV and about 4.2 eV. In some implementations, the gate electrode may consist of a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another implementation, at least one of the metal layers that form the gate electrode layer 650 may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In further implementations of the disclosure, the gate electrode may consist of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may consist of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.
Spacers associated with the gate electrode stacks may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, a permanent gate structure from adjacent conductive contacts, such as self-aligned contacts. For example, in one embodiment, the spacers are composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride.
Gate contact 614 and overlying gate contact via 616 may be composed of a conductive material. In an embodiment, one or more of the contacts or vias are composed of a metal species. The metal species may be a pure metal, such as tungsten, nickel, or cobalt, or may be an alloy such as a metal-metal alloy or a metal-semiconductor alloy (e.g., a silicide material).
In an embodiment (although not shown), a contact pattern which is essentially perfectly aligned to an existing gate pattern 608 is formed while eliminating the use of a lithographic step with an exceedingly tight registration budget. In an embodiment, the contact pattern is a vertically symmetric contact pattern, or an asymmetric contact pattern. In other embodiments, all contacts are front-side connected and are not asymmetric. In one such embodiment, the self-aligned approach enables the use of intrinsically highly selective wet etching (e.g., versus conventionally implemented dry or plasma etching) to generate contact openings. In an embodiment, a contact pattern is formed by utilizing an existing gate pattern in combination with a contact plug lithography operation.
In one such embodiment, the approach enables elimination of the need for an otherwise critical lithography operation to generate a contact pattern, as used in conventional approaches. In an embodiment, a trench contact grid is not separately patterned, but is rather formed between poly (gate) lines. For example, in one such embodiment, a trench contact grid is formed subsequent to gate grating patterning but prior to gate grating cuts.
In an embodiment, providing structure 600 involves fabrication of the gate stack structure 608 by a replacement gate process. In such a scheme, dummy gate material, such as polysilicon or silicon nitride pillar material, may be removed and replaced with permanent gate electrode material. In one such embodiment, a permanent gate dielectric layer is also formed in this process, as opposed to being carried through from earlier processing. In an embodiment, dummy gates are removed by a dry etch or wet etch process. In one embodiment, dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a dry etch process including use of SF6. In another embodiment, dummy gates are composed of polycrystalline silicon or amorphous silicon and are removed with a wet etch process including use of aqueous NH4OH or tetramethylammonium hydroxide. In one embodiment, dummy gates are composed of silicon nitride and are removed with a wet etch including aqueous phosphoric acid.
Referring again to Figure 6, the arrangement of semiconductor structure or device 600 places the gate contact over isolation regions. Such an arrangement may be viewed as an inefficient use of layout space. In another embodiment, however, a semiconductor device has contact structures that contact portions of a gate electrode formed over an active region, e.g., over a nanowire 604A, and in a same layer as a trench contact via.
It is to be appreciated that not all aspects of the processes described above need be practiced to fall within the spirit and scope of embodiments of the present disclosure. Also, the processes described herein may be used to fabricate one or a plurality of semiconductor devices. The semiconductor devices may be transistors or like devices. For example, in an embodiment, the semiconductor devices are metal-oxide semiconductor (MOS) transistors for logic or memory, or are bipolar transistors. Also, in an embodiment, the semiconductor devices have a three-dimensional architecture, such as a nanowire device, a nanoribbon device, a tri-gate device, an independently accessed double gate device, or a FIN-FET. One or more embodiments may be particularly useful for fabricating semiconductor devices at a sub-10 nanometer (10 nm) technology node.
In an embodiment, as used throughout the present description, interlayer dielectric (ILD) material is composed of or includes a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon doped oxides of silicon, various low-k dielectric materials known in the arts, and combinations thereof.
The interlayer dielectric material may be formed by conventional techniques, such as, for example, chemical vapor deposition (CVD), physical vapor deposition (PVD), or by other deposition methods.
In an embodiment, as is also used throughout the present description, metal lines or interconnect line material (and via material) is composed of one or more metal or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnect lines may include barrier layers (e.g., layers including one or more of Ta, TaN, Ti or TiN), stacks of different metals or alloys, etc. Thus, the interconnect lines may be a single material layer, or may be formed from several layers, including conductive liner layers and fill layers. Any suitable deposition process, such as electroplating, chemical vapor deposition or physical vapor deposition, may be used to form interconnect lines. In an embodiment, the interconnect lines are composed of a conductive material such as, but not limited to, Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au or alloys thereof. The interconnect lines are also sometimes referred to in the art as traces, wires, lines, metal, or simply interconnect.
In an embodiment, as is also used throughout the present description, hardmask materials, capping layers, or plugs are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hardmask, capping or plug materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hardmask layer, capping layer or plug layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. Other hardmask, capping or plug layers known in the arts may be used depending upon the particular implementation. The hardmask, capping or plug layers may be formed by CVD, PVD, or by other deposition methods.
In an embodiment, as is also used throughout the present description, lithographic operations are performed using 193 nm immersion lithography (i193), EUV and/or EBDW lithography, or the like. A positive tone or a negative tone resist may be used. In one embodiment, a lithographic mask is a trilayer mask composed of a topographic masking portion, an anti-reflective coating (ARC) layer, and a photoresist layer. In a particular such embodiment, the topographic masking portion is a carbon hardmask (CHM) layer and the anti-reflective coating layer is a silicon ARC layer.
In another aspect, one or more embodiments are directed to neighboring semiconductor structures or devices separated by self-aligned gate endcap (SAGE) structures. Particular embodiments may be directed to integration of multiple width (multi-Wsi) nanowires and nanoribbons in a SAGE architecture and separated by a SAGE wall. In an embodiment, nanowires/nanoribbons are integrated with multiple Wsi in a SAGE architecture portion of a front-end process flow. Such a process flow may involve integration of nanowires and nanoribbons of different Wsi to provide robust functionality of next generation transistors with low power and high performance.
Associated epitaxial source or drain regions may be embedded (e.g., portions of nanowires removed and then source or drain (S/D) growth is performed).
To provide further context, advantages of a self-aligned gate endcap (SAGE) architecture may include the enabling of higher layout density and, in particular, scaling of diffusion to diffusion spacing. To provide an illustrative comparison, Figure 7 illustrates cross-sectional views taken through nanowires and fins for a non-endcap architecture (left-hand side (a)) versus a self-aligned gate endcap (SAGE) architecture (right-hand side (b)), in accordance with an embodiment of the present disclosure.
Referring to the left-hand side (a) of Figure 7, an integrated circuit structure 700 includes a substrate 702 having fins 704 protruding therefrom by an amount 706 above an isolation structure 708 laterally surrounding lower portions of the fins 704. Corresponding nanowires 705 are over the fins 704. A gate structure may be formed over the integrated circuit structure 700 to fabricate a device. However, breaks in such a gate structure may be accommodated for by increasing the spacing between fin 704/nanowire 705 pairs.
Referring again to part (a) of Figure 7, in an embodiment, during a replacement gate process, the exposed portions of fins 704 are oxidized to form insulator fins beneath the nanowires 705. Oxidation may be only for the exposed portion (i.e., to level 734) but could also extend into the fin (i.e., to level 732) or all the way through the fin (i.e., to level 730), effectively providing an insulator fin on a bulk substrate (as opposed to on an insulator substrate as described above).
By contrast, referring to the right-hand side (b) of Figure 7, an integrated circuit structure 750 includes a substrate 752 having fins 754 protruding therefrom by an amount 756 above an isolation structure 758 laterally surrounding lower portions of the fins 754. Corresponding nanowires 755 are over the fins 754. Isolating SAGE walls 760 (which may include a hardmask thereon, as depicted) are included within the isolation structure 758 and between adjacent fin 754/nanowire 755 pairs. The distance between an isolating SAGE wall 760 and a nearest fin 754/nanowire 755 pair defines the gate endcap spacing 762. A gate structure may be formed over the integrated circuit structure 750, between isolating SAGE walls, to fabricate a device. Breaks in such a gate structure are imposed by the isolating SAGE walls. Since the isolating SAGE walls 760 are self-aligned, restrictions from conventional approaches can be minimized to enable more aggressive diffusion to diffusion spacing. Furthermore, since gate structures include breaks at all locations, individual gate structure portions may be later connected by local interconnects formed over the isolating SAGE walls 760. In an embodiment, as depicted, the SAGE walls 760 each include a lower dielectric portion and a dielectric cap on the lower dielectric portion.
Referring again to part (b) of Figure 7, in an embodiment, during a replacement gate process, the exposed portions of fins 754 are oxidized to form insulator fins beneath the nanowires 755.
Oxidation may be only for the exposed portion (i.e., to level 784) but could also extend into the fin (i.e., to level 782) or all the way through the fin (i.e., to level 780), effectively providing an insulator fin on a bulk substrate (as opposed to on an insulator substrate as described above).
A self-aligned gate endcap (SAGE) processing scheme involves the formation of gate/trench contact endcaps self-aligned to fins without requiring an extra length to account for mask mis-registration. Thus, embodiments may be implemented to enable shrinking of transistor layout area. Embodiments described herein may involve the fabrication of gate endcap isolation structures, which may also be referred to as gate walls, isolation gate walls or self-aligned gate endcap (SAGE) walls.
In an embodiment, as described throughout, self-aligned gate endcap (SAGE) isolation structures may be composed of a material or materials suitable to ultimately electrically isolate, or contribute to the isolation of, portions of permanent gate structures from one another. Exemplary materials or material combinations include a single material structure such as silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride. Other exemplary materials or material combinations include a multi-layer stack having a lower portion of silicon dioxide, silicon oxy-nitride, silicon nitride, or carbon-doped silicon nitride and an upper portion of a higher dielectric constant material such as hafnium oxide.
To highlight an exemplary integrated circuit structure having two vertically arranged nanowires over an insulator fin, Figure 8A illustrates a three-dimensional cross-sectional view of a nanowire-based integrated circuit structure, in accordance with an embodiment of the present disclosure. Figure 8B illustrates a cross-sectional source or drain view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the a-a' axis. Figure 8C illustrates a cross-sectional channel view of the nanowire-based integrated circuit structure of Figure 8A, as taken along the b-b' axis.
Referring to Figure 8A, an integrated circuit structure 800 includes one or more vertically stacked nanowires (804 set) above a substrate 802. In an embodiment, an insulator layer 802B and a bulk semiconductor layer 802A are included in substrate 802, as is depicted. Embodiments herein are targeted at both single wire devices and multiple wire devices. As an example, a two-nanowire-based device having nanowires 804B and 804C is shown for illustrative purposes. For convenience of description, nanowire 804B is used as an example where description is focused on one of the nanowires. It is to be appreciated that where attributes of one nanowire are described, embodiments based on a plurality of nanowires may have the same or essentially the same attributes for each of the nanowires. In either case, the one nanowire or the plurality of nanowires is over an insulator fin 899 (which may be an oxidized nanowire 804A).
Each of the nanowires 804B and 804C includes a channel region 806 in the nanowire. The channel region 806 has a length (L). The channel region also has a perimeter orthogonal to the length (L). Referring to both Figures 8A and 8C, a gate electrode stack 808 surrounds the entire perimeter of each of the channel regions 806. The gate electrode stack 808 includes a gate electrode along with a gate dielectric layer between the channel region 806 and the gate electrode (not shown).
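Since the gate electrode stack 808 surrounds the entire perimeter of each channel region 806, the effective drive width of such a device scales as the per-wire perimeter multiplied by the number of wires. The following sketch illustrates that relationship; the wire count matches the two-wire example above (804B and 804C), but the channel dimensions are hypothetical, not values from the disclosure.

    # Sketch: effective gate width of a gate-all-around stack, taking the
    # conducting width as wire perimeter x number of wires. Dimensions are
    # hypothetical; the disclosure only requires the gate stack to surround
    # the entire channel perimeter.

    def effective_width_nm(n_wires, wc_nm, hc_nm):
        perimeter = 2 * (wc_nm + hc_nm)  # rectangular cross-section
        return n_wires * perimeter

    # e.g., two wires (804B, 804C) with square-like 10 nm x 10 nm channels:
    print(effective_width_nm(2, 10.0, 10.0))  # 80.0 nm of gated perimeter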
In an embodiment, the channel region is discrete in that it is completely surrounded by the gate electrode stack 808 without any intervening material such as underlying substrate material or overlying channel fabrication materials. Accordingly, in embodiments having a plurality of nanowires 804, the channel regions 806 of the nanowires are also discrete relative to one another.
Referring to both Figures 8A and 8B, integrated circuit structure 800 includes a pair of non-discrete source or drain regions 810/812. The pair of non-discrete source or drain regions 810/812 is on either side of the channel regions 806 of the plurality of vertically stacked nanowires 804. Furthermore, the pair of non-discrete source or drain regions 810/812 is adjoining for the channel regions 806 of the plurality of vertically stacked nanowires 804. In one such embodiment, not depicted, the pair of non-discrete source or drain regions 810/812 is directly vertically adjoining for the channel regions 806 in that epitaxial growth is on and between nanowire portions extending beyond the channel regions 806, where nanowire ends are shown within the source or drain structures. In another embodiment, as depicted in Figure 8A, the pair of non-discrete source or drain regions 810/812 is indirectly vertically adjoining for the channel regions 806 in that they are formed at the ends of the nanowires and not between the nanowires.
In an embodiment, as depicted, the source or drain regions 810/812 are non-discrete in that there are not individual and discrete source or drain regions for each channel region 806 of a nanowire 804. Accordingly, in embodiments having a plurality of nanowires 804, the source or drain regions 810/812 of the nanowires are global or unified source or drain regions as opposed to discrete for each nanowire. That is, the non-discrete source or drain regions 810/812 are global in the sense that a single unified feature is used as a source or drain region for a plurality (in this case, two) of nanowires 804 and, more particularly, for more than one discrete channel region 806. In one embodiment, from a cross-sectional perspective orthogonal to the length of the discrete channel regions 806, each of the pair of non-discrete source or drain regions 810/812 is approximately rectangular in shape with a top vertex portion, as depicted in Figure 8B. In other embodiments, however, the source or drain regions 810/812 of the nanowires are relatively larger yet discrete non-vertically merged epitaxial structures such as nubs.
In accordance with an embodiment of the present disclosure, and as depicted in Figures 8A and 8B, integrated circuit structure 800 further includes a pair of contacts 814, each contact 814 on one of the pair of non-discrete source or drain regions 810/812. In one such embodiment, in a vertical sense, each contact 814 completely surrounds the respective non-discrete source or drain region 810/812. In another aspect, the entire perimeter of the non-discrete source or drain regions 810/812 may not be accessible for contact with contacts 814, and the contact 814 thus only partially surrounds the non-discrete source or drain regions 810/812, as depicted in Figure 8B. In a contrasting embodiment, not depicted, the entire perimeter of the non-discrete source or drain regions 810/812, as taken along the a-a' axis, is surrounded by the contacts 814.
Referring again to Figure 8A, in an embodiment, integrated circuit structure 800 further includes a pair of spacers 816.
As is depicted, outer portions of the pair of spacers 816 may overlap portions of the non-discrete source or drain regions 810/812, providing for "embedded" portions of the non-discrete source or drain regions 810/812 beneath the pair of spacers 816. As is also depicted, the embedded portions of the non-discrete source or drain regions 810/812 may not extend beneath the entirety of the pair of spacers 816.
In an embodiment, the nanowires 804B and 804C may be sized as wires or ribbons, as described below, and may have squared-off or rounder corners. In an embodiment, the nanowires 804B and 804C are composed of a material such as, but not limited to, silicon, germanium, or a combination thereof. In one such embodiment, the nanowires are single-crystalline. For example, for a silicon nanowire, a single-crystalline nanowire may be based from a (100) global orientation, e.g., with a <100> plane in the z-direction. As described below, other orientations may also be considered. In an embodiment, the dimensions of the nanowires, from a cross-sectional perspective, are on the nano-scale. For example, in a specific embodiment, the smallest dimension of the nanowire is less than approximately 20 nanometers. In an embodiment, the nanowires are composed of a strained material, particularly in the channel regions 806.
Referring to Figure 8C, in an embodiment, each of the channel regions 806 has a width (Wc) and a height (Hc), the width (Wc) being approximately the same as the height (Hc). That is, in both cases, the channel regions 806 are square-like or, if corner-rounded, circle-like in cross-section profile. In another aspect, the width and height of the channel region need not be the same, as is the case for nanoribbons as described throughout.
In an embodiment, as described throughout, an integrated circuit structure effectively includes an oxidized non-planar device such as, but not limited to, an oxidized finFET or an oxidized tri-gate device, with corresponding one or more overlying nanowire structures. In one embodiment, a gate structure surrounds each of the one or more discrete nanowire channel portions and, possibly, a portion of the oxidized finFET or the oxidized tri-gate device.
In an embodiment, as described throughout, an underlying semiconductor substrate (which can ultimately be removed and replaced with an insulator substrate or an insulator on semiconductor substrate, or which is already beneath an overlying insulator layer) may be composed of a semiconductor material that can withstand a manufacturing process and in which charge can migrate. In an embodiment, the substrate is a bulk substrate composed of a crystalline silicon, silicon/germanium or germanium layer doped with a charge carrier, such as but not limited to phosphorus, arsenic, boron, gallium or a combination thereof, to form an active region. In one embodiment, the concentration of silicon atoms in a bulk substrate is greater than 97%. In another embodiment, a bulk substrate is composed of an epitaxial layer grown atop a distinct crystalline substrate, e.g., a silicon epitaxial layer grown atop a boron-doped bulk silicon mono-crystalline substrate. A bulk substrate may alternatively be composed of a group III-V material. In an embodiment, a bulk substrate is composed of a group III-V material such as, but not limited to, gallium nitride, gallium phosphide, gallium arsenide, indium phosphide, indium antimonide, indium gallium arsenide, aluminum gallium arsenide, indium gallium phosphide, or a combination thereof.
In one embodiment, a bulk substrate is composed of a group III-V material and the charge-carrier dopant impurity atoms are ones such as, but not limited to, carbon, silicon, germanium, oxygen, sulfur, selenium or tellurium.
Embodiments disclosed herein may be used to manufacture a wide variety of different types of integrated circuits and/or microelectronic devices. Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, micro-controllers, and the like. In other embodiments, semiconductor memory may be manufactured. Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the arts, for example, in computer systems (e.g., desktop, laptop, server), cellular phones, personal electronics, etc. The integrated circuits may be coupled with a bus and other components in the systems. For example, a processor may be coupled by one or more buses to a memory, a chipset, etc. Each of the processor, the memory, and the chipset may potentially be manufactured using the approaches disclosed herein.
Figure 9 illustrates a computing device 900 in accordance with one implementation of an embodiment of the present disclosure. The computing device 900 houses a board 902. The board 902 may include a number of components, including but not limited to a processor 904 and at least one communication chip 906. The processor 904 is physically and electrically coupled to the board 902. In some implementations, the at least one communication chip 906 is also physically and electrically coupled to the board 902. In further implementations, the communication chip 906 is part of the processor 904.
Depending on its applications, computing device 900 may include other components that may or may not be physically and electrically coupled to the board 902. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth).
The communication chip 906 enables wireless communications for the transfer of data to and from the computing device 900. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 906 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 900 may include a plurality of communication chips 906.
For instance, a first communication chip 906 may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 906 may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
The processor 904 of the computing device 900 includes an integrated circuit die packaged within the processor 904. The integrated circuit die of the processor 904 may include one or more structures, such as gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, built in accordance with implementations of embodiments of the present disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.
The communication chip 906 also includes an integrated circuit die packaged within the communication chip 906. The integrated circuit die of the communication chip 906 may include one or more structures, such as gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, built in accordance with implementations of embodiments of the present disclosure.
In further implementations, another component housed within the computing device 900 may contain an integrated circuit die that includes one or more structures, such as gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, built in accordance with implementations of embodiments of the present disclosure.
In various implementations, the computing device 900 may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 900 may be any other electronic device that processes data.
Figure 10 illustrates an interposer 1000 that includes one or more embodiments of the present disclosure. The interposer 1000 is an intervening substrate used to bridge a first substrate 1002 to a second substrate 1004. The first substrate 1002 may be, for instance, an integrated circuit die. The second substrate 1004 may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of an interposer 1000 is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, an interposer 1000 may couple an integrated circuit die to a ball grid array (BGA) 1006 that can subsequently be coupled to the second substrate 1004. In some embodiments, the first and second substrates 1002/1004 are attached to opposing sides of the interposer 1000. In other embodiments, the first and second substrates 1002/1004 are attached to the same side of the interposer 1000. In further embodiments, three or more substrates are interconnected by way of the interposer 1000.
The interposer 1000 may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide.
In further implementations, the interposer 1000 may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.
The interposer 1000 may include metal interconnects 1008 and vias 1010, including but not limited to through-silicon vias (TSVs) 1012. The interposer 1000 may further include embedded devices 1014, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio-frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the interposer 1000. In accordance with embodiments of the disclosure, apparatuses or processes disclosed herein may be used in the fabrication of interposer 1000 or in the fabrication of components included in the interposer 1000.
Thus, embodiments of the present disclosure include gate-all-around integrated circuit structures having an insulator fin on an insulator substrate, and methods of fabricating gate-all-around integrated circuit structures having an insulator fin on an insulator substrate.
The above description of illustrated implementations of embodiments of the disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. While specific implementations of, and examples for, the disclosure are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the disclosure, as those skilled in the relevant art will recognize.
These modifications may be made to the disclosure in light of the above detailed description. The terms used in the following claims should not be construed to limit the disclosure to the specific implementations disclosed in the specification and the claims. Rather, the scope of the disclosure is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
Example embodiment 1: An integrated circuit structure includes an insulator fin on an insulator substrate. A vertical arrangement of horizontal semiconductor nanowires is over the insulator fin. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires, and the gate stack is overlying the insulator fin.
A pair of epitaxial source or drain structures is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the insulator fin.
Example embodiment 2: The integrated circuit structure of example embodiment 1, wherein the insulator fin has a vertical thickness approximately the same as a vertical thickness of each of the nanowires of the vertical arrangement of horizontal semiconductor nanowires.
Example embodiment 3: The integrated circuit structure of example embodiment 1, wherein the insulator fin has a vertical thickness greater than a vertical thickness of each of the nanowires of the vertical arrangement of horizontal semiconductor nanowires.
Example embodiment 4: The integrated circuit structure of example embodiment 1, wherein the insulator fin has a vertical thickness less than a vertical thickness of each of the nanowires of the vertical arrangement of horizontal semiconductor nanowires.
Example embodiment 5: The integrated circuit structure of example embodiment 1, 2, 3 or 4, wherein the insulator fin includes silicon oxide, and the vertical arrangement of horizontal semiconductor nanowires includes silicon.
Example embodiment 6: The integrated circuit structure of example embodiment 1, 2, 3 or 4, wherein the vertical arrangement of horizontal semiconductor nanowires includes silicon germanium or a group III-V material.
Example embodiment 7: The integrated circuit structure of example embodiment 1, 2, 3, 4, 5 or 6, wherein the insulator substrate includes a layer of silicon oxide, and the insulator fin is on the layer of silicon oxide.
Example embodiment 8: The integrated circuit structure of example embodiment 1, 2, 3, 4, 5 or 6, wherein the insulator substrate includes a layer of silicon nitride, and the insulator fin is on the layer of silicon nitride.
Example embodiment 9: The integrated circuit structure of example embodiment 1, 2, 3, 4, 5, 6, 7 or 8, wherein a bottom of the pair of epitaxial source or drain structures is on the insulator substrate.
Example embodiment 10: The integrated circuit structure of example embodiment 9, wherein the bottom of the pair of epitaxial source or drain structures is co-planar with a bottom of the insulator fin.
Example embodiment 11: The integrated circuit structure of example embodiment 1, 2, 3, 4, 5, 6, 7, 8, 9 or 10, wherein the insulator substrate includes a remnant catalyst material beneath the insulator fin.
Example embodiment 12: The integrated circuit structure of example embodiment 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or 11, wherein the pair of epitaxial source or drain structures is a pair of non-discrete epitaxial source or drain structures.
Example embodiment 13: The integrated circuit structure of example embodiment 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 or 12, wherein the gate stack includes a high-k gate dielectric layer and a metal gate electrode.
Example embodiment 14: A method of fabricating an integrated circuit structure includes forming a vertical arrangement of horizontal semiconductor nanowires above a semiconductor fin above an insulator substrate. A dummy gate stack is then formed, the dummy gate stack surrounding a channel region of the vertical arrangement of horizontal semiconductor nanowires, and the dummy gate stack overlying the semiconductor fin. A pair of epitaxial source or drain structures is formed at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the semiconductor fin.
The dummy gate stack is then removed. The semiconductor fin is then oxidized to form an insulator fin. A permanent gate stack is then formed, the permanent gate stack surrounding the channel region of the vertical arrangement of horizontal semiconductor nanowires, and the permanent gate stack overlying the insulator fin.
Example embodiment 15: The method of example embodiment 14, wherein forming the pair of epitaxial source or drain structures includes forming a non-discrete pair of epitaxial source or drain structures.
Example embodiment 16: A computing device includes a board, and a component coupled to the board. The component includes an integrated circuit structure including an insulator fin on an insulator substrate. A vertical arrangement of horizontal semiconductor nanowires is over the insulator fin. A gate stack surrounds a channel region of the vertical arrangement of horizontal semiconductor nanowires, and the gate stack is overlying the insulator fin. A pair of epitaxial source or drain structures is at first and second ends of the vertical arrangement of horizontal semiconductor nanowires and at first and second ends of the insulator fin.
Example embodiment 17: The computing device of example embodiment 16, further including a memory coupled to the board.
Example embodiment 18: The computing device of example embodiment 16 or 17, further including a communication chip coupled to the board.
Example embodiment 19: The computing device of example embodiment 16, 17 or 18, wherein the component is a packaged integrated circuit die.
Example embodiment 20: The computing device of example embodiment 16, 17, 18 or 19, wherein the component is selected from the group consisting of a processor, a communications chip, and a digital signal processor.
A method of manufacturing a semiconductor device includes forming a silicon germanium layer and forming an N-channel transistor and a P-channel transistor over the silicon germanium layer. A beta ratio of the N-channel transistor to the P-channel transistor is about 1.8 to about 2.2. A semiconductor device is also disclosed.
What is claimed is:
1. A method of forming a semiconductor device, comprising the steps of:
forming a silicon germanium layer;
forming an N-channel transistor over the silicon germanium layer; and
forming a P-channel transistor over the silicon germanium layer, wherein
a beta ratio of the N-channel transistor to the P-channel transistor is about 1.8 to about 2.2 and
the P-channel transistor includes source/drain regions implanted with a P-type dopant, and the implant energies and dosages for the P-type dopant range respectively from about 1 keV to about 10 keV and from about 1×10¹⁴ to about 5×10¹⁵ dopants/cm².
2. The method according to claim 1, wherein the N-channel transistor includes source/drain regions implanted with an N-type dopant, and the implant energies and dosages for the N-type dopant range respectively from about 0.5 keV to about 60 keV and from about 1×10¹⁴ to about 5×10¹⁵ dopants/cm².
3. The method according to claim 2, wherein the depth of the source/drain regions of the N-channel transistor is from about 0.1 to about 1.0 µm, and the depth of the source/drain regions of the P-channel transistor is from about 0.1 to about 1.0 µm.
4. A semiconductor device, comprising:
a silicon germanium layer;
an N-channel transistor over the silicon germanium layer; and
a P-channel transistor over the silicon germanium layer, wherein
a beta ratio of the N-channel transistor to the P-channel transistor is about 1.8 to about 2.2,
wherein the P-channel transistor includes source/drain regions implanted with a P-type dopant, and the implant energies and dosages for the P-type dopant range respectively from about 1 keV to about 10 keV and from about 1×10¹⁴ to about 5×10¹⁵ dopants/cm².
5. The semiconductor device according to claim 4, wherein the depth of source/drain regions of the N-channel transistor is from about 0.1 to about 1.0 µm, and the depth of source/drain regions of the P-channel transistor is from about 0.1 to about 1.0 µm.
6. A method of forming a semiconductor device, comprising the steps of:
forming a silicon germanium layer;
forming N-channel devices over the silicon germanium layer;
forming P-channel devices over the silicon germanium layer; and
adjusting VT of the P-channel devices,
wherein a beta ratio of the N-channel devices to the adjusted P-channel devices is about 1.8 to about 2.2 and
the VT of the P-channel devices is adjusted by implanting source/drain regions of the P-channel devices with a P-type dopant, and the implant energies and dosages for the P-type dopant range respectively from about 1 keV to about 10 keV and from about 1×10¹⁴ to about 5×10¹⁵ dopants/cm².
7. The method according to claim 6, wherein the N-channel devices include source/drain regions implanted with an N-type dopant, and the implant energies and dosages for the N-type dopant range respectively from about 0.5 keV to about 60 keV and from about 1×10¹⁴ to about 5×10¹⁵ dopants/cm².
FIELD OF THE INVENTION
The present invention relates to the manufacturing of semiconductor devices, and more particularly, to semiconductor devices based upon a silicon germanium (SiGe) substrate with an optimized drive current ratio.
BACKGROUND OF THE INVENTION
Over the last few decades, the semiconductor industry has undergone a revolution through the use of semiconductor technology to fabricate small, highly integrated electronic devices, and the most common semiconductor technology presently used is silicon-based. A large variety of semiconductor devices have been manufactured having various applications in numerous disciplines. One silicon-based semiconductor device is a metal-oxide-semiconductor (MOS) transistor. The MOS transistor is one of the basic building blocks of most modern electronic circuits. Importantly, these electronic circuits realize improved performance and lower costs as the performance of the MOS transistor is increased and as manufacturing costs are reduced.
A typical MOS device includes a bulk semiconductor substrate on which a gate electrode is disposed. The gate electrode, which acts as a conductor, receives an input signal to control operation of the device. Source and drain regions are typically formed in regions of the substrate adjacent the gate electrode by doping the regions with a dopant of a desired conductivity. The conductivity of the doped region depends on the type of impurity used to dope the region. The typical MOS device is symmetrical, in that the source and drain are interchangeable. Whether a region acts as a source or drain typically depends on the respective applied voltages and the type of device being made. The collective term source/drain region is used herein to generally describe an active region used for the formation of either a source or drain.
As an alternative to forming a MOS device directly on a bulk semiconductor substrate, the MOS device can be formed on a strained-silicon layer. The process for forming strained silicon involves depositing a layer of silicon germanium (SiGe) on the bulk semiconductor substrate. A thin layer of silicon is subsequently deposited on the SiGe layer. The distance between atoms in a SiGe crystal lattice is greater than the distance between atoms in an ordinary silicon crystal lattice. There is, however, a natural tendency of atoms inside different types of crystals to align with one another when one crystal is formed on another crystal. As such, when a crystal lattice of silicon is formed on top of a layer of SiGe, the atoms in the silicon crystal lattice tend to stretch, or "strain," to align with the atoms in the SiGe lattice. A resulting advantage of such a feature is that the strained silicon experiences less resistance to electron flow and produces gains of up to 80% in speed as compared to ordinary crystalline silicon.
MOS devices using a strained-silicon structure typically fall into one of two groups depending on the type of dopants used to form the source, drain and channel regions. The two groups are often referred to as n-channel and p-channel devices. The type of channel is identified based on the conductivity type of the channel which is developed under the transverse electric field. In an N-channel MOS (NMOS) device, for example, the conductivity of the channel under a transverse electric field is of the conductivity type associated with n-type impurities (e.g., arsenic or phosphorus).
Conversely, the channel of a P-channel MOS (PMOS) device under the transverse electric field is associated with p-type impurities (e.g., boron).
One consideration when manufacturing NMOS and PMOS strained-silicon transistors is the beta ratio (i.e., the N/P ratio, or drive current ratio) of the transistors. A beta ratio of approximately 3.0 is commonly used to maintain cell stability. However, it has been found that in MOS devices based on SiGe, the drive current of the N-channel transistor increases with respect to the drive current of the P-channel transistor, such that the N-channel transistor runs hotter than the P-channel transistor. A desired drive current ratio between N-channel and P-channel transistors can therefore not be maintained. Accordingly, a need exists for an improved method of forming devices on a strained-silicon structure that allows for optimum performance by controlling the beta ratio, or drive current ratio, of the N-channel transistor with respect to the P-channel transistor.
SUMMARY OF THE INVENTION
This and other needs are met by embodiments of the present invention, which provide a method of manufacturing a semiconductor device that improves performance by controlling the beta ratio of an N-channel transistor with respect to a P-channel transistor. The method includes forming a silicon germanium layer, and forming the N-channel transistor and the P-channel transistor over the silicon germanium layer. A beta ratio of the N-channel transistor to the P-channel transistor is then controlled to be from about 1.8 to about 2.2.
The beta ratio can be controlled by selectively implanting source/drain regions of the P-channel transistor with a P-type dopant, with the implant energies and dosages for the P-type dopant ranging respectively from about 1 keV to about 10 keV and from about 1×10^14 to about 5×10^15 dopants/cm^2. The N-channel transistor includes source/drain regions implanted with an N-type dopant, and the implant energies and dosages for the N-type dopant range respectively from about 0.5 keV to about 60 keV and from about 1×10^14 to about 5×10^15 dopants/cm^2. In another aspect of the invention, the depth of the source/drain regions of the N-channel transistor is from about 0.1 to about 1.0 µm, and the depth of the source/drain regions of the P-channel transistor is from about 0.1 to about 1.0 µm. In still another aspect of the invention, a beta ratio of another N-channel transistor to another P-channel transistor is greater than 2.2 where the beta ratio is not as critical.
In another embodiment of the present invention, a semiconductor device is provided. The semiconductor device includes a silicon germanium layer, and an N-channel transistor and a P-channel transistor formed over the silicon germanium layer. A beta ratio of the N-channel transistor to the P-channel transistor is controlled to be from about 1.8 to about 2.2.
Additional advantages of the present invention will become readily apparent to those skilled in this art from the following detailed description, wherein only the preferred embodiment of the present invention is shown and described, simply by way of illustration of the best mode contemplated for carrying out the present invention. As will be realized, the present invention is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the invention.
Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout, and wherein FIGS. 1-3 schematically illustrate sequential phases of a fabrication method according to an embodiment of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention improves performance by controlling the beta ratio, or drive current ratio, of P-channel and N-channel transistors formed on a silicon germanium (SiGe) structure. This is achieved, in part, by adjusting the doping density of source/drain regions in the P-channel transistor so that the beta ratio of the N-channel and P-channel transistors is from about 1.8 to about 2.2. By controlling the beta ratio of the N-channel transistor with respect to the P-channel transistor, optimum performance can be maintained.
The present invention can be implemented with a wide variety of SiGe-type substrates, such as a Si(x)Ge(1-x) substrate wherein the Ge content can range from 0 atomic percent to 100 atomic percent. Additional substrates can include SiGe substrates over which a layer of strained silicon is placed. An example of the present invention is illustrated in FIGS. 1-3. FIG. 1 illustrates a conventional strained-silicon structure. The strained-silicon structure includes a silicon germanium layer 12 formed over a substrate 8, and a strained-silicon semiconductor layer 14 formed on the silicon germanium layer 12. The invention, however, is not limited as to the manner in which the strained-silicon structure is formed.
An exemplary method of forming a strained-silicon structure is as follows. The substrate 8 can be a silicon wafer having a thickness of approximately 100 µm. The silicon germanium layer 12 is formed over the substrate 8 using a chemical vapor deposition (CVD) process, such as ultra-high vacuum chemical vapor deposition (UHVCVD). The silicon germanium layer 12 can comprise a sublayer 12A, in which the concentration of Ge in the sublayer 12A is graded from about 0% Ge at the silicon germanium layer 12/substrate 8 interface to a maximum concentration of about 30% Ge. In certain aspects, the maximum concentration of Ge is about 20% Ge. Also, the thickness of the graded-concentration sublayer 12A can be about 2 µm.
After the maximum desired concentration of Ge is achieved in the first sublayer 12A, a second silicon germanium sublayer 12B having a substantially constant Ge concentration can be formed on the first sublayer 12A. The second silicon germanium sublayer 12B, although not limited in this manner, has a thickness between about 1 µm and about 2 µm. The resulting silicon germanium layer 12, therefore, can have an overall thickness of between about 3 µm and about 4 µm. The concentration of Ge in the constant-concentration sublayer 12B is substantially equal to the maximum Ge concentration in the graded-concentration sublayer 12A.
The strained silicon layer 14 is an epitaxial layer formed, for example, by CVD. The atoms in the silicon layer 14 stretch apart from each other in order to align themselves with the underlying lattice structure of the silicon germanium layer 12. Electron flow in this resulting strained silicon layer 14 is advantageously much faster than in ordinary crystalline silicon.
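For readers keeping track of the exemplary film stack, the layers described above can be summarized programmatically. The following is a minimal Python sketch, not part of the specification: the mid-range thicknesses are taken from the ranges quoted above (with the strained-silicon thickness drawn from the 20-40 nm range noted in the next paragraph), and the dictionary layout is purely illustrative.

# Approximate strained-silicon stack of FIG. 1, listed bottom to top.
# Mid-range values from the text; illustrative only, not part of the claims.
stack = [
    {"layer": "substrate 8 (Si wafer)",           "thickness_um": 100.0, "ge_percent": "0"},
    {"layer": "sublayer 12A (graded SiGe)",       "thickness_um": 2.0,   "ge_percent": "0 -> ~30 (graded)"},
    {"layer": "sublayer 12B (constant SiGe)",     "thickness_um": 1.5,   "ge_percent": "~30 (constant)"},
    {"layer": "strained Si layer 14 (epitaxial)", "thickness_um": 0.03,  "ge_percent": "0"},
]

sige_total_um = sum(l["thickness_um"] for l in stack if "SiGe" in l["layer"])
print(f"SiGe layer 12 total thickness: ~{sige_total_um} um")  # ~3.5 um, within the 3-4 um range
for layer in stack:
    print(layer)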
Although not limited in this manner, the thickness of the strained silicon layer 14 is between about 20 nm and about 40 nm.
As illustrated in FIGS. 2 and 3, an N-channel transistor 10N and a P-channel transistor 10P are formed on the strained-silicon structure. The invention is not limited as to the type of N-channel and P-channel transistors 10N, 10P being formed or the manner of forming these transistors, as many types of transistors and many methods of forming these transistors are well known to those having skill in the art. As illustrated in FIG. 2, the N-channel and P-channel transistors 10N, 10P can include, for example, a gate dielectric 16 and a gate electrode 24 over the gate dielectric 16. Sidewall spacers 36, 38 can be formed on sidewalls 26, 28 of the gate electrode 24, and source/drain extensions 30, 32 can be formed in the silicon layer 14 underneath the sidewall spacers 36, 38.
In one aspect of the invention, the beta ratio of the N-channel and P-channel transistors 10N, 10P is modified. As is known to those skilled in the art, the beta (β) of a transistor designates the current gain of the transistor and is the ratio of the transistor's output current (drive current) to its input current (β = Ioutput/Iinput). The beta ratio (βN/βP) is defined as the ratio of the beta βN of the N-channel transistor 10N to the beta βP of the P-channel transistor 10P. In conventional devices, a beta ratio (βN/βP) of approximately 3.0 is commonly used to maintain cell stability. However, it has been discovered that the N-channel transistors 10N run hotter than the P-channel transistors 10P in strained-silicon structures, such that the drive current of the N-channel transistors 10N increases with respect to the drive current of the P-channel transistors 10P. Therefore, for those particular circuits where the beta ratio (βN/βP) is critical to optimize gate leakage versus performance, it has been determined that a revised beta ratio (βN/βP) of about 1.8 to about 2.2 is desired. However, for those circuits where the beta ratio (βN/βP) is not as critical, a beta ratio (βN/βP) of greater than 2.2 can be used.
In modifying the beta ratio (βN/βP) of the N-channel and P-channel transistors 10N, 10P to about 1.8 to about 2.2, the threshold voltage VT of the P-channel transistors 10P is adjusted by changing the doping density of source/drain regions 40, 42 of the P-channel transistors 10P. As illustrated in FIG. 3, the source/drain regions 40, 42 of the P-channel transistors 10P are formed by implantation, as represented by arrows 34, using a p-type dopant, such as boron. The implant energies and dosages for the p-type dopant range respectively from about 1 keV to about 10 keV and from about 1×10^14 to about 5×10^15 dopants/cm^2. The source/drain regions 40, 42 of the N-channel transistor 10N are formed by doping, as represented by arrows 44, using an n-type dopant, such as arsenic or phosphorus. The implant energies and dosages for the n-type dopant range respectively from about 0.5 keV to about 60 keV and from about 1×10^14 to about 5×10^15 dopants/cm^2.
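As a rough numerical illustration of the beta-ratio bookkeeping described above, consider the following Python sketch. The drive-current values are hypothetical (the specification gives only the target window of about 1.8 to about 2.2, treating the beta ratio as the N-to-P drive current ratio); only the window check mirrors the text.

# Hypothetical drive currents, in mA, for illustration only; the patent gives
# no specific current values, only the target window of about 1.8 to about 2.2.
i_drive_n = 1.05  # N-channel transistor 10N drive current (assumed)
i_drive_p = 0.50  # P-channel transistor 10P drive current after VT adjustment (assumed)

beta_ratio = i_drive_n / i_drive_p  # treated as betaN/betaP, per the text
in_window = 1.8 <= beta_ratio <= 2.2

print(f"betaN/betaP = {beta_ratio:.2f}; within target window: {in_window}")
# betaN/betaP = 2.10; within target window: True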
With these implant conditions, the depth of the source/drain regions 40, 42 of the N-channel transistor 10N is from about 0.1 to about 1.0 µm, and the depth of the source/drain regions 40, 42 of the P-channel transistor 10P is from about 0.1 to about 1.0 µm.
By controlling the beta ratio, or drive current ratio, of P-channel and N-channel transistors formed on a silicon germanium layer to be from about 1.8 to about 2.2, the effects of forming an N-channel transistor over a silicon germanium layer can be accounted for. Thus, optimum performance of the transistor can be maintained.
The present invention can be practiced by employing conventional materials, methodology and equipment. Accordingly, the details of such materials, equipment and methodology are not set forth herein in detail. In the previous descriptions, numerous specific details are set forth, such as specific materials, structures, chemicals, processes, etc., in order to provide a thorough understanding of the present invention. However, it should be recognized that the present invention can be practiced without resorting to the details specifically set forth. In other instances, well-known processing structures have not been described in detail, in order not to unnecessarily obscure the present invention.
Only the preferred embodiment of the present invention and but a few examples of its versatility are shown and described in the present disclosure. It is to be understood that the present invention is capable of use in various other combinations and environments and is capable of changes or modifications within the scope of the inventive concepts as expressed herein.
A thermoelectric module having areas of highly thermally conductive material integrated into a substrate layer. For one embodiment, copper pads are integrated into the external surface of the substrate of the hot side of the thermoelectric module. The copper pads allow direct connection of a heat removal device to the thermoelectric module, thereby reducing thermal resistance. Thermal vias may be formed through the substrate to further reduce thermal resistance.
CLAIMS
What is claimed is:
1. A thermoelectric module comprising: an upper substrate having an internal surface and an external surface; a lower substrate having an internal surface and an external surface; and a plurality of n-diode and p-diode pairs disposed between the internal surface of the lower substrate and the internal surface of the upper substrate, electrically connected to effect cooling of the external surface of the lower substrate, wherein the external surface of the upper substrate has areas of highly thermally conductive material integrated therein.
2. The thermoelectric module of claim 1 wherein the highly thermally conductive material is a highly thermally conductive metal.
3. The thermoelectric module of claim 2 wherein the highly thermally conductive metal is a metal selected from the group consisting of copper, aluminum, silver, copper-indium, silver-zinc, and multi-layers thereof.
4. The thermoelectric module of claim 1 further comprising: a heat removal device bonded to at least one of the areas of highly thermally conductive material integrated into the external surface of the upper substrate.
5. The thermoelectric module of claim 4 wherein the heat removal device comprises one or more plate fins.
6. The thermoelectric module of claim 4 wherein the heat removal device comprises a plurality of pin fins.
7. The thermoelectric module of claim 4 wherein the heat removal device is a coolant-based device.
8. The thermoelectric module of claim 1 further comprising: one or more thermal vias formed through the upper substrate.
9. The thermoelectric module of claim 1 further comprising: one or more thermal vias extending from the external surface of the upper substrate to the internal surface of the upper substrate.
10. The thermoelectric module of claim 1 wherein the external surface of the upper substrate has areas of highly thermally conductive material integrated therein.
11. A method comprising: providing a microelectronic device; attaching a cold side of a thermoelectric module to the microelectronic device to effect cooling of the microelectronic device, the thermoelectric module having areas of highly thermally conductive material integrated into an external surface of a hot side; and bonding a heat removal device directly to at least one of the areas of highly thermally conductive material.
12. The method of claim 11 wherein the highly thermally conductive material is a highly thermally conductive metal.
13. The method of claim 12 wherein the highly thermally conductive metal is a metal selected from the group consisting of copper, aluminum, silver, copper-indium, silver-zinc, and multi-layers thereof.
14. The method of claim 13 wherein the heat removal device comprises one or more plate fins.
15. The method of claim 13 wherein the heat removal device is a plurality of pin fins.
16. The method of claim 13 wherein the heat removal device is a coolant-based device.
17. The method of claim 11 wherein the thermoelectric module has one or more thermal vias formed through a substrate comprising the hot side of the thermoelectric module.
18. The method of claim 11 wherein one or more thermal vias extend from the external surface of a substrate comprising the hot side of the thermoelectric module to an internal surface of the substrate comprising the hot side of the thermoelectric module.
19.
The method of claim 11 wherein the microelectronic device has a metal casing and is bonded directly to areas of highly thermally conductive material integrated into an external surface of the cold side.
20. A system comprising: a thermoelectric module having areas of highly thermally conductive material integrated into a substrate comprising a hot side of the thermoelectric module; a processor mechanically coupled to a substrate comprising a cold side of the thermoelectric module; and a heat removal device directly bonded to at least one of the areas of highly thermally conductive material integrated into the substrate comprising a hot side of the thermoelectric module.
21. The system of claim 20 wherein the highly thermally conductive material is a metal selected from the group consisting of copper, aluminum, silver, copper-indium, silver-zinc, and multi-layers thereof.
22. The system of claim 20 wherein the heat removal device comprises one or more plate fins.
23. The system of claim 20 wherein the heat removal device comprises a plurality of pin fins.
24. The system of claim 20 wherein the heat removal device is a coolant-based device.
25. The system of claim 20 wherein the thermoelectric module has one or more thermal vias formed through the substrate comprising the hot side of the thermoelectric module.
26. The system of claim 20 wherein the processor has a metal casing and is bonded directly to areas of highly thermally conductive material integrated into an external surface of the substrate comprising a cold side of the thermoelectric module.
27. The system of claim 26 wherein the thermoelectric module has one or more thermal vias formed through the substrate comprising the cold side of the thermoelectric module.
THERMOELECTRIC MODULE
FIELD
[0001] Embodiments of the invention relate generally to the field of thermoelectric cooling and more specifically to a more efficient thermoelectric cooling module and its applications.
BACKGROUND
[0002] A thermoelectric module (TEM) contains a number of alternating p-type and n-type semiconductor thermoelements (e.g., n and p diodes) serially connected and disposed between two thermally conducting, but electrically insulating, substrates. When an electric current is passed through the TEM, heat is absorbed at one face (one of the substrates) and rejected at the other face. The TEM thus functions as a cooler or refrigerator. A TEM may be used as a thermoelectric cooler in applications where small size, high reliability, low power consumption and a wide operating temperature range are required.
[0003] Figure 1 illustrates a typical TEM in accordance with the prior art. TEM 100, shown in Figure 1, includes multiple n and p diode pairs 110, which are typically electrically connected in series with conductive connecting strips 115. Typically the space 111 between diode pairs 110 contains air. The diodes are disposed between two substrates 120A and 120B. Typically such substrates are formed by bonding several (e.g., three) ceramic layers together. When a current is connected through the negative terminal 125A and the positive terminal 125B, one side of the TEM (e.g., substrate 120A) will absorb heat, and the other side (e.g., substrate 120B) rejects heat. The side of the TEM that absorbs heat is referred to as the "cold side" and the side of the TEM that rejects heat is referred to as the "hot side." Which side of the TEM is the cold side and which the hot side is determined by the polarity of the current. That is, reversing the current changes the direction of the heat transfer.
[0004] Figure 1A illustrates a side view of the TEM 100.
[0005] TEMs can be used to cool a heat generating component by attaching the heat generating component to the cold side of the TEM and applying a current. TEMs can likewise be used to heat by reversing the TEM physically or reversing the current.
[0006] When used to cool a heat generating component, the TEM will not function efficiently unless a heat removal device is attached to the hot side. This is because the TEM is designed to maintain a specified temperature difference, ΔT, between the cold side of the TEM and the hot side of the TEM. As heat from the heat generating component is absorbed by the cold side, the hot side gets increasingly hot in order to maintain the temperature difference ΔT. The hot side of the TEM can get so hot that the TEM fails.
[0007] To address this situation, a heat removal device (e.g., a heat sink) is attached to the hot side. Typically, a thermal interface material (TIM) is used to reduce the contact resistance between the heat removal device, which may be a copper or aluminum block with fins, and the TEM substrate. The TIM fills the voids and grooves created by the imperfect surface finish of the two surfaces. Such voids and grooves can be highly thermally resistant. The TIMs used, typically polymers or grease, are thermally conductive materials. Even with the use of TIMs, the thermal resistance at the TEM/heat removal device interface can be excessive and detrimental for some applications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The invention may be best understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention.
In the drawings:
[0009] Figure 1 illustrates a typical TEM in accordance with the prior art;
[0010] Figure 1A illustrates a side view of the TEM in accordance with the prior art;
[0011] Figure 2 illustrates the use of a TEM to cool a microelectronics device;
[0012] Figure 3 illustrates a TEM having areas of highly thermally conductive material integrated into a substrate layer in accordance with one embodiment of the invention;
[0013] Figure 4 illustrates a TEM having metal areas integrated within the surface layer of the substrate and a heat removal device directly integrated to the metal areas in accordance with one embodiment of the invention;
[0014] Figure 5 illustrates a TEM having metal areas integrated within the surface layer of the substrate and metal traces formed through subsequent layers of the TEM substrate to act as thermal vias;
[0015] Figure 6 illustrates a TEM used to cool a microelectronics device in accordance with one embodiment of the invention;
[0016] Figure 7A illustrates a TEM having areas of highly thermally conductive material integrated into a substrate layer in accordance with one embodiment of the invention; and
[0017] Figure 7B illustrates a TEM having pads integrated within the surface layer of the substrate and pin fins directly attached to the TEM.
DETAILED DESCRIPTION
[0018] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure the understanding of this description.
[0019] Reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout the specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[0020] Moreover, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.
[0021] An embodiment of the invention may be used in the context of cooling microelectronic devices. For example, because microelectronic devices are becoming smaller with increased power requirements, the devices are producing increasing amounts of heat, which must be removed from a decreasing surface area. Figure 2 illustrates the use of a TEM to cool a microelectronics device. TEM-cooled device 200, shown in Figure 2, includes a device package 201 placed upon a PCB 202. Attached to the device package 201 is a TEM 203 to cool the device package 201. The TEM 203 has a hot side 204 and a cold side 205. A first TIM layer 206 is disposed between the device package 201 and the cold side 205 of TEM 203. A heat removal device 207 is attached to the hot side 204 of the TEM 203. The heat removal device may typically be a conductive metal block with fins formed thereon. A second TIM layer 208 is disposed between the hot side 204 of TEM 203 and the heat removal device 207.
Mounting hardware 209 is used to ensure that adequate pressure is applied. The TIM layers and the mounting hardware provide additional thermal resistance in the overall cooling solution of the TEM-cooled device 200.
[0022] The measure of the thermal resistance can be defined as ψ = ΔT/pwr, where ΔT is the difference between the temperature at the die junction, Tj, and the ambient temperature, TA, and pwr is the amount of power dissipated through the device in watts. A typical desired value for ψJA is 0.3 °C/watt. Junction temperatures are fixed by the components of the microelectronic device, and therefore, as power requirements increase for such devices, the value of ψJA must decrease proportionally.
[0023] Figure 3 illustrates a TEM having areas of highly thermally conductive material integrated into a substrate layer in accordance with one embodiment of the invention. For purposes of this discussion, highly thermally conductive materials are those having a thermal conductivity greater than approximately 200 W/m·K at 20 °C. TEM 300, shown in Figure 3, includes multiple n and p diode pairs 310 disposed between two substrates 320A and 320B. The substrates 320A and 320B may be formed by bonding several ceramic layers together. At least one of the substrates includes areas of highly thermally conductive material integrated into the substrate surface. For example, as shown in Figure 3, substrate 320A includes areas of highly thermally conductive material, shown by example as areas 321. For one embodiment, the highly thermally conductive material is a metal such as copper, aluminum, silver, or alloys thereof, such as copper-indium and silver-zinc alloys. For example, areas 321 may comprise copper pads integrated into the top surface of a multi-layered substrate for one embodiment.
[0024] In accordance with one embodiment of the invention, integrated metal areas within the surface layer of the TEM substrate can be used to integrate a heat removal device directly to the TEM. For example, metal fins can be directly soldered or brazed to the integrated metal areas within the surface layer of the TEM substrate. For such an embodiment, a TIM layer between the heat removal device and the TEM is not required, and therefore, the thermal resistance associated with the TIM layer is avoided. A typical value of the thermal resistance across the TIM layer is approximately 0.1 °C/watt, and therefore, obviating the need for such a layer can significantly reduce the thermal resistance, ψJA (e.g., from 0.3 °C/watt to 0.2 °C/watt).
[0025] Figure 4 illustrates a TEM having metal areas integrated within the surface layer of the substrate and a heat removal device directly integrated to the metal areas in accordance with one embodiment of the invention. TEM 400, shown in Figure 4, includes a TEM 300 in accordance with an embodiment of the invention as described above in reference to Figure 3. In accordance with one embodiment of the invention, a number of plate fins 450 are directly attached to the TEM 300. The plate fins 450 may be soldered or brazed to the metal areas (not shown) integrated into the surface layer of the TEM substrate 320A.
[0026] The TEM 400, having a heat removal device directly integrated to the TEM substrate, reduces thermal resistance by rendering a TIM material between the heat removal device and the TEM unnecessary. Additional thermal resistance can be avoided by providing thermal vias through the TEM substrate.
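To make the thermal budget of paragraphs [0022] and [0024] concrete, a back-of-the-envelope check can be written as follows. This is a minimal Python sketch: the 0.3 and 0.1 °C/watt figures come from the text above, while the 50 W power level is a hypothetical example, not from the disclosure.

# Junction-to-ambient thermal resistance: psi = delta_T / power (deg C per watt).
psi_ja_with_tim = 0.3   # deg C/watt, typical overall target, per paragraph [0022]
psi_tim_layer = 0.1     # deg C/watt, typical resistance of one TIM layer, per [0024]
psi_ja_direct = psi_ja_with_tim - psi_tim_layer  # TIM layer removed by direct bonding

power_w = 50.0  # hypothetical device power dissipation (illustrative only)

delta_t_with_tim = psi_ja_with_tim * power_w  # junction rise above ambient, with TIM
delta_t_direct = psi_ja_direct * power_w      # junction rise, direct-bonded fins

print(f"dT with TIM layer: {delta_t_with_tim:.1f} C")  # 15.0 C
print(f"dT direct-bonded:  {delta_t_direct:.1f} C")    # 10.0 C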
As an example of such thermal vias, for one embodiment of the invention, the surface layer of the TEM substrate has highly thermally conductive areas integrated thereon and, additionally, has highly thermally conductive traces formed through subsequent layers of the TEM substrate to act as thermal vias.
[0027] Figure 5 illustrates a TEM having metal areas integrated within the surface layer of the substrate and metal traces formed through subsequent layers of the TEM substrate to act as thermal vias. TEM 500, shown in Figure 5, has multiple n and p diode pairs 510 electrically connected with conductive connecting strips 515. The diodes are disposed between two substrates 520A and 520B. As shown in Figure 5, substrate 520A is formed of three ceramic layers, namely, 522, 524, and 526. Surface layer 522 of substrate 520A has integrated therein areas 521 of highly thermally conductive material such as copper or aluminum. Ceramic layers 524 and 526 have metal traces 534 and 536, respectively, integrated therein. Metal traces 534 are in contact with areas 521 and are also in contact with metal traces 536. As such, a thermal via is created through the TEM substrate 520A. The thermal vias are essentially thermally conductive pathways through the substrate, which may be metal tubes of any suitable geometry. For one embodiment, metal traces 536 are dimensioned and positioned so as not to short the conductive connecting strips 515, while the dimensions and/or position of metal traces 534 may vary. Implementation of thermal vias through the TEM substrate further reduces thermal resistance.
[0028] Figure 6 illustrates a TEM used to cool a microelectronics device in accordance with one embodiment of the invention. TEM-cooled device 600, shown in Figure 6, includes a device package 601 placed upon a PCB 602. Attached to the device package 601 is a TEM 603 to cool the device package 601. The TEM 603 has an upper substrate 620A and a lower substrate 620B and includes multiple n and p diode pairs 610 disposed between the two substrates 620A and 620B. The upper substrate 620A, which is the hot side of the TEM 603, has areas 621 of highly thermally conductive material (e.g., copper) integrated into its upper surface 630. The areas 621 are in contact with thermal vias 635 extending through the substrate between the n and p diode pairs. A heat removal device is connected directly to the TEM 603 through the areas 621. For example, as shown, plate fins 650, which may be copper or some other suitable material, are soldered or brazed to areas 621, which also may be copper. For the embodiment illustrated by TEM-cooled device 600, a TIM layer 606 is disposed between the device package 601 and the TEM substrate 620B, which is the cold side of TEM 603; however, for alternative embodiments as discussed below, the TIM layer 606 may also be rendered unnecessary.
Alternative Embodiments
[0029] Embodiments of the invention provide a TEM having highly thermally conductive areas integrated into a surface layer of a substrate on the hot side of the TEM, which allow a heat removal device to be attached directly to the TEM. This renders unnecessary a layer of TIM between the TEM and the heat removal device. Embodiments, as discussed above, describe plate fins directly soldered or brazed to metal areas integrated into the surface of the TEM substrate. For alternative embodiments, any heat removal device that can be attached directly to the integrated areas of the highly thermally conductive metal may be used.
[0030] For example, Figure 7A illustrates a TEM having areas of highly thermally conductive material integrated into a substrate layer in accordance with one embodiment of the invention. TEM 700, shown in Figure 7A, includes multiple n and p diode pairs 710 disposed between two substrates 720A and 720B. As shown in Figure 7A, substrate 720A includes integrated pads 721 for directly attaching pin fins. The pads 721 may be copper or aluminum or other highly thermally conductive metals or materials. Figure 7B illustrates a TEM 700 having pads (not shown) integrated within the surface layer of the substrate and pin fins 750 directly attached to the TEM 700. The pin fins 750 may be soldered or brazed to the pads integrated into the surface layer of the TEM substrate 720A.
[0031] For alternative embodiments, coolant-based heat removal devices (e.g., cold plates) can be attached directly to the TEM substrate and used in conjunction with a remote heat exchanger.
[0032] While embodiments of the invention discussed above have described a TEM having highly thermally conductive areas integrated into the surface of the substrate of the hot side of the TEM, alternative embodiments may include such areas integrated into the surface of the substrate of the cold side of the TEM, additionally or alternatively. For example, some die packages include a metal exterior surface area. Metal areas integrated into the surface of the substrate of the cold side of the TEM could be directly bonded to the metal surface of the die case. This would render the TIM layer typically used between the die and the TEM unnecessary, thus further reducing thermal resistance.
[0033] Embodiments of the invention having thermal vias through the substrate have been discussed above with the dimension and position of the thermal vias limited by the electronics of the TEM (e.g., the position of the interconnection of the n and p diode pairs). For alternative embodiments of the invention, the size and position of the thermal vias can be more liberal, provided the thermal vias do not extend completely through the substrate of the TEM. For example, for a TEM having a substrate formed from three bonded layers, the layer proximate to the n and p diode pairs may be comprised entirely of an electrically insulating material, while a center layer may have extensive thermal vias formed through it and positioned as desired.
[0034] Embodiments of the invention have been discussed in the context of cooling a microelectronic device. It will be apparent to one skilled in the art that various embodiments of the invention may be employed in all applications where a TEM is desired to provide efficient cooling.
[0035] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
Backside contact structures include etch selective materials to facilitate backside contact formation. An integrated circuit structure includes a frontside contact region, a device region below the frontside contact region, and a backside contact region below the device region. The device region includes a transistor. The backside contact region includes a first dielectric material under a source or drain region of the transistor, and a second dielectric material laterally adjacent to the first dielectric material and under a gate structure of the transistor. A non-conductive spacer is between the first and second dielectric materials. The first and second dielectric materials are selectively etchable with respect to one another and the spacer. The backside contact region may include an interconnect feature that, for instance, passes through the first dielectric material and contacts a bottom side of the source/drain region, and/or passes through the second dielectric material and contacts the gate structure.
1. An integrated circuit structure, comprising: a device region including a transistor, the transistor including a source or drain region and a gate structure; a frontside contact region above the device region; and a backside contact region below the device region, the backside contact region including a first dielectric material under the source or drain region of the transistor, a second dielectric material laterally adjacent to the first dielectric material and under the gate structure of the transistor, and a non-conductive spacer laterally between the first dielectric material and the second dielectric material, wherein the first dielectric material and the second dielectric material are selectively etchable with respect to each other and the non-conductive spacer.
2. The integrated circuit structure of claim 1, wherein the backside contact region further includes an interconnect feature that passes through the first dielectric material and contacts a bottom surface of the source or drain region.
3. The integrated circuit structure of claim 2, further comprising a contact structure on the bottom surface of the source or drain region, the contact structure including metal, wherein the interconnect feature directly contacts the contact structure, and wherein the contact structure is conformal to the bottom surface of the source or drain region.
4. The integrated circuit structure of claim 1, wherein the backside contact region further includes an interconnect feature that passes through the second dielectric material and contacts the gate structure, and wherein the gate structure includes a high-k dielectric and a gate electrode, the interconnect feature contacting the gate electrode.
5. The integrated circuit structure of claim 1, wherein the device region further includes an isolation wall structure, the isolation wall structure including an insulating material and a conductor within the insulating material.
6. The integrated circuit structure of claim 5, wherein one of the first dielectric material or the second dielectric material is also under the isolation wall structure, and wherein the backside contact region further includes an interconnect feature that passes through the first dielectric material or the second dielectric material under the isolation wall structure and contacts the conductor.
7. The integrated circuit structure of claim 1, wherein the first dielectric material includes nitride, the second dielectric material includes oxide, and the non-conductive spacer includes oxynitride.
8. The integrated circuit structure of any one of claims 1-7, wherein the transistor further includes one or more nanowires or nanoribbons or nanosheets, and the gate structure wraps around those one or more nanowires or nanoribbons or nanosheets.
9. The integrated circuit structure of any one of claims 1-7, wherein the transistor further includes a fin structure, and the gate structure is on a top wall and sidewalls of the fin structure.
10. The integrated circuit structure of any one of claims 1-7, wherein the transistor is a first transistor, the device region further includes a second transistor, and the first transistor and the second transistor are arranged in a stacked configuration with respect to each other, such that the first transistor is a bottom transistor and the second transistor is above the first transistor, and wherein the second transistor is directly connected to the frontside contact region.
11. The integrated circuit structure of any one of claims 1-7, wherein the transistor is a first transistor, and the non-conductive spacer is a first non-conductive spacer laterally between the first dielectric material and a first portion of the second dielectric material, the device region further including a second transistor, the second transistor including a source or drain region and a gate structure, wherein a second portion of the first dielectric material is below the source or drain region of the second transistor, a second portion of the second dielectric material is laterally adjacent to the second portion of the first dielectric material and below the gate structure of the second transistor, and a second non-conductive spacer is laterally between the first dielectric material and the second portion of the second dielectric material.
12. An integrated circuit structure, comprising: a device region including a first transistor and a second transistor, each of the first transistor and the second transistor including a source or drain region and a gate structure; a frontside contact region above the device region; and a backside contact region below the device region, the backside contact region including: a first dielectric material under the source or drain regions of both the first transistor and the second transistor, a second dielectric material under the gate structures of both the first transistor and the second transistor, and a non-conductive spacer laterally between the first dielectric material and the second dielectric material, wherein each of the first dielectric material, the second dielectric material, and the non-conductive spacer is present in a same horizontal plane; a first interconnect feature that passes through the first dielectric material and contacts a bottom surface of the source or drain region of the first transistor; and a second interconnect feature that passes through the second dielectric material and contacts the gate structure of the second transistor.
13. The integrated circuit structure of claim 12, further comprising a contact structure on the bottom surface of the source or drain region of the first transistor, the contact structure including metal, wherein the first interconnect feature directly contacts the contact structure, and wherein the contact structure is conformal to the bottom surface of the source or drain region of the first transistor.
14. The integrated circuit structure of claim 12, wherein the gate structure of the second transistor includes a high-k dielectric and a gate electrode, the second interconnect feature contacting the gate electrode.
15. The integrated circuit structure of claim 12, wherein the device region further includes an isolation wall structure, the isolation wall structure including an insulating material and a conductor within the insulating material, wherein one of the first dielectric material or the second dielectric material is also under the isolation wall structure.
16. The integrated circuit structure of claim 15, wherein the backside contact region further includes a third interconnect feature that passes through the first dielectric material or the second dielectric material under the isolation wall structure and contacts the conductor.
17. The integrated circuit structure of claim 12, wherein the first dielectric material includes nitride, the second dielectric material includes oxide, and the non-conductive spacer includes oxynitride.
18. The integrated circuit structure of any one of claims 12-17, wherein one or both of the first transistor and the second transistor further include one or more nanowires or nanoribbons or nanosheets, and the corresponding gate structure wraps around those one or more nanowires or nanoribbons or nanosheets.
19. The integrated circuit structure of any one of claims 12-17, wherein one or both of the first transistor and the second transistor further include a fin structure, and the corresponding gate structure is on a top wall and sidewalls of the fin structure.
20. The integrated circuit structure of any one of claims 12-17, wherein the device region includes a lower device region and an upper device region, and the first transistor and the second transistor are part of the lower device region.
21. An integrated circuit structure, comprising: a device region including a first transistor, a second transistor, and a third transistor, each of the first transistor, the second transistor, and the third transistor including a source or drain region and a gate structure; a frontside contact region above the device region, the frontside contact region including a frontside interconnect feature that is directly connected to at least one of the source or drain region or the gate structure of the third transistor; and a backside contact region below the device region, the backside contact region including a first dielectric material under the source or drain regions of both the first transistor and the second transistor, a second dielectric material under the gate structures of both the first transistor and the second transistor, and a non-conductive spacer laterally between the first dielectric material and the second dielectric material, wherein: one of the first dielectric material, the second dielectric material, and the non-conductive spacer is a nitride; one of the first dielectric material, the second dielectric material, and the non-conductive spacer is an oxide; and one of the first dielectric material, the second dielectric material, and the non-conductive spacer is an oxynitride, such that the first dielectric material and the second dielectric material are selectively etchable with respect to each other and the non-conductive spacer.
22. The integrated circuit structure of claim 21, further comprising: a first backside interconnect feature that passes through the first dielectric material and contacts a bottom surface of the source or drain region of the first transistor; and a second backside interconnect feature that passes through the second dielectric material and contacts the gate structure of the second transistor.
23. The integrated circuit structure of claim 22, further comprising a contact structure on the bottom surface of the source or drain region of the first transistor, the contact structure including metal, wherein the first backside interconnect feature directly contacts the contact structure, and wherein the contact structure is conformal to the bottom surface of the source or drain region of the first transistor.
24. The integrated circuit structure of claim 22 or 23, wherein the gate structure of the second transistor includes a high-k dielectric and a gate electrode, the second backside interconnect feature contacting the gate electrode.
25. The integrated circuit structure of claim 22 or 23, wherein the device region further includes an isolation wall structure, the isolation wall structure including an insulating material and a conductor within the insulating material, wherein one of the first dielectric material or the second dielectric material is also under the isolation wall structure, and wherein the backside contact region further includes a third backside interconnect feature that passes through the first dielectric material or the second dielectric material under the isolation wall structure and contacts the conductor.
BACKSIDE CONTACTS OF SEMICONDUCTOR DEVICES
BACKGROUND
Integrated circuits continue to scale to smaller feature sizes and higher transistor densities. Three-dimensional (3D) integration increases transistor density by utilizing the Z dimension, that is, by building upward rather than only laterally outward in the X and Y dimensions. Another development available for increasingly densely packed semiconductor devices is the use of both frontside and backside interconnects to establish electrical connections to those devices. Regardless of whether the integrated circuit includes one device layer (or, equivalently, one "device region") or multiple such layers, the use of backside interconnects can improve various aspects of semiconductor device configuration and performance, especially with respect to density constraints. However, there remain a number of non-trivial issues associated with such backside connections.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a cross-sectional view of an integrated circuit structure having a stacked transistor configuration and including a plurality of etch-selective materials and a backside interconnect structure, according to an embodiment of the present disclosure. The cross-section is taken perpendicular to the gate structures and through the channel region.
FIG. 1B is a cross-sectional view of an integrated circuit structure having a non-stacked transistor configuration and including a plurality of etch-selective materials and a backside interconnect structure, according to an embodiment of the present disclosure. The cross-section is taken perpendicular to the gate structures and through the channel region.
FIGS. 2A through 2F illustrate cross-sections of example fin structures that can be used in a stacked transistor configuration, according to various embodiments of the present disclosure, in which upper and lower regions of the same fin structure are used concurrently for separate transistor devices. As will be further appreciated in light of this disclosure, a fin structure including only the upper or lower portion of such an example fin structure may be used in a non-stacked transistor configuration, according to other embodiments. The cross-sections are taken perpendicular to the fin structures.
FIGS. 3A through 9D are various cross-sectional views illustrating various stages of fabricating backside contacts of an integrated circuit structure, according to some embodiments of the present disclosure. The location of the cross-section varies so as to best show example structural features, and includes: vertical cross-sections taken parallel to the gate structure and through the channel region; vertical cross-sections taken parallel to the gate structure and through the source or drain region; vertical cross-sections taken perpendicular to the gate structure and through the channel region; and horizontal cross-sections taken through the backside contact region.
FIG. 10 illustrates a computing system implemented with one or more integrated circuit structures configured in accordance with an embodiment of the present disclosure.
These figures depict various embodiments of the present disclosure for purposes of illustration only. Numerous variations, configurations, and other embodiments will be apparent from the following detailed discussion. Furthermore, as will be appreciated, the figures are not necessarily drawn to scale or intended to limit the described embodiments to the specific configurations shown.
For instance, while some figures generally indicate straight lines, right angles, and smooth surfaces, an actual implementation of the disclosed techniques may have less than perfect straight lines and right angles, and some features may have surface topography or otherwise be non-smooth, given real-world limitations of fabrication processes. In short, the figures are provided merely to show example structures.
DETAILED DESCRIPTION
Backside contact formation techniques and structures are described. The techniques use two or more etch-selective materials to facilitate the formation of backside contacts. As will be appreciated in light of this disclosure, the etch selectivity allows for a more forgiving process, as compared to standard lithography techniques that rely on tight lithographic patterning on the backside, such as extreme ultraviolet (EUV) lithography, argon fluoride immersion (ArFi) lithography, or other such techniques. To this end, the use of multiple etch-selective materials (at least one of which is a dielectric) facilitates a relatively convenient and easy process (relative to existing approaches) by which backside contacts to source/drain regions and/or gate structures can be formed, and can also help prevent neighboring conductive structures from shorting to one another due to variability in lithography and/or patterning processes. The techniques are particularly well-suited for stacked transistor configurations having multiple device levels along the height of a fin structure, but can also be used in non-stacked configurations that include backside contacts. Likewise, the techniques can be applied to planar and non-planar transistor architectures, including FinFETs (e.g., double-gate and tri-gate transistors) and nanowire or nanoribbon or nanosheet transistors (e.g., gate-all-around transistors). In a more general sense, the techniques provided herein can be used in any integrated circuit structure that includes backside contacts, as will be further appreciated in light of this disclosure.
GENERAL OVERVIEW
As noted above, there remain a number of non-trivial issues associated with forming backside connections. In more detail, backside contact formation is a technique in which the semiconductor substrate is reoriented to enable processing on and/or through the backside of the substrate, the backside being opposite the side (i.e., the "frontside") on which the one or more device layers are formed. The processing includes using lithography and patterning techniques to expose the backside of one or more of the source region, the drain region, and/or the gate electrode. Once exposed, interconnect structures (e.g., contacts, vias, conformal conductive layers, metal lines) can be fabricated to establish electrical contact with the exposed backsides of the source region, drain region, and/or gate electrode. The use of backside interconnects can be helpful in semiconductor device manufacture, particularly as device density increases and the ability to connect densely packed and closely spaced devices becomes more challenging. However, standard patterning techniques (e.g., lithography, etching) used to remove backside portions of the substrate to expose selected backside portions of the semiconductor devices may lack the necessary precision and/or accuracy, especially as dimensions continue to shrink.
For example, in view of the increasingly tight spacing between adjacent semiconductor devices and the tight spacing between structures within a single device (e.g., the source region, the drain region, and the channel region therebetween), even EUV photolithography is prone to alignment errors and poor electrical connections, which can short-circuit devices or structures in the integrated circuit. The resulting manufacturing delays and yield losses make backside interconnect solutions less attractive.

Therefore, according to some embodiments, the present disclosure provides integrated circuit structures that include backside contacts, and corresponding formation techniques that help alleviate these problems. The structures and techniques described herein use different materials that have etch selectivity with respect to one another. The use of different etch-selective materials (responsive to different etch chemistries) allows for a relatively loose registration/alignment scheme. Thus, for example and according to an exemplary embodiment, a first etch-selective material is used as a mask to protect a first set of features (e.g., source and drain regions), and a second etch-selective material is used as a mask to protect a second set of features (e.g., gate structures). In this exemplary case, when a first etch scheme is used to remove the first etch-selective material to expose one or more features of the first set of features to be contacted, the second etch-selective material protecting the second set of features remains (for the most part) intact. Similarly, when a second etch scheme different from the first is used to remove the second etch-selective material to expose one or more features of the second set of features to be contacted, the first etch-selective material protecting the first set of features remains (for the most part) intact. When a given etch scheme is selective to a particular material, the etch scheme tends to remove that material at a much lower rate than the rate at which it removes one or more other materials also exposed when the etch scheme is applied (e.g., 2 times lower, or 3 times lower, or 10 times lower, or 20 times lower, or an even higher multiple lower, or possibly not removed at all). In some embodiments, the one or more etch-selective materials are dielectric materials, which additionally provide electrical isolation (in addition to etch selectivity).

In some exemplary embodiments, a non-conductive spacer between the first etch-selective material and the second etch-selective material further assists the selective etch processes and the provision of back contacts. As will be appreciated, when dielectrics or other non-conductive etch-selective materials are used, errors in patterning conductive structures do not necessarily cause electrical shorts to adjacent conductive features or structures. That is, even if a via connected to the source region, drain region, gate electrode, or other part of the semiconductor device is large enough to extend to an adjacent conductive structure, the intervening etch-selective dielectric or other non-conductive material prevents a short circuit between the two structures. Many variations and other embodiments will be understood in light of the present disclosure.
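For purposes of illustration only, the selectivity relationship described above can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not part of this disclosure: the etch rates, the over-etch fraction, and the helper names are all assumed values chosen for demonstration.

```python
# Illustrative sketch only: rates, over-etch fraction, and names are assumed
# for demonstration and are not part of this disclosure.

def selectivity(rate_target_nm_per_min, rate_mask_nm_per_min):
    """Etch-rate ratio of the material to be removed vs. the masking material."""
    return rate_target_nm_per_min / rate_mask_nm_per_min

def mask_loss(target_depth_nm, sel, overetch_frac=0.2):
    """Mask thickness consumed while clearing the target, with a 20% over-etch."""
    return target_depth_nm * (1.0 + overetch_frac) / sel

# Example: an etch that removes the exposed material at 30 nm/min but the
# masking material at only 1.5 nm/min is "20 times lower" in the sense used
# above; clearing 60 nm of target with 20% over-etch consumes ~3.6 nm of mask.
s = selectivity(30.0, 1.5)
print(s)                   # 20.0
print(mask_loss(60.0, s))  # ~3.6
```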
Exemplary architecture

FIGS. 1A and 1B each show a cross-sectional view of an integrated circuit structure including a plurality of etch-selective materials and a backside interconnect structure, according to some embodiments of the present disclosure. Both exemplary embodiments are configured with non-planar structures including fin structures and/or nanowires in the channel region, although the techniques can also be used with planar structures. As can be seen, the cross sections shown are taken perpendicular to the gate structure and through the channel region. FIG. 1A shows a stacked transistor architecture including nanowires in the channel region, and FIG. 1B shows a non-stacked or single-device-layer architecture that includes nanowires in the channel region on the left side of FIG. 1B, and a fin in the channel region on the right side of FIG. 1B. The similarities between the two architectures, each including back contacts, will be apparent.

As can be seen with reference to FIG. 1A, the exemplary integrated circuit includes a stacked transistor configuration including an upper portion and a lower portion separated by an isolation region 106, and in which the channel regions of the fin structure have been processed into nanowires. In particular, the upper portion of the fin structure is part of the upper device region 108, and the lower portion of the fin structure is part of the lower device region 104. The lower gate structure of the lower device region 104 wraps around the nanowires 116A and includes a gate dielectric 122A and a gate electrode 120A, and the upper gate structure of the upper device region 108 wraps around the nanowires 116B and includes a gate dielectric 122B and a gate electrode 120B. Likewise, the lower device region 104 includes source and drain regions 124A adjacent to the nanowires 116A, and the upper device region 108 includes source and drain regions 124B adjacent to the nanowires 116B. Front contacts 125 are provided on the source/drain regions 124B, and this exemplary embodiment further includes an insulator material 127B adjacent to the source/drain regions. As can be further seen, a back contact region 103 is applied to the back side 102 of the structure. In this exemplary embodiment, the back contact region 103 includes source/drain contacts 138, a first etch-selective material 136, a second etch-selective material 140, spacers 126, and interconnect features 128. Similarly, a front contact region 105 can be applied to the front side 101 of the structure. The front contact region 105 can include local contacts and/or interconnects, one or more interconnect or metallization layers (e.g., metal layers M0-MN, as shown by the dashed lines in FIGS. 1A-B), and intervening passivation or etch stop layers, as needed.

The integrated circuit of FIG. 1B includes a non-stacked transistor configuration similar to, for example, the lower device region 104 of FIG. 1A. The relevant discussion applies equally to both structures.

Although only a single fin structure and two gate structures are shown in each of FIGS. 1A-1B, as will be understood, any number of fin structures and gate structures may be used. The fin structures shown in FIGS. 1A-1B each include two nanoribbons; however, any number of nanoribbons or different channel regions can be used in each fin structure. Additionally, other embodiments may have fins in the channel region instead of nanowires, or some other combination of fins, nanowires, and/or nanoribbons and/or nanosheets, as will be further understood.
Note that complementary circuits in a stacked architecture may include, for example, p-type devices in the upper fin portion and n-type devices in the lower fin portion, or vice versa, although other embodiments may include different arrangements (e.g., upper and lower portions that are both n-type or both p-type, or alternating fin structures with alternating polarities). Similarly, complementary circuits in a non-stacked architecture may include an alternating pattern of p-type fins and n-type fins, although other embodiments may include different arrangements (e.g., all n-type or all p-type fins, or groups of p-type or n-type fins, or the like). Any number of other configurations will be apparent, and all can benefit from the backside techniques provided herein.

The substrate 112 may have any number of standard configurations, for example, a bulk substrate, a semiconductor-on-insulator substrate, or a multilayer substrate. In some exemplary embodiments, the substrate 112 may be, for example, a bulk silicon or germanium or gallium arsenide substrate. In other embodiments, the substrate 112 may be a multilayer substrate configuration, such as a silicon-on-insulator (SOI) substrate. In still other embodiments, the substrate 112 is optional, or is removed at some point in the process. For example, in some embodiments, the substrate 112 is removed after the lower and upper device regions are formed, to allow further desired processing below the lower device region, such as forming the back contact region 103 variously described herein. In other embodiments, if not completely removed, the back contact region 103 may be formed in the substrate 112.

The fin structures in a stacked or non-stacked architecture can be configured in any number of ways, including fins native to the substrate 112, replacement fins or fin structures, and/or multilayer structures suitable for forming nanowires (or nanoribbons or nanosheets, as the case may be; for convenience of discussion, all of these may be collectively referred to as nanowires). For example, in a stacked architecture, the upper fin portion may include a first semiconductor material, and the lower fin portion may include a second semiconductor material compositionally different from the first semiconductor material. In another exemplary stacked architecture, the upper fin portion may include a semiconductor material having one crystal orientation, and the lower fin portion may include the same semiconductor having a different crystal orientation. Exemplary semiconductor materials include, for example, silicon, germanium, silicon germanium (SiGe), semiconductor oxides such as indium gallium zinc oxide (IGZO), indium gallium arsenide (InGaAs), indium arsenide (InAs), gallium antimonide (GaSb), or other suitable semiconductor materials. Alternatively, the upper and lower fin portions may include the same semiconductor material and configuration. FIGS. 2A-F show various exemplary fin structures that can be used in a stacked architecture and will be discussed in turn. Any such structure shown in FIGS. 2A-F can be substituted into FIG. 1A.
As will be further understood, for a non-stacked architecture, any upper or lower half of the structures shown in FIGS. 2A-F can be substituted into FIG. 1B.

The isolation region 106 electrically isolates the upper device region 108 from the lower device region 104, and may be implemented, for example, using an insulator layer (e.g., oxide or nitride) or by doping or fixed-charge isolation. The insulators 127A-B adjacent to the isolation region 106 may be any suitable insulator material, such as silicon dioxide, silicon nitride, silicon carbide, silicon oxynitride, polymer, porous versions of any of these, or any combination thereof (e.g., silicon oxide on top and silicon nitride on bottom, or vice versa). In some embodiments, the isolation region 106 and the insulators 127A-B are the same material, while in other embodiments they are compositionally different to facilitate the formation of upper and/or lower device region features (for example, so that the two materials provide etch selectivity, e.g., for an etch that removes the insulators 127A-B but not the isolation region 106, or vice versa). Many such configurations and variations will be apparent in light of the present disclosure.

In the exemplary embodiment shown, the upper and lower gate electrodes are electrically isolated from one another by the isolation region 106. In other embodiments, the upper and lower gate electrodes may be electrically connected to one another through the isolation region 106. In addition to the gate dielectrics 122A-B and the gate electrodes 120A-B, the upper and lower gate structures also include gate spacers 123A-B. Any number of gate structure configurations can be used. The gate spacers 123A-B may be, for example, silicon nitride or silicon dioxide or carbon-doped oxide or oxynitride or carbon-doped oxynitride. The gate dielectrics 122A-B may be, for example, any suitable gate dielectric material or materials, such as silicon dioxide or high-k gate dielectric materials. Examples of high-k gate dielectric materials include, for instance, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate. In some embodiments, when a high-k material is used, an annealing process may be carried out to improve gate dielectric quality. Further, the gate electrodes 120A-B may include a wide range of suitable metals or metal alloys, such as aluminum, tungsten, titanium, tantalum, copper, titanium nitride, ruthenium, or tantalum nitride, for example.

In some embodiments, the gate dielectrics 122A-B and/or the gate electrodes 120A-B may include a multilayer structure of two or more material layers or components. For instance, in one such embodiment, the gate dielectric structure 122A or 122B (or both) is a bi-layer structure having a first dielectric material (e.g., silicon dioxide) in contact with the corresponding channel region and a second dielectric material (e.g., hafnium oxide) in contact with the first dielectric material, the first dielectric material having a dielectric constant that is lower than the dielectric constant of the second dielectric material.
Likewise, the gate electrode structure 120A or 120B (or both) may include a central metal plug portion (e.g., tungsten) with one or more outer work-function layers and/or barrier layers (e.g., tantalum, tantalum nitride, an aluminum-containing alloy), and/or a resistance-reducing cap layer (e.g., copper, gold, cobalt, tungsten). In some embodiments, the gate dielectrics 122A-B and/or the gate electrodes 120A-B may include grading (increasing or decreasing, as the case may be) of the concentration of one or more materials therein.

Note also that the gate structure of the upper device region 108 may be the same as or different from the gate structure of the lower device region 104. In some exemplary embodiments, for instance, the gate electrode 120B of the upper gate structure includes a p-type work-function metal suitable for PMOS devices, and the gate electrode 120A of the lower gate structure includes an n-type work-function metal suitable for NMOS devices. Likewise, the gate dielectric 122B of the upper gate structure may include a first gate dielectric material, and the gate dielectric 122A of the lower gate structure may include a second gate dielectric material compositionally different from the first gate dielectric material. In any such cases, the upper gate dielectric structure 122B and the lower gate dielectric structure 122A may be implemented with intentionally different thicknesses tuned for different types of transistor devices. For example, a relatively thick gate dielectric can be used for high-voltage transistor devices, while a relatively thin gate dielectric can be used for logic transistor devices.

The source and drain regions 124A-B can be implemented using any number of standard processes and configurations. As can be seen in this exemplary embodiment, the source/drain regions are epitaxial source/drain regions that are provided after the relevant portion of the fin or fin structure is masked and etched away or otherwise removed. Thus, the source/drain material may be compositionally different from the underlying fin structure or substrate 112 material and/or the channel material. In addition to the standard source/drain formation process, note that the etch-selective material 356 can be deposited into the source/drain trenches before depositing the desired source/drain material for the source/drain regions 124A (as will be described in turn with reference to FIGS. 3B-C). The depth of the etch-selective material 356 may be set by an etch-back process, or the material may otherwise be deposited to a desired level. The shape of the source/drain regions can vary widely depending on the processing used.

For example, in some embodiments, the bottom of the source/drain trench used to form the source/drain region 124A is faceted, which in turn causes the etch-selective material 356 to take on a faceted shape, which in turn is imparted to the bottom of the source/drain region 124A. In another example, the etch-selective layer 356 may have a rectangular shape, but the source/drain region 124B may take on a faceted shape due to preferential growth rates of the epitaxial material in particular crystal orientations. In some such exemplary cases, an air gap or void region may be included between the etch-selective material 356 and the source/drain material 124B, due to such faceting of the source/drain material 124B.
In another exemplary case, the source/drain regions 124B may be overgrown from their respective trenches and have faceted tops, with the corresponding source or drain contact structure 125 landing on that faceted excess portion. Alternatively, in other embodiments, any excess portion of the epitaxial source/drain regions 124A and/or 124B having faceted tops may be removed (e.g., via chemical mechanical planarization, or CMP). As will be further understood, in some embodiments, removing the original source/drain regions and replacing them with epitaxial source/drain material can result in an upper source/drain region portion that is wider (e.g., by 1-10 nm) than the underlying fin structure (the epitaxially overgrown portion). Any combination of these features may result.

The source/drain regions 124A and/or 124B can be any suitable semiconductor material. For example, PMOS source/drain regions may include, for instance, group IV semiconductor materials such as silicon, germanium, SiGe, germanium tin (GeSn), or SiGe alloyed with carbon (SiGe:C). Exemplary p-type dopants in silicon, SiGe, or germanium include boron, gallium, indium, and aluminum. NMOS source/drain regions may include, for instance, III-V semiconductor materials, such as compounds of two or more of indium, aluminum, arsenic, phosphorus, gallium, and antimony. Some exemplary compounds include, but are not limited to, indium aluminum arsenide (InAlAs), indium arsenide phosphide (InAsP), InGaAs, indium gallium arsenide phosphide (InGaAsP), GaSb, gallium aluminum antimonide (GaAlSb), indium gallium antimonide (InGaSb), or indium gallium phosphide antimonide (InGaPSb). Exemplary n-type dopants in silicon, germanium, or SiGe include phosphorus, arsenic, and antimony. In a more general sense, the source/drain regions can be any semiconductor material suitable for a given application. In some specific such exemplary embodiments, for example, the source/drain regions 124A and/or 124B include SiGe (e.g., Si1-xGex, where 0.20≤x≤0.99; or SixGey:Cz, where 8≤x≤16, 80≤y≤90, 1≤z≤4, and x+y+z=100). In another embodiment, the source/drain region 124A includes an indium-containing compound (e.g., InyAl1-yAs, where 0.60≤y≤1.00; or InAsyP1-y, where 0.10≤y≤1.00; or InyGa1-yAszP1-z, where 0.25≤y≤1.00 and 0.50≤z≤1.00; or InxGa1-xSb, where 0.25≤x≤1.00; or InxGa1-xPySb1-y, where 0.25≤x≤1.00 and 0.00≤y≤0.10).

In some embodiments, the source/drain regions 124A and/or 124B may include a multilayer structure, such as a germanium cap on a SiGe body, or a germanium body with a carbon-containing SiGe spacer or liner between the corresponding channel region and the germanium body. In any such case, a portion of the source/drain regions 124A and/or 124B may have a graded composition, such as a graded germanium concentration to facilitate lattice matching, or a graded dopant concentration to facilitate low contact resistance. As will be appreciated, any number of source/drain configurations can be used, and the present disclosure is not intended to be limited to any particular such configurations.
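For purposes of illustration only, the composition ranges recited above can be expressed as simple bounds checks. The following Python sketch merely restates those ranges; the helper names are hypothetical, and the snippet is not part of this disclosure.

```python
# Illustrative only: the numeric bounds restate the ranges recited above;
# the function names are hypothetical.

def in_si1x_gex_range(x):
    """Si1-xGex with 0.20 <= x <= 0.99."""
    return 0.20 <= x <= 0.99

def in_sixgey_cz_range(x, y, z):
    """SixGey:Cz in atomic percent: 8<=x<=16, 80<=y<=90, 1<=z<=4, x+y+z=100."""
    return (8 <= x <= 16) and (80 <= y <= 90) and (1 <= z <= 4) and (x + y + z == 100)

print(in_si1x_gex_range(0.70))        # True: Si0.30Ge0.70 is within the range
print(in_sixgey_cz_range(14, 84, 2))  # True: Si14Ge84:C2 sums to 100
```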
As will be further apparent, source and drain contact structures 125 may also be included in the final structure. Note that even where bottom contacts are provided for the source/drain regions 124A, those source/drain regions 124A may also have front contacts similar to the contacts 125. The source/drain contact structures 125 may have any number of standard configurations. In some exemplary embodiments, the contact structure 125 includes a contact metal and a conductive liner or barrier layer deposited in a contact trench formed in an insulator layer over the source and drain regions 124B. The liner can be, for example, tantalum or tantalum nitride, and the metal can be any suitable plug/core material, such as tungsten, aluminum, ruthenium, cobalt, copper, or alloys thereof. In some cases, the contact structures 125 may be optimized p-type and n-type contact structures, similar to p-type and n-type gate electrode structures. For example, according to some such embodiments, the liner may be titanium for NMOS source/drain contact structures, and nickel or platinum for PMOS source/drain contact structures. In other embodiments, in addition to the contact metal and any liner, the contact structure 125 may include a resistance-reducing material (e.g., nickel, platinum, nickel-platinum, cobalt, titanium, germanium, gold, or alloys thereof, such as a germanium-gold alloy, or a multilayer structure of titanium and titanium nitride, all of which have good contact resistance). Other embodiments may be configured differently. In a more general sense, any number of suitable source/drain contact structures can be used in accordance with embodiments of the present disclosure, and the present disclosure is not intended to be limited to any particular such contact structure configurations.

As can be further seen in FIGS. 1A-1B, the front side 101 and the back side 102 are indicated, and reference to them in the following description may be helpful. It should be understood that the front side 101 and the back side 102 may generally refer to the corresponding surfaces of the overall integrated device structure, of the respective structures within the integrated device structure (e.g., the source regions, the drain regions, and the gate structures), of the substrate, and of combinations thereof. It will also be understood that the techniques described herein can be applied to any configuration of semiconductor devices, whether a single device layer (e.g., only the lower device region 104, as shown in FIG. 1B), two device layers (e.g., both device regions 104 and 108, as shown in FIG. 1A), or more. In examples having two or more device layers, any number of integration schemes can be used to fabricate the stacked configuration. In some examples, the stacked device layers can be formed through fabrication of a single fin structure that includes the isolation region 106 between the upper device region 108 and the lower device region 104 (e.g., as shown in the examples of FIGS. 2A-F). In other examples, the stacked device layers may be formed by separately fabricating the device layers, with the separately fabricated device layers then stacked and bonded together using a bonding material to provide a monolithic structure. In such cases, note that the bonding material (e.g., silicon nitride, silicon oxide (SiOx)) may further serve as the isolation region 106.

As can be further seen, a contact 138 is formed on the back surface of the source/drain region 124A, and the etch-selective material 140 is then formed. In some examples, the etch-selective material 140 may be a dielectric material, but as will be apparent from the following description, this is not required.
As can be further seen in this particular exemplary embodiment, the backside interconnect 128 is formed to contact the backside of the middle source/drain region 124A through the corresponding contact 138. Note that the backside interconnect 128 may overlap adjacent regions (such as the overlap region 144 shown in FIGS. 1A-B). To that end, the width W2 is large enough that the conductive material of the interconnect 128 contacts the spacer 126 and the etch-selective material 136. However, because the spacer 126 and the etch-selective material 136 are dielectric materials, the interconnect 128 does not short to those adjacent structures or to other adjacent conductive structures covered by the spacer 126 and the etch-selective material 136. Thus, although the backside interconnect 128 is formed of a conductive material that would otherwise form an electrical short with adjacent conductive materials (e.g., source regions, drain regions, interconnects/contacts, or other conductive structures), the techniques described herein avoid such shorting, according to some embodiments. In addition, for a given etchant, the etch-selective material 136 has etch selectivity relative to the etch-selective material 140, and for the respective etchants of both materials 136 and 140, the spacer 126 has etch selectivity relative to both materials 136 and 140, as will be further explained in turn with reference to the various exemplary embodiments shown in FIGS. 3A to 9D.

As will be understood, an integrated circuit as shown in FIGS. 1A-1B may include other features as well. For example, the structure may further include interconnect features and layers. For instance, in a stacked configuration such as that shown in FIG. 1A, a first vertical interconnect feature may be provided that connects a given upper source or drain region 124B to a corresponding lower source or drain region 124A. Likewise, as previously described, the structure may include a front contact region 105, such as local contacts and interconnects, and one or more metallization layers formed over the local contacts/interconnects. Such metallization layers, shown in dashed lines generally above the front side 101, are sometimes referred to as back-end-of-line or so-called BEOL structures (not to be confused with the backside contact and interconnect structures). The BEOL structure may be distinct from, or may include, the local contacts and interconnects directly above the front side 101, and may include any number of distinct interconnect/metallization layers (e.g., M0 to MN), including, for example, nine to twelve such distinct layers in some such embodiments. Such interconnect features and layers can be provided, for example, using standard photolithography and masking operations, and standard deposition (e.g., CVD, ALD, etc.). Another feature that may be included is a partition wall structure disposed between two fin structures to help electrically isolate them from one another. In some such cases, the partition wall may include an inner conductor encased in an insulating material, where the inner conductor may be used, for example, to route power and signals. Note also that although the fin structures are shown in an idealized state (e.g., perfectly vertical sidewalls and perfectly horizontal tops and bottoms), any such geometry may be rounded or tapered, or otherwise have a non-ideal shape. For example, the shape of the fin structure may be trapezoidal, or hourglass-shaped, or some other shape, as a result of the forming processes.
As will be understood, expressions used herein such as "channel region" or "channel structure" or "active semiconductor channel structure" or "source region" or "source structure" or "drain region" or "drain structure" merely refer to particular locations of a transistor structure, and are not intended to imply that the transistor itself is currently electrically biased or otherwise in a conductive state in which carriers are moving in the channel region, as will be particularly apparent. For example, a given transistor need not be connected (directly or indirectly) to any power source whatsoever in order to have a channel region or channel structure, or source and drain regions or structures. Note further that the semiconductor materials making up fins, nanowires, nanoribbons, nanosheets, channel regions or structures, source regions or structures, or drain regions or structures may be referred to herein as a body composed of, or a body including, one or more semiconductor materials. Similarly, the insulator material making up an insulating structure or region (such as a shallow trench isolation (STI) layer or structure, a dielectric layer or structure, an interlayer dielectric (ILD) structure, a gate dielectric, a gate spacer, or a dielectric capping layer) may be referred to herein as a body composed of, or a body including, one or more insulator materials. Likewise, the conductive material making up a conductive structure or region (such as a via structure, a conductive line, a conductive layer or structure, a conductive plug, or a conductive feature) may be referred to herein as a body composed of, or a body including, one or more conductive materials.

Note that the use of "source/drain" herein is simply intended to refer to a source region or a drain region or both a source region and a drain region. To that end, unless otherwise stated, the forward slash ("/") as used herein means "and/or", and is not intended to imply any particular structural limitation or arrangement with respect to source and drain regions, or any other materials or features listed herein in conjunction with a forward slash.

As used herein, materials that are "compositionally different" or "compositionally distinct" are two materials having different chemical compositions. This compositional difference may be, for example, by virtue of an element that is present in one material but not the other (e.g., SiGe is compositionally different from silicon), or by virtue of one material having all the same elements as a second material but with at least one of those elements intentionally provided at a different concentration in one material relative to the other (e.g., SiGe having 70 atomic percent germanium is compositionally different from SiGe having 25 atomic percent germanium). In addition to such chemical composition diversity, the materials may also have distinct dopants (e.g., gallium and magnesium) or the same dopants but at differing concentrations. In still other embodiments, compositionally different materials may further refer to two materials having different crystallographic orientations.
For example, (110) silicon is compositionally distinct or different from (100) silicon. Use of the techniques and structures provided herein may be detectable using tools such as: electron microscopy, including scanning/transmission electron microscopy (SEM/TEM), scanning transmission electron microscopy (STEM), nanobeam electron diffraction (NBD or NBED), and reflection electron microscopy (REM); composition mapping; x-ray crystallography or diffraction (XRD); energy-dispersive x-ray spectroscopy (EDX); secondary ion mass spectrometry (SIMS); time-of-flight SIMS (ToF-SIMS); atom probe imaging or tomography; local electrode atom probe (LEAP) techniques; three-dimensional tomography; or high-resolution physical or chemical analysis, to name a few suitable exemplary analytical tools. In particular, in some embodiments, such tools may indicate the presence of a backside interconnect structure as variously described herein, in which the backside interconnect structure utilizes etch-selective materials and spacers. For example, SEM/TEM imaging can be used to show a cross section cut perpendicular to the gate structure, the cross section showing backside interconnect features, as variously described herein, below the gate and/or source/drain regions of a device. For instance, according to some embodiments, such cross-sectional images may reveal the presence/shape of spacer material between the gate and source/drain backside interconnect features (whether those features include metal or insulator or both). In some embodiments, such cross-sectional images may reveal the presence of different materials beneath the source, drain, and channel regions (e.g., according to some embodiments, some locations will have remnants of etch-selective material, some locations will have metal, and some locations may have both). In some embodiments, the cross-sectional images may further reveal bottom metal contacts conformal to faceted epitaxial source and/or drain regions. Numerous other configurations and variations will be apparent in light of the present disclosure.

Note that designations such as "above" or "below" or "top" or "bottom" or "top side" or "bottom side" or "front" or "back" or "top surface" or "bottom surface" or "topmost surface" or "bottommost surface" are not intended to necessarily implicate a fixed orientation of the integrated circuit structures provided herein or to otherwise limit the present disclosure. Rather, such terms are simply used in a relative sense to consistently describe a structure as shown or described herein. As will be appreciated, the structures provided herein can be rotated or oriented in any fashion such that, for example, a top surface becomes a left-facing sidewall or a bottom surface, and a bottom surface becomes, for example, a right-facing sidewall or a top surface. Any such structures having an alternative orientation relative to the orientation shown herein still belong to embodiments of the present disclosure.

Stacked source and drain regions

FIGS. 2A to 2F show cross sections of exemplary fin structures according to embodiments of the present disclosure, where these exemplary fin structures can be used in a stacked transistor configuration (such as the example shown in FIG. 1A), with upper and lower regions of the same fin structure used simultaneously for separate transistor devices.
Also, as previously explained, any upper or lower half of the structures shown in FIGS. 2A-F can be used for the fin structure shown in FIG. 1B. The cross sections are taken perpendicular to the fin structure. Note that the shading of features/layers is provided merely to visually distinguish those features/layers. To that end, note also that the materials shaded in FIGS. 2A-F are not intended to correspond to the materials shaded in FIGS. 1A-B and 3A-9D.

As can be seen, each fin structure generally includes an upper fin region and a lower fin region. As can be further seen, each of these upper and lower fin regions may include a fin, or one or more nanowires (separated by sacrificial material), or one or more nanoribbons or nanosheets (separated by sacrificial material). Between the upper fin region and the lower fin region is an isolation region (e.g., isolation region 106), generally indicated by the dashed lines. The fin structures can have any number of geometries, but in some exemplary cases are 50 nm to 250 nm tall (e.g., 55 nm to 100 nm) and 5 nm to 25 nm wide (e.g., 10 nm to 15 nm). The isolation region between the upper and lower fin regions can be achieved, for example, by incorporating an intervening insulating layer, or doping near the dashed-line region, into one or both of the upper and lower fin regions. Standard processing can be used to form the fin structures, for example, blanket deposition of the various layers making up the structure, followed by patterning and etching into individual fin structures.

FIG. 2A shows a fin structure having an upper fin region including a first semiconductor material (diagonal hatching) and a lower fin region including a second semiconductor material compositionally different from the first semiconductor material. FIG. 2B shows a fin structure having an upper fin region and a lower fin region, the upper fin region including four nanowires that include the first semiconductor material (diagonal hatching), and the lower fin region including a second semiconductor material compositionally different from the first semiconductor material (vertical hatching). Note that the nanowires are located in the fin structure so as to be grouped toward the top of the upper fin region. FIG. 2C shows a fin structure having an upper fin region including a first semiconductor material and a lower fin region including four nanowires, the four nanowires having a second semiconductor material compositionally different from the first semiconductor material (diagonal hatching). In this exemplary case, note that the nanowires are located in the fin structure so as to be grouped toward the top of the lower fin region.

FIG. 2D shows a fin structure having an upper fin region and a lower fin region, the upper fin region including three nanoribbons that include the first semiconductor material (diagonal hatching), and the lower fin region including two nanowires that include a second semiconductor material compositionally different from the first semiconductor material (vertical hatching). In this exemplary case, note that the nanoribbons are located in the fin structure so as to be grouped toward the top of the upper fin region, and the nanowires are located in the fin structure so as to be grouped toward the top of the lower fin region.
FIG. 2E shows a fin structure having an upper fin region including a first semiconductor material and a lower fin region including four nanoribbons or nanosheets, the four nanoribbons or nanosheets including a second semiconductor material compositionally different from the first semiconductor material (diagonal hatching). In this exemplary case, note that the nanoribbons or nanosheets are located in the fin structure so as to be grouped toward the bottom of the lower fin region. Another exemplary embodiment may be the case in which the upper and lower fin regions are the same material (a continuous fin composed of the same semiconductor material) or are similarly configured, for example, the lower region of FIG. 2A and the upper region of FIG. 2E in one such exemplary case, or the upper region of FIG. 2B and the lower region of FIG. 2C in another exemplary case.

FIG. 2F shows a fin pair including two fin structures. Each fin structure can be configured in any number of ways, as shown in the examples of FIGS. 2A-2E. In this exemplary case, the two fin structures are similarly configured, each having an upper region including a first semiconductor material and a lower region including two nanowires that include a second semiconductor material compositionally different from the first semiconductor material (diagonal hatching). Note that the fins are tapered. Note also the curved trench bottom between the fins and the rounded tops of the fin structures. Such tapering and rounding can result from the fin-forming process and can vary from one fin to another. For example, in some embodiments, the fin shapes of the two fins may differ due to processing and layout patterning effects, such that the taper on the outer side of the left fin differs from the taper on the outer side of the right fin.

Note also that the exemplary fin structures shown each include an upper fin portion having opposing sidewalls and a lower fin portion having opposing sidewalls, with the sidewalls of the upper fin portion being collinear with the sidewalls of the lower fin portion. According to some embodiments provided herein, this is one exemplary indication that a common or single fin structure is being used for top and bottom transistor devices arranged in a stacked configuration. As will be appreciated, other fin structure configurations may have a bowed or hourglass profile, but will generally still provide some degree of collinearity or self-alignment between the upper and lower fin portions. In other embodiments, for example, embodiments in which stacked device layers are formed by separately fabricating multiple device layers and then stacking and bonding those device layers together using a bonding material, note that there may be no such collinearity between the upper and lower sidewalls.

It should be understood that the techniques described herein can be applied to a variety of different transistor devices, including, but not limited to, various field-effect transistors (FETs), such as metal-oxide-semiconductor FETs (MOSFETs), tunnel FETs (TFETs), and Fermi-filter FETs (FFFETs) (also known as tunnel source MOSFETs), to provide a few examples. For example, the techniques can be used to benefit n-channel MOSFET (NMOS) devices.
According to some embodiments, an n-channel MOSFET (NMOS) device may include an n-p-n or n-i-n source-channel-drain scheme, where "n" indicates n-type doped semiconductor material, "p" indicates p-type doped semiconductor material, and "i" indicates intrinsic/undoped semiconductor material (which may also include nominally undoped semiconductor material, e.g., including a dopant concentration of less than 1E16 atoms per cubic cm). In another example, the techniques can be used to benefit p-channel MOSFET (PMOS) devices, which, according to some embodiments, may include p-n-p or p-i-p source-channel-drain schemes. In yet another example, the techniques can be used to benefit TFET devices, which, according to some embodiments, may include p-i-n or n-i-p source-channel-drain schemes. In still another example, the techniques can be used to benefit FFFET devices, which, according to some embodiments, may include np-i-p (or np-n-p) or pn-i-n (or pn-p-n) source-channel-drain schemes.

Additionally, in some embodiments, the techniques can be used to benefit transistors including planar and/or non-planar configurations, where the non-planar configurations may include fin or FinFET configurations (e.g., dual-gate or tri-gate), gate-all-around configurations (e.g., nanowires), or some combination thereof (e.g., beaded-fin configurations), to provide a few examples. In addition, the techniques can be used to benefit complementary transistor circuits, such as complementary MOS (CMOS) circuits, where the techniques can be used to benefit one or more of the n-channel and/or p-channel transistors making up the CMOS circuit. As described herein, some examples include stacked CMOS circuits in which the n-channel and p-channel devices are located in separate layers along the height of a fin structure, while other examples include non-stacked CMOS circuits in which the n-channel and p-channel devices are located in separate regions of a single device layer.

Method and architecture

FIGS. 3A-9D show various cross sections selected to illustrate the step-by-step fabrication of an integrated circuit device in accordance with some embodiments of the present disclosure. As will be apparent, the exemplary structures shown are for a non-stacked transistor configuration, but the methodology is equally applicable to stacked configurations, as previously described. Because bottom contact formation does not generally implicate the top of a stacked configuration, the description herein, though focused on the non-stacked transistor configuration, will clearly apply to stacked configurations as well.

Turning to FIG. 3A, a cross-sectional view of a partially fabricated exemplary integrated circuit structure according to an embodiment is shown. The cross section is taken perpendicular to the fin structure and through the gate structure to show the channel region. The device structure includes a device region 104. As will be understood, the device region 104 may be the only device layer, or the lower device layer of a stacked configuration. At this stage of fabrication, the device structure includes the substrate 112, nanowires 116, a gate dielectric structure 122, and a gate electrode 120. The previous relevant discussion with respect to each of these features applies equally here.
In addition, the integrated circuit structure further includes a partition wall structure 324 separating the two fin structures shown, as well as a shallow trench isolation (STI) 340 and sub-channel regions 344.

Two fin structures are shown, one on the left and one on the right. The fin structures have been processed to include nanowires 116 in the channel region. In addition, the partition wall structure 324 is between the two fin structures. The partition wall structure 324 (if present) can be implemented with standard processing and have any number of standard or proprietary configurations. In some such exemplary embodiments, for instance, the partition wall structure 324 includes a conductor 328 (e.g., tungsten, copper, silver, aluminum, etc.) that is encapsulated by one or more layers of insulating material 332 (e.g., silicon nitride, or a bi-layer structure including a first layer of silicon dioxide on the conductor 328 and a second layer of silicon nitride on the first layer). The bottom of the partition wall structure 324 is within the substrate 112 and is at least partially surrounded by a shallow trench isolation layer 340 (e.g., silicon dioxide). As will be understood, a variety of insulator materials and structures can be used for the STI 340 and the insulating material 332.

Depending on the given application, the configuration of the sub-channel (or sub-fin) region 344 can vary. In some cases, the sub-channel region 344 is part of the substrate 112, such as a fin stub just below the lowest nanowire 116 (e.g., a silicon fin stub extending up from the substrate), while in other cases the sub-channel region 344 is an insulator region (e.g., silicon dioxide, with or without a silicon nitride liner) that is provided or otherwise formed in the substrate 112 and configured to reduce off-state current leakage. As will be discussed in turn, the material of the sub-channel region 344 can be removed with an etch that does not remove (or removes at a much slower rate, e.g., two times, three times, five times, ten times, or more times slower) the etch-selective material 356 and the other materials exposed during removal of the sub-channel region 344, as will be described next.

Turning to FIG. 3B, the cross section shown is taken perpendicular to the fin structure and through the source/drain regions. The source/drain regions 124 can be formed using standard techniques such as those previously described in the context of FIGS. 1A-B, for example, using an etch-and-replace process in which epitaxial source/drain regions are provided. However, according to an embodiment of the present disclosure, the source/drain trenches are etched to extend deeper down into the substrate, so that the etch-selective material 356 can be deposited into the trenches before the source/drain material is deposited. Standard source/drain recess etch schemes (e.g., wet and/or dry etching) can be used. The depth of the source/drain trenches can be deeper than usual to accommodate the added etch-selective material 356. For example, in this exemplary case, the source/drain trenches extend beyond the bottom of the partition wall structure 324, but in other embodiments they extend to the same depth as the partition wall structure 324, or to just above the depth of the partition wall structure 324.
In any such case, and as will be appreciated in light of the present disclosure, note that the backside of the substrate 112 or structure may later be polished or otherwise planarized (during back contact formation) to expose the backside of the etch-selective material 356. Note also that although the source/drain regions 124 are shown with a rectangular cross section, other cross-sectional shapes are possible, such as faceted top and/or bottom surfaces, as previously explained. The nanowires 116 are shown with dashed lines to indicate their laterally adjacent location, which from this perspective is covered by the source/drain regions 124.

Note that even though not shown, front contacts can be formed on the source/drain regions 124 (if so desired), even where back contacts will be provided. In such cases, front contact formation may include, for example, depositing a dielectric layer over the source/drain regions 124, and then forming contact trenches and depositing contact material into those trenches. Such front contacts can be used to make desired front-side connections, or may simply remain unconnected. In some embodiments, the front contacts may be connected by way of one or more interconnect layers (sometimes called metallization layers, or BEOL) including additional metal and dielectric layers deposited over the source/drain regions 124. Such front contact and interconnect/metallization layers can be provided first, with the resulting structure then flipped so that backside contact processing can be carried out. In such cases, note that the front side can be passivated or otherwise protected prior to flipping (for example, the front side can be bonded to a temporary substrate via a bonding oxide that can subsequently be removed).

A variety of etch-selective materials can be used. In general, oxides and nitrides tend to be etch-selective with respect to one another (e.g., a first etchant that etches nitride will not etch oxide, and a second etchant that etches oxide will not etch nitride). In some examples, the etch-selective material 356 may be composed of titanium nitride (TiN), but other materials that provide the desired etch selectivity may be used, as will be understood in light of this disclosure. Exemplary etchants that can be used to remove the etch-selective material 356 (TiN) include, but are not limited to, a mixture of hot (e.g., 40° C. or higher) sulfuric acid (H2SO4) and peroxide (H2O2). More generally, it will be understood that the etch-selective material 356 may be configured to be removable by a corresponding etch that does not remove the compositionally different etch-selective materials described in more detail below. Chemical mechanical polishing (CMP) may be used to polish or otherwise planarize any excess material 356 down to the surface of the substrate 112.

FIG. 3C shows another cross-sectional view of the exemplary structure shown in FIGS. 3A-3B. The cross section is taken perpendicular to the gate structure and through the channel region of one of the fin structures (rotated 90 degrees from the views shown in FIGS. 3A-B). As shown in this particular exemplary embodiment, the etch-selective material 356 is within the substrate 112 and below the source/drain regions 124. Note that the etch-selective material 356 is effectively self-aligned to the sub-channel region 344 below the nanowires 116 that make up the channel region, because the etch-selective material 356 is provided in the extended source/drain trenches etched down adjacent to the channel region (and sub-channel region 344) during the source/drain formation process.
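For purposes of illustration only, the exemplary material-to-etchant pairings noted in this description (TiN versus oxide- and nitride-based materials) can be summarized as a simple lookup. In the following Python sketch, the table merely restates the exemplary chemistries named herein, while the chooser logic is an assumed simplification; actual chemistry selection involves many additional considerations.

```python
# Illustrative only: the table restates exemplary etchants named in this
# description; the chooser logic is an assumed simplification.

ETCH_CHEMISTRIES = {
    "TiN":  ["hot H2SO4 + H2O2"],                    # etch-selective material 356
    "SiOx": ["CF4/CH2F2 plasma", "C4F6/Ar plasma"],  # oxide-based fills
    "SiN":  ["CH3F/He/O2 plasma"],                   # nitride-based fills/liners
}

def choose_chemistry(target, co_exposed):
    """Pick a chemistry listed for `target` that is not listed for any
    co-exposed material (i.e., one expected to spare the other materials)."""
    spared = {c for m in co_exposed for c in ETCH_CHEMISTRIES.get(m, [])}
    for chem in ETCH_CHEMISTRIES[target]:
        if chem not in spared:
            return chem
    raise ValueError("no selective chemistry for %s vs %s" % (target, co_exposed))

# Example: remove the TiN (material 356) while oxide fill and nitride liner remain.
print(choose_chemistry("TiN", ["SiOx", "SiN"]))  # hot H2SO4 + H2O2
```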
Turning now to FIG. 4A, the process can continue with removing the sub-channel regions 344 (e.g., silicon or SiOx) and any associated liner (e.g., silicon nitride), thereby forming cavities or trenches 451 in the spaces previously occupied by those elements. The cross section shown is taken perpendicular to the fin structure and through the channel region. In this case, the cavities formed are in the substrate 112, below the gate structures and between adjacent portions or blocks of the etch-selective material 356. Accordingly, the etchant used to remove the sub-channel regions 344 may be selective to the etch-selective material 356 (i.e., it removes the material of the sub-channel region 344, but not the etch-selective material 356). In some exemplary cases, etchants that can be used to remove silicon-oxide-based material of the sub-channel region 344 include, for example, CF4/CH2F2 plasma or C4F6/argon (Ar) plasma. Etchants to remove silicon-nitride-based structures (e.g., a sub-channel region 344 liner), if present, include, for example, CH3F/He/O2 plasma. FIG. 4B shows a cross section taken perpendicular to the fin structure and through the source/drain regions, and shows the use of the selective etch to remove the sub-channel regions 344 while leaving the etch-selective material 356 (composed of TiN in this example) under the source/drain regions 124 largely in place. FIG. 4C shows the integrated circuit structure of FIGS. 4A-B, but rotated 90 degrees, with the cross section shown taken perpendicular to the gate structure and through the channel region.

As shown in FIG. 5A, the process continues with forming conformal dielectric spacers 126 on the sidewalls of the trenches 451. The cross section shown is taken perpendicular to the fin structure and through the channel region. As will be understood in light of this disclosure, the dielectric spacers 126 may provide electrical isolation between adjacent conductive structures. In some examples, the conformal dielectric spacers 126 may be formed of the same materials used for the gate spacers 123A-B. Exemplary materials include, for example, carbon-doped silicon oxynitride and other silicon-based dielectric materials (e.g., SiN, SiOx). These materials can be formed by any conformal deposition process, including, but not limited to, atomic layer deposition (ALD), chemical vapor deposition (CVD), and the like. As shown, a directional etch (e.g., an anisotropic dry plasma etch) can be used to remove the conformally deposited spacer material 126 formed on each exposed horizontal surface (e.g., at the bottom of the gate electrode 120), while retaining the vertical sidewall spacers 126. The cross section shown in FIG. 5B is taken perpendicular to the fin structure and through the source/drain regions, and shows the spacers 126 formed in place, with the etch-selective material 356 (composed of TiN in this example) remaining below the source/drain regions 124. FIG. 5C shows the integrated circuit structure of FIGS. 5A-B, but rotated 90 degrees, with the cross section shown taken perpendicular to the gate structure and through the channel region.

FIG. 6A shows the resulting structure after the spacered trenches 451 are filled with the etch-selective material 136.
The cross section shown is taken perpendicular to the fin structure and through the channel region. As previously explained, the etch-selective material 136 can be selectively etched relative to the etch-selective material 356, and vice versa. In some specific examples, the etch-selective material 356 is titanium nitride, and the etch-selective material 136 is a dielectric material such as SiOx, silicon nitride, silicon carbide, a metal oxide, or silicon oxynitride. Exemplary etchants for etching the etch-selective material 136 include, for example: CF4/CH2F2 plasma or C4F6/argon plasma for silicon-oxide-based materials, and CH3F/He/O2 plasma for silicon-nitride-based materials. Note that any excess etch-selective material 136 may be planarized down to the bottom surface of the structure (e.g., so as to be coplanar with the surface of the etch-selective material 356). Any suitable deposition technique (e.g., ALD and/or plasma-assisted CVD) can be used. The cross section shown in FIG. 6B is taken perpendicular to the fin structure and through the source/drain regions, and shows the etch-selective material 136 formed in place, with the etch-selective material 356 (composed of TiN in this example) left under the source/drain regions 124. FIG. 6C shows the integrated circuit structure of FIGS. 6A-B, but rotated 90 degrees, with the cross section shown taken perpendicular to the gate structure and through the channel region.

FIGS. 6D and 6E show two views of an alternative process according to other embodiments, in which a conductor buried in the partition wall structure can be used to form a back contact. Turning first to FIG. 6D, a partition wall structure 324 is shown extending between, and parallel to, the fin structures (and nanowires 116). As will be noted, the cross section shown is taken perpendicular to the fin structure and through the channel region. In this exemplary case, the backside of the substrate 112 may be patterned and etched to expose the conductor 328 buried in the insulator 332 (for example, using a process similar to that used to form the trenches 451). The resulting trench can then be filled with an etch-selective material 641, which may be the same as the etch-selective material 136, for example. As will be understood from the following description, the etch-selective material 641 can then be processed using the same techniques used to process the other backside regions including the etch-selective material 136 (or 641), so as to form an electrical contact between the buried conductor 328 and an interconnect (e.g., as shown in FIG. 9B). FIG. 6F shows a horizontal cross section taken along the axis indicated by the dashed line 6F-6F in FIG. 6E, according to some such embodiments. The various etch-selective materials facilitate the back contact formation process that follows. Variations will be apparent.

Turning now to FIGS. 7A and 7B, the etch-selective material 356 is removed using, for example, one of the previously described etch chemistries, thereby forming trenches 761 and exposing the bottom surfaces of the source/drain regions 124. Recall that the bottom surfaces can be faceted, although this need not be the case in all embodiments. As will be understood, the cross section shown in FIG. 7A is taken perpendicular to the fin structure and through the source/drain regions 124.
As will be further understood, the use of the corresponding etching chemistry to remove the etch-selective material 356 leaves the etch-selective material 136 and the spacers 126 largely intact, as further shown in FIG. 7B, which shows the integrated circuit structure of FIG. 7A, but rotated 90 degrees, where the cross section shown is taken perpendicular to the gate structure and through the channel region.

Continuing with FIGS. 8A-B (having the same cross sections as FIGS. 7A-B, respectively), source/drain contacts may then be formed in the trenches 761 on the bottom surfaces of the source/drain regions 124. For example, a conformal S/D contact layer 138 is deposited in the trenches 761, and any excess deposit can be etched back. Example materials for the conformal S/D contact layer 138 include, but are not limited to, tungsten, titanium, cobalt, gold, aluminum, silver, copper, and the like. Any number of techniques can be used to accomplish the conformal deposition of the contact layer 138, including but not limited to ALD, CVD, plasma-assisted CVD, and the like. As previously mentioned, the S/D regions 124 need not have a rectangular cross section and may be faceted, such as in the example shown in FIG. 8A-1. In this case, the source/drain contact layer 138 conforms to the contour of the exposed surface of the source/drain region 124. An example of one such embodiment is further shown in FIG. 8A-2. In some cases, the contact layer 138 may be a multi-component structure. For example, a silicide and/or germanide layer (shown as layer 863 in FIG. 8A-2) may be formed between the conformal S/D metal layer 862 and the corresponding source/drain region 124. Depending on the compositions of the source/drain regions 124 and contacts 138, any number of layers may be present at the metal/semiconductor interface. For example, in one specific exemplary case, if the main component of the source/drain region 124 is silicon and the contact layer 138 includes titanium, the layer 863 may include titanium silicide.

After the conformal S/D contact layer 138 is formed, an etch-selective material 140 is deposited to fill the remaining portion of the trenches 761, as further shown in FIGS. 8A-8B. As will be appreciated, the etch-selective material 140 is a dielectric material that can resist an etching chemistry capable of removing the etch-selective material 136. In some cases, exemplary materials for the etch-selective material 140 include, but are not limited to, SiOx, silicon nitride, silicon carbide, metal oxide, and silicon oxynitride. Therefore, in an exemplary embodiment, the etch-selective material 136 is a silicon oxide-based material etched by CF4/CH2F2 plasma or C4F6/argon (Ar) plasma, the etch-selective material 140 is a silicon nitride-based material etched by CH3F/He/O2 plasma, and the spacer 126 is carbon-doped silicon oxynitride. Given that nitrides, oxides, and carbides tend to have etch selectivity with respect to one another, other variations will be obvious.
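To summarize the selectivity scheme just described, the exemplary pairings of material class to etch chemistry can be captured in a small table. The following Python sketch is purely illustrative (the names are hypothetical and not part of this disclosure) and simply restates the chemistries named above:

```python
# Exemplary pairings from this embodiment; each chemistry removes its own
# material class at a much faster rate than the other two.
ETCH_CHEMISTRY = {
    "silicon-oxide-based (etch-selective material 136)": ("CF4/CH2F2 plasma", "C4F6/Ar plasma"),
    "silicon-nitride-based (etch-selective material 140)": ("CH3F/He/O2 plasma",),
    "carbon-doped silicon oxynitride (spacer 126)": (),  # intended to survive both etches
}

def etchants_for(material: str) -> tuple:
    """Return chemistries that remove `material` while sparing the other two."""
    return ETCH_CHEMISTRY[material]

print(etchants_for("silicon-oxide-based (etch-selective material 136)"))
# ('CF4/CH2F2 plasma', 'C4F6/Ar plasma')
```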
FIGS. 9A-D collectively show the formation of interconnects 968A and 968B (e.g., vias and metal lines) according to an embodiment of the present disclosure, thereby forming the back contact region 103. As will be understood, and as described above, because the etch-selective materials 136 and 641 and the etch-selective material 140 are dielectric materials in this exemplary embodiment, the interconnects 968A-B can be wider than the features they contact without causing the shorting that such widths would generally cause, owing to the electrical isolation provided by the etch-selective materials 136 and/or 140 and/or 641.

Turning first to FIG. 9A, according to an exemplary embodiment, an interlayer dielectric (ILD) layer 966 is formed on the back surface of the structure and then patterned. A first etching scheme that is selective to the etch-selective material 140 and the spacers 126 (meaning that the etch-selective material 136 is etched at a much faster rate than the etch-selective material 140 and the spacers 126) is used to form a first set of interconnect trenches. Any suitable deposition technique (e.g., CVD, etc.) is then used to deposit the interconnects 968A (e.g., vias) into the resulting trenches. As should be understood, the cross section shown in FIG. 9A is taken perpendicular to the fin structure and through the channel region. In the example of FIG. 9A, the backside interconnects 968A enable electrical communication with the gate electrode 120 in the device region 104 and the conductor 328 in the partition wall structure 324. Note that in this exemplary case, only a part of each etch-selective layer 136 is removed (for example, by patterning the back surface with photolithography to expose that part, instead of etching the entire area of the etch-selective layer 136). In such an exemplary case, one advantage is that the lithography can be relatively loosely aligned and does not have to define individual features, as would be necessary when using standard backside processes. Therefore, as seen in this exemplary case, the resulting structure includes some areas where the etch-selective material 136 is replaced with metal (interconnect 968A) and some areas where the etch-selective material 136 remains in the final structure, with no back contact formed in those areas.

Then turning to FIG. 9B, a second etching scheme that is selective to the etch-selective material 136 and the spacers 126 (meaning that the etch-selective material 140 is etched at a much faster rate than the etch-selective material 136 and the spacers 126) is used to form a second set of interconnect trenches. Any suitable deposition technique (e.g., CVD, etc.) is then used to deposit the interconnects 968B (e.g., vias) into the resulting trenches. As should be understood, the cross section shown in FIG. 9B is taken perpendicular to the fin structure and through the source/drain regions. In the example of FIG. 9B, the backside interconnects 968B enable electrical communication with the source/drain regions 124 in the device region 104. Also note that the resulting structure includes some areas where the etch-selective material 140 is replaced with metal (interconnect 968B) and some areas where the etch-selective material 140 remains in the final structure, with no back contact formed in those areas.

FIGS. 9C and 9D further illustrate the structure shown in FIGS. 9A-B. As should be noted, the cross section shown in FIG. 9C is taken perpendicular to the gate structure and through the channel region, and FIG. 9D shows a horizontal cross section taken along the axis indicated by the dashed line 9D-9D in each of FIGS. 9A-C, according to some such embodiments.
As shown in FIG. 9C, the width of the interconnect 929 may be greater than the width of the corresponding feature (the gate electrode, in this exemplary case) to which it is connected. In this exemplary case, note that the entire width of the etch-selective material 136 shown in the depicted cross section is replaced with metal (interconnect 929), without leaving any etch-selective material 136 directly adjacent to the metal. Note also how each of the etch-selective material 136, the etch-selective material 140, and the spacer 126 exists in the same horizontal plane, which is, for example, the horizontal plane of the cross section taken along the axis indicated by the dashed line 9D-9D. The interconnect features 968A-B and 929 can be any suitable conductive material, such as copper, aluminum, silver, gold, tungsten, and the like. They can also include a liner or barrier layer, such as tantalum nitride, which can help prevent electromigration of the conductive material into adjacent dielectric materials.

Exemplary System

FIG. 10 illustrates an exemplary computing system implemented with one or more of the integrated circuit structures disclosed herein, in accordance with some embodiments of the present disclosure. As can be seen, the computing system 1000 houses a motherboard 1002. The motherboard 1002 may include a number of components, including but not limited to a processor 1004 and at least one communication chip 1006, each of which may be physically and electrically coupled to the motherboard 1002 or otherwise integrated therein. As will be appreciated, the motherboard 1002 may be, for example, any printed circuit board, whether a main board, a daughter board mounted on a main board, or the only board of the system 1000.

Depending on its application, the computing system 1000 may include one or more other components, which may or may not be physically and electrically coupled to the motherboard 1002. These other components may include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), a graphics processor, a digital signal processor, an encryption processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (e.g., hard drive, compact disc (CD), digital versatile disc (DVD), and so forth). Any of the components included in the computing system 1000 may include one or more integrated circuit structures or devices configured in accordance with an exemplary embodiment (e.g., stacked or non-stacked CMOS devices including front and back contacts and one or more etch-selective materials, as variously provided herein). In some embodiments, multiple functions may be integrated into one or more chips (for example, note that the communication chip 1006 may be part of, or otherwise integrated into, the processor 1004).

The communication chip 1006 enables wireless communication for the transfer of data to and from the computing system 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, and so forth, that may communicate data through a non-solid medium by using modulated electromagnetic radiation.
The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 1006 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, and any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing system 1000 may include a plurality of communication chips 1006. For example, a first communication chip 1006 may be dedicated to shorter-range wireless communications, such as Wi-Fi and Bluetooth, and a second communication chip 1006 may be dedicated to longer-range wireless communications, such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 1004 of the computing system 1000 includes an integrated circuit die packaged within the processor 1004. In some embodiments, the integrated circuit die of the processor includes onboard circuitry that is implemented with one or more integrated circuit structures or devices configured with backside contacts, as variously described herein. The term "processor" may refer to any device or portion of a device that processes, for example, electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.

The communication chip 1006 may also include an integrated circuit die packaged within the communication chip 1006. In accordance with some such exemplary embodiments, the integrated circuit die of the communication chip includes one or more integrated circuit structures or devices configured with backside contacts, as variously described herein. As will be appreciated in light of this disclosure, note that multi-standard wireless capability may be integrated directly into the processor 1004 (e.g., where functionality of any chips 1006 is integrated into the processor 1004, rather than having separate communication chips). Further note that the processor 1004 may be a chipset having such wireless capability. In short, any number of processors 1004 and/or communication chips 1006 may be used.
Likewise, any one chip or chipset may have multiple functions integrated therein.

In various embodiments, the computing system 1000 may be a laptop, netbook, notebook, smartphone, tablet, personal digital assistant (PDA), ultra-mobile PC, mobile phone, desktop computer, server, printer, scanner, monitor, set-top box, entertainment control unit, digital camera, portable music player, digital video camera, or any other electronic device that processes data or employs one or more integrated circuit structures or devices formed using the disclosed techniques, as variously described herein.

Other Exemplary Embodiments

The following examples pertain to further embodiments, from which numerous permutations and configurations will be apparent.

Example 1 is an integrated circuit structure including: a device region including a transistor, the transistor including a source or drain region and a gate structure; a front contact region above the device region; and a back contact region below the device region, the back contact region including a first dielectric material below the source or drain region of the transistor, a second dielectric material laterally adjacent to the first dielectric material and below the gate structure of the transistor, and a non-conductive spacer laterally between the first dielectric material and the second dielectric material, wherein the first dielectric material and the second dielectric material are selectively etchable with respect to each other and the non-conductive spacer.

Example 2 includes the subject matter of Example 1, wherein the back contact region further includes an interconnect feature that passes through the first dielectric material and contacts the bottom surface of the source or drain region.

Example 3 includes the subject matter of Example 2, and further includes a contact structure on the bottom surface of the source or drain region, the contact structure including metal, wherein the interconnect feature directly contacts the contact structure.

Example 4 includes the subject matter of Example 3, wherein the contact structure is conformal to the bottom surface of the source or drain region.

Example 5 includes the subject matter of any of the preceding examples, wherein the back contact region further includes an interconnect feature that passes through the second dielectric material and contacts the gate structure.

Example 6 includes the subject matter of Example 5, wherein the gate structure includes a high-k dielectric and a gate electrode, and the interconnect feature is in contact with the gate electrode.

Example 7 includes the subject matter of any of the preceding examples, wherein the device region further includes a partition wall structure including an insulating material and a conductor within the insulating material.

Example 8 includes the subject matter of Example 7, wherein one of the first dielectric material or the second dielectric material is also under the partition wall structure.

Example 9 includes the subject matter of Example 8, wherein the back contact region further includes an interconnect feature that passes through the first or second dielectric material under the partition wall structure and contacts the conductor.

Example 10 includes the subject matter of any of the preceding examples, wherein the first dielectric material includes a
nitride, the second dielectric material includes an oxide, and the non-conductive spacer includes an oxynitride.

Example 11 includes the subject matter of any of the preceding examples, wherein the transistor further includes one or more nanowires or nanoribbons or nanosheets, and the gate structure wraps around those one or more nanowires or nanoribbons or nanosheets.

Example 12 includes the subject matter of any of the preceding examples, wherein the transistor further includes a fin structure, and the gate structure is on the top wall and sidewalls of the fin structure.

Example 13 includes the subject matter of any of the preceding examples, wherein the transistor is a first transistor, the device region further includes a second transistor, and the first and second transistors are arranged in a stacked configuration with respect to one another, such that the first transistor is a bottom transistor and the second transistor is above the first transistor.

Example 14 includes the subject matter of Example 13, wherein the second transistor is directly connected to the front contact region.

Example 15 includes the subject matter of any of the preceding examples, wherein the transistor is a first transistor and the non-conductive spacer is a first non-conductive spacer located laterally between a first portion of the first dielectric material and a first portion of the second dielectric material, the device region further includes a second transistor, the second transistor including a source or drain region and a gate structure, wherein a second portion of the first dielectric material is below the source or drain region of the second transistor, a second portion of the second dielectric material is laterally adjacent to the second portion of the first dielectric material and is below the gate structure of the second transistor, and a second non-conductive spacer is located laterally between the second portions of the first and second dielectric materials.

Example 16 is a printed circuit board including the integrated circuit structure of any of the preceding examples.

Example 17 is an electronic system including the integrated circuit of any one of Examples 1 to 15 or the printed circuit board of Example 16.

Example 18 is an integrated circuit structure including: a device region including a first transistor and a second transistor, each of the first and second transistors including a source or drain region and a gate structure; a front contact region above the device region; a back contact region below the device region, the back contact region including a first dielectric material under the source or drain regions of both the first and second transistors, a second dielectric material under the gate structures of both the first and second transistors, and a non-conductive spacer laterally between the first dielectric material and the second dielectric material, wherein each of the first dielectric material, the second dielectric material, and the non-conductive spacer is present in a same horizontal plane; a first interconnect feature that passes through the first dielectric material and contacts the bottom surface of the source or drain region of the first transistor; and a second interconnect feature that passes through the second dielectric material and contacts the gate structure of the second transistor.

Example 19 includes the subject matter of Example 18, and further
includes a contact structure on the bottom surface of the source or drain region of the first transistor, the contact structure including metal, wherein the first interconnect feature directly contacts the contact structure.

Example 20 includes the subject matter of Example 19, wherein the contact structure is conformal to the bottom surface of the source or drain region of the first transistor.

Example 21 includes the subject matter of any one of Examples 18 to 20, wherein the gate structure of the second transistor includes a high-k dielectric and a gate electrode, and the second interconnect feature is in contact with the gate electrode.

Example 22 includes the subject matter of any one of Examples 18 to 21, wherein the device region further includes a partition wall structure including an insulating material and a conductor within the insulating material.

Example 23 includes the subject matter of Example 22, wherein one of the first dielectric material or the second dielectric material is also under the partition wall structure.

Example 24 includes the subject matter of Example 23, wherein the back contact region further includes a third interconnect feature that passes through the first or second dielectric material under the partition wall structure and contacts the conductor.

Example 25 includes the subject matter of any one of Examples 18 to 24, wherein the first dielectric material includes a nitride, the second dielectric material includes an oxide, and the non-conductive spacer includes an oxynitride.

Example 26 includes the subject matter of any one of Examples 18 to 25, wherein one or both of the first and second transistors further include one or more nanowires or nanoribbons or nanosheets, and the corresponding gate structure wraps around those one or more nanowires or nanoribbons or nanosheets.

Example 27 includes the subject matter of any one of Examples 18 to 26, wherein one or both of the first and second transistors further include a fin structure, and the corresponding gate structure is on the top wall and sidewalls of the fin structure.

Example 28 includes the subject matter of any one of Examples 18 to 27, wherein the device region includes a lower device region and an upper device region, and the first and second transistors are part of the lower device region.

Example 29 is a printed circuit board including the integrated circuit structure of any one of Examples 18 to 28.

Example 30 is an electronic system including the integrated circuit of any one of Examples 18 to 28 or the printed circuit board of Example 29.

Example 31 is an integrated circuit structure including: a device region including a first transistor, a second transistor, and a third transistor, each of the first, second, and third transistors including a source or drain region and a gate structure; a front contact region above the device region, the front contact region including a front interconnect feature that is directly connected to the source or drain region or the gate structure of the third transistor; and a back contact region below the device region, the back contact region including a first dielectric material under the source or drain regions of both the first and second transistors, a second dielectric material under the gate structures of both the first and second transistors, and a non-conductive spacer laterally between the first dielectric material and the second dielectric material, wherein one of the first
dielectric material, the second dielectric material, and the non-conductive spacer is a nitride, one of the first dielectric material, the second dielectric material, and the non-conductive spacer is an oxide, and one of the first dielectric material, the second dielectric material, and the non-conductive spacer is an oxynitride, such that the first dielectric material and the second dielectric material are selectively etchable with respect to each other and the non-conductive spacer.

Example 32 includes the subject matter of Example 31, and further includes: a first bottom-side interconnect feature that passes through the first dielectric material and contacts the bottom surface of the source or drain region of the first transistor; and/or a second bottom-side interconnect feature that passes through the second dielectric material and contacts the gate structure of the second transistor.

Example 33 includes the subject matter of Example 32, and further includes a contact structure on the bottom surface of the source or drain region of the first transistor, the contact structure including metal, wherein the first bottom-side interconnect feature directly contacts the contact structure, and wherein the contact structure is conformal to the bottom surface of the source or drain region of the first transistor.

Example 34 includes the subject matter of Example 32 or 33, wherein the gate structure of the second transistor includes a high-k dielectric and a gate electrode, and the second bottom-side interconnect feature is in contact with the gate electrode.

Example 35 includes the subject matter of any one of Examples 31 to 34, wherein the device region further includes a partition wall structure including an insulating material and a conductor within the insulating material.

Example 36 includes the subject matter of Example 35, wherein one of the first dielectric material or the second dielectric material is also under the partition wall structure.

Example 37 includes the subject matter of Example 36, wherein the back contact region further includes a third bottom-side interconnect feature that passes through the first or second dielectric material under the partition wall structure and contacts the conductor.

Example 38 includes the subject matter of any one of Examples 31 to 37, wherein the first dielectric material includes a nitride, the second dielectric material includes an oxide, and the non-conductive spacer includes an oxynitride.

Example 39 includes the subject matter of any one of Examples 31 to 38, wherein one or both of the first and second transistors further include one or more nanowires or nanoribbons or nanosheets, and the corresponding gate structure wraps around those one or more nanowires or nanoribbons or nanosheets.

Example 40 includes the subject matter of any one of Examples 31 to 39, wherein one or both of the first and second transistors further include a fin structure, and the corresponding gate structure is on the top wall and sidewalls of the fin structure.

Example 41 includes the subject matter of any one of Examples 31 to 40, wherein the device region includes a lower device region and an upper device region, and the first and second transistors are part of the lower device region.

Example 42 is a printed circuit board including the integrated circuit structure of any one of Examples 31 to 41.

Example 43 is an electronic system including
the integrated circuit of any one of Examples 31 to 41 or the printed circuit board of Example 42.
Techniques for efficiently performing transforms on data are described. In one design, an apparatus performs multiplication of a group of data values with a group of rational dyadic constants that approximates at least one irrational constant scaled by a common factor. Each rational dyadic constant is a rational number with a dyadic denominator. The common factor is selected based on pre-computed numbers of operations for multiplication of a data value by different possible values of at least one rational dyadic constant. The pre-computed numbers of operations may be stored in a look-up table or some other data structure and may be used to evaluate different possible values for the common factor. The use of the common factor may reduce complexity and/or improve precision. The multiplication may be performed for various transforms such as DCT, IDCT, etc.
WHAT IS CLAIMED IS:

1. An apparatus comprising: a first logic to receive a group of data values; and a second logic to perform multiplication of the group of data values with a group of rational dyadic constants that approximates at least one irrational constant scaled by a common factor, each rational dyadic constant being a rational number with a dyadic denominator, the common factor being selected based on pre-computed numbers of operations for multiplication of a data value by different possible values of at least one rational dyadic constant.

2. The apparatus of claim 1, wherein the pre-computed numbers of operations are for arithmetic operations.

3. The apparatus of claim 1, wherein the pre-computed numbers of operations are for logical and arithmetic operations.

4. The apparatus of claim 1, wherein the pre-computed numbers of operations are for shift and add operations.

5. The apparatus of claim 1, wherein the pre-computed numbers of operations are stored in a data structure.

6. The apparatus of claim 1, wherein the pre-computed numbers of operations are stored in a look-up table.

7. The apparatus of claim 6, wherein each entry of the look-up table indicates the number of logical and arithmetic operations for multiplication of a data value with a specific value for each of the at least one rational dyadic constant.

8. The apparatus of claim 6, wherein the look-up table comprises a plurality of columns and a plurality of rows, each column corresponding to a different value of a first rational dyadic constant, and each row corresponding to a different value of a second rational dyadic constant, and wherein an entry for a particular column and a particular row indicates the number of operations for multiplication of a data value by a first rational dyadic constant value associated with the particular column and a second rational dyadic constant value associated with the particular row.

9. The apparatus of claim 8, wherein the number of operations for multiplication of a data value with one rational dyadic constant is determined based on one row of the look-up table, and wherein the number of operations for multiplication of a data value with two rational dyadic constants is determined based on the plurality of columns and the plurality of rows of the look-up table.

10. The apparatus of claim 8, wherein the look-up table comprises 8192 columns for a 13-bit first rational dyadic constant and 8192 rows for a 13-bit second rational dyadic constant.

11. The apparatus of claim 1, wherein the number of rational dyadic constants is greater than the number of irrational constants being approximated by the rational dyadic constants.

12. The apparatus of claim 1, wherein the second logic performs multiplication of a first data value in the group with a first rational dyadic constant that approximates the common factor, and performs multiplication of a second data value in the group with a second rational dyadic constant that approximates an irrational constant scaled by the common factor.

13. The apparatus of claim 1, wherein the at least one irrational constant comprises first and second irrational constants, and wherein the group of rational dyadic constants comprises a first rational dyadic constant that approximates the first irrational constant scaled by the common factor and a second rational dyadic constant that approximates the second irrational constant scaled by the common factor.

14.
The apparatus of claim 1, wherein the second logic performs multiplication of a data value in the group with a first rational dyadic constant in the group, and performs multiplication of the data value with a second rational dyadic constant in the group.

15. The apparatus of claim 1, wherein the common factor is selected further based on at least one precision metric for results generated from the multiplication of the group of data values with the group of rational dyadic constants.

16. The apparatus of claim 1, wherein the second logic performs the multiplication for a discrete cosine transform (DCT).

17. The apparatus of claim 1, wherein the second logic performs the multiplication for an inverse discrete cosine transform (IDCT).

18. The apparatus of claim 1, wherein the second logic performs the multiplication for an 8-point discrete cosine transform (DCT) or an 8-point inverse discrete cosine transform (IDCT).

19. A method comprising: receiving a group of data values; and performing multiplication of the group of data values with a group of rational dyadic constants that approximates at least one irrational constant scaled by a common factor, each rational dyadic constant being a rational number with a dyadic denominator, the common factor being selected based on pre-computed numbers of operations for multiplication of a data value by different possible values of at least one rational dyadic constant.

20. The method of claim 19, wherein the pre-computed numbers of operations are stored in a look-up table.

21. The method of claim 20, wherein the look-up table comprises a plurality of columns and a plurality of rows, each column corresponding to a different value of a first rational dyadic constant, and each row corresponding to a different value of a second rational dyadic constant, and wherein an entry for a particular column and a particular row indicates the number of operations for multiplication of a data value by a first rational dyadic constant value associated with the particular column and a second rational dyadic constant value associated with the particular row.

22. The method of claim 19, wherein the performing multiplication of the group of data values comprises performing multiplication of a first data value in the group with a first rational dyadic constant that approximates the common factor, and performing multiplication of a second data value in the group with a second rational dyadic constant that approximates an irrational constant scaled by the common factor.

23. The method of claim 19, wherein the performing multiplication of the group of data values comprises performing multiplication of a data value in the group with a first rational dyadic constant in the group, and performing multiplication of the data value with a second rational dyadic constant in the group.

24. An apparatus comprising: means for receiving a group of data values; and means for performing multiplication of the group of data values with a group of rational dyadic constants that approximates at least one irrational constant scaled by a common factor, each rational dyadic constant being a rational number with a dyadic denominator, the common factor being selected based on pre-computed numbers of operations for multiplication of a data value by different possible values of at least one rational dyadic constant.

25. The apparatus of claim 24, wherein the pre-computed numbers of operations are stored in a look-up table.

26. The apparatus of claim
25, wherein the look-up table comprises a plurality of columns and a plurality of rows, each column corresponding to a different value of a first rational dyadic constant, and each row corresponding to a different value of a second rational dyadic constant, and wherein an entry for a particular column and a particular row indicates the number of operations for multiplication of a data value by a first rational dyadic constant value associated with the particular column and a second rational dyadic constant value associated with the particular row.

27. The apparatus of claim 24, wherein the means for performing multiplication of the group of data values comprises means for performing multiplication of a first data value in the group with a first rational dyadic constant that approximates the common factor, and means for performing multiplication of a second data value in the group with a second rational dyadic constant that approximates an irrational constant scaled by the common factor.

28. The apparatus of claim 24, wherein the means for performing multiplication of the group of data values comprises means for performing multiplication of a data value in the group with a first rational dyadic constant in the group, and means for performing multiplication of the data value with a second rational dyadic constant in the group.
TRANSFORMS WITH REDUCED COMPLEXITY AND/OR IMPROVED PRECISION BY MEANS OF COMMON FACTORS

I. Claim of Priority under 35 U.S.C. §119

[0001] The present application claims priority to provisional U.S. Application Serial No. 60/758,464, filed January 11, 2006, entitled "Efficient Multiplication-Free Implementations of Scaled Discrete Cosine Transform (DCT) and Inverse Discrete Cosine Transform (IDCT)," assigned to the assignee hereof and incorporated herein by reference.

BACKGROUND

I. Field

[0002] The present disclosure relates generally to processing, and more specifically to techniques for performing transforms on data.

II. Background

[0003] Transforms are commonly used to convert data from one domain to another domain. For example, discrete cosine transform (DCT) is commonly used to transform data from spatial domain to frequency domain, and inverse discrete cosine transform (IDCT) is commonly used to transform data from frequency domain to spatial domain. DCT is widely used for image/video compression to spatially decorrelate blocks of picture elements (pixels) in images or video frames. The resulting transform coefficients are typically much less dependent on each other, which makes these coefficients more suitable for quantization and encoding. DCT also exhibits an energy compaction property, which is the ability to map most of the energy of a block of pixels to only a few (typically low-order) transform coefficients. This energy compaction property can simplify the design of encoding algorithms.

[0004] Transforms such as DCT and IDCT may be performed on large quantities of data. Hence, it is desirable to perform transforms as efficiently as possible. Furthermore, it is desirable to perform computation for transforms using simple hardware in order to reduce cost and complexity.

[0005] There is therefore a need in the art for techniques to efficiently perform transforms on data.

SUMMARY

[0006] Techniques for efficiently performing transforms on data are described herein. According to an aspect, an apparatus performs multiplication of a group of data values with a group of rational dyadic constants that approximates at least one irrational constant scaled by a common factor. Each rational dyadic constant is a rational number with a dyadic denominator. The common factor is selected based on pre-computed numbers of operations for multiplication of a data value by different possible values of at least one rational dyadic constant. The pre-computed numbers of operations may be stored in a look-up table or some other data structure and may be used to evaluate different possible values for the common factor. The use of the common factor may reduce complexity and/or improve precision. The multiplication may be performed for various transforms such as DCT, IDCT, etc.

[0007] Various aspects and features of the disclosure are described in further detail below.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] FIG. 1 shows a flow graph of an 8-point IDCT.

[0009] FIG. 2 shows a flow graph of an 8-point DCT.

[0010] FIG. 3 shows a flow graph of an 8-point IDCT with common factors.

[0011] FIG. 4 shows a look-up table storing the numbers of operations for multiplication with different rational dyadic constant values.

[0012] FIG.
5 shows a block diagram of a decoding system.

DETAILED DESCRIPTION

[0013] The techniques described herein may be used for various types of transforms such as DCT, IDCT, discrete Fourier transform (DFT), inverse DFT (IDFT), modulated lapped transform (MLT), inverse MLT, modulated complex lapped transform (MCLT), inverse MCLT, etc. The techniques may also be used for various applications such as image, video, and audio processing, communication, computing, data networking, data storage, graphics, etc. In general, the techniques may be used for any application that uses a transform. For clarity, the techniques are described below for DCT and IDCT, which are commonly used in image and video processing.

[0014] A one-dimensional (1D) N-point DCT and a 1D N-point IDCT of type II may be defined as follows:

X[k] = \sqrt{2/N} \; C(k) \sum_{n=0}^{N-1} x[n] \cos\frac{(2n+1)k\pi}{2N} ,   Eq (1)

x[n] = \sqrt{2/N} \sum_{k=0}^{N-1} C(k)\, X[k] \cos\frac{(2n+1)k\pi}{2N} ,   Eq (2)

where C(k) = 1/\sqrt{2} if k = 0 and C(k) = 1 otherwise, x[n] is a 1D spatial domain function, and X[k] is a 1D frequency domain function.

[0015] The 1D DCT in equation (1) operates on N spatial domain values x[0] through x[N-1] and generates N transform coefficients X[0] through X[N-1]. The 1D IDCT in equation (2) operates on N transform coefficients and generates N spatial domain values. Type II DCT is one type of transform and is commonly believed to be one of the most efficient transforms among the several energy compacting transforms proposed for image/video compression.
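As a concrete check of equations (1) and (2), the following Python sketch (not part of the original disclosure; the function names are illustrative) implements the 1D type-II DCT and IDCT directly from the definitions and verifies that they invert one another for N = 8:

```python
import math

def dct_ii(x):
    """1D type-II DCT per equation (1)."""
    N = len(x)
    C = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    return [math.sqrt(2 / N) * C(k) *
            sum(x[n] * math.cos((2 * n + 1) * k * math.pi / (2 * N))
                for n in range(N))
            for k in range(N)]

def idct_ii(X):
    """1D type-II IDCT per equation (2)."""
    N = len(X)
    C = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    return [math.sqrt(2 / N) *
            sum(C(k) * X[k] * math.cos((2 * n + 1) * k * math.pi / (2 * N))
                for k in range(N))
            for n in range(N)]

x = [12.0, 47.0, 8.0, 20.0, 61.0, 3.0, 29.0, 55.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(x, idct_ii(dct_ii(x))))
```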
[0016] The 1D DCT may be used for a two-dimensional (2D) DCT, as described below. Similarly, the 1D IDCT may be used for a 2D IDCT. By decomposing the 2D DCT/IDCT into a cascade of 1D DCTs/IDCTs, the efficiency of the 2D DCT/IDCT is dependent on the efficiency of the 1D DCT/IDCT. In general, 1D DCT and 1D IDCT may be performed on any vector size, and 2D DCT and 2D IDCT may be performed on any block size. However, 8x8 DCT and 8x8 IDCT are commonly used for image and video processing, where N is equal to 8. For example, 8x8 DCT and 8x8 IDCT are used as standard building blocks in various image and video coding standards such as JPEG, MPEG-1, MPEG-2, MPEG-4 (P.2), H.261, H.263, etc.

[0017] The 1D DCT and 1D IDCT may be implemented in their original forms shown in equations (1) and (2), respectively. However, substantial reduction in computational complexity may be realized by finding factorizations that result in as few multiplications and additions as possible. A factorization for a transform may be represented by a flow graph that indicates specific operations to be performed for that transform.

[0018] FIG. 1 shows a flow graph 100 of an example factorization of an 8-point IDCT. In flow graph 100, each addition is represented by symbol "⊕" and each multiplication is represented by a box. Each addition sums or subtracts two input values and provides an output value. Each multiplication multiplies an input value with a transform constant shown inside the box and provides an output value. The factorization in FIG. 1 has six multiplications with the following constant factors:

C_{π/4} = cos(π/4) ≈ 0.707106781,
C_{3π/8} = cos(3π/8) ≈ 0.382683432, and
S_{3π/8} = sin(3π/8) ≈ 0.923879533.

[0019] Flow graph 100 receives eight scaled transform coefficients A_0·X[0] through A_7·X[7], performs an 8-point IDCT on these coefficients, and generates eight output samples x[0] through x[7]. A_0 through A_7 are scale factors and are given below:

A_0 = 1/(2\sqrt{2}) ≈ 0.3535533906 ,
A_1 = cos(7π/16) / (2 sin(3π/8) − \sqrt{2}) ≈ 0.4499881115 ,
A_2 = cos(π/8) / \sqrt{2} ≈ 0.6532814824 ,
A_3 = cos(5π/16) / (\sqrt{2} + 2 cos(3π/8)) ≈ 0.2548977895 , ...

[0020] Flow graph 100 includes a number of butterfly operations. A butterfly operation receives two input values and generates two output values, where one output value is the sum of the two input values and the other output value is the difference of the two input values. For example, the butterfly operation on input values A_0·X[0] and A_4·X[4] generates an output value A_0·X[0] + A_4·X[4] for the top branch and an output value A_0·X[0] − A_4·X[4] for the bottom branch.

[0021] FIG. 2 shows a flow graph 200 of an example factorization of an 8-point DCT. Flow graph 200 receives eight input samples x[0] through x[7], performs an 8-point DCT on these input samples, and generates eight scaled transform coefficients 8A_0·X[0] through 8A_7·X[7]. The scale factors A_0 through A_7 are given above. The factorization in FIG. 2 has six multiplications with constant factors 1/C_{π/4}, 2C_{3π/8} and 2S_{3π/8}.

[0022] The flow graphs for the IDCT and DCT in FIGS. 1 and 2 are similar and involve multiplications by essentially the same constant factors (with the difference in 1/2). Such similarity may be advantageous for implementation of the DCT and IDCT on an integrated circuit. In particular, the similarity may enable savings of silicon or die area to implement the butterflies and the multiplications by transform constants, which are used in both the forward and inverse transforms.

[0023] The factorization shown in FIG. 1 results in a total of 6 multiplications and 28 additions, which are substantially fewer than the numbers of multiplications and additions required for direct computation of equation (2). The factorization shown in FIG. 2 also results in a total of 6 multiplications and 28 additions, which are substantially fewer than the numbers required for direct computation of equation (1). The factorization in FIG. 1 performs plane rotation on two intermediate variables with C_{3π/8} and S_{3π/8}. The factorization in FIG. 2 performs plane rotation on two intermediate variables with 2C_{3π/8} and 2S_{3π/8}. A plane rotation is achieved by multiplying an intermediate variable with both sine and cosine, e.g., cos(3π/8) and sin(3π/8) in FIG. 1. The multiplications for plane rotation may be efficiently performed using the computation techniques described below.

[0024] FIGS. 1 and 2 show example factorizations of an 8-point IDCT and an 8-point DCT, respectively. These factorizations are for scaled IDCT and scaled DCT, where "scaled" refers to the scaling of the transform coefficients X[0] through X[7] with known scale factors A_0 through A_7, respectively. Other factorizations have also been derived by using mappings to other known fast algorithms, such as a Cooley-Tukey DFT algorithm, or by applying systematic factorization procedures such as decimation in time or decimation in frequency. In general, factorization reduces the number of multiplications but does not eliminate them.

[0025] The multiplications in FIGS. 1 and 2 are with irrational constants representing the sine and cosine of different angles, which are multiples of π/8 for the 8-point DCT and IDCT. An irrational constant is a constant that is not a
ratio of two integers. The multiplications with irrational constants may be more efficiently performed in fixed-point integer arithmetic when each irrational constant is approximated by a rational dyadic constant. A rational dyadic constant is a rational constant with a dyadic denominator and has the form c/2^b, where b and c are integers and b > 0. Multiplication of an integer variable with a rational dyadic constant may be achieved with logical and arithmetic operations, as described below. The number of logical and arithmetic operations is dependent on the manner in which the computation is performed as well as the value of the rational dyadic constant.

[0026] In an aspect, common factors are used to reduce the total number of operations for a transform and/or to improve the precision of the transform results. A common factor is a constant that is applied to one or more intermediate variables in a transform. An intermediate variable may also be referred to as a data value, etc. A common factor may be absorbed with one or more transform constants and may also be accounted for by altering one or more scale factors. A common factor may improve the approximation of one or more (irrational) transform constants by one or more rational dyadic constants, which may then result in a fewer total number of operations and/or improved precision.

[0027] In general, any number of common factors may be used for a transform, and each common factor may be applied to any number of intermediate variables in the transform. In one design, multiple common factors are used for a transform and are applied to multiple groups of intermediate variables of different sizes. In another design, multiple common factors are applied to multiple groups of intermediate variables of the same size.

[0028] FIG. 3 shows a flow graph 300 of an 8-point IDCT with common factors. Flow graph 300 uses the same factorization as flow graph 100 in FIG. 1. However, flow graph 300 uses two common factors for two groups of intermediate variables.

[0029] A first common factor F_1 is applied to a first group of two intermediate variables X_1 and X_2, which is generated based on transform coefficients X[2] and X[6]. The first common factor F_1 is multiplied with X_1, is absorbed with transform constant C_{π/4}, and is accounted for by altering scale factors A_2 and A_6. A second common factor F_2 is applied to a second group of four intermediate variables X_3 through X_6, which is generated based on transform coefficients X[1], X[3], X[5] and X[7]. The second common factor F_2 is multiplied with X_4, is absorbed with transform constants C_{π/4}, C_{3π/8} and S_{3π/8}, and is accounted for by altering scale factors A_1, A_3, A_5 and A_7.

[0030] The first common factor F_1 may be approximated with a rational dyadic constant α_1, which may be multiplied with X_1 to obtain an approximation of the product X_1·F_1. A scaled transform factor F_1·C_{π/4} may be approximated with a rational dyadic constant β_1, which may be multiplied with X_2 to obtain an approximation of the product X_2·F_1·C_{π/4}. An altered scale factor A_2/F_1 may be applied to transform coefficient X[2]. An altered scale factor A_6/F_1 may be applied to transform coefficient X[6].

[0031] The second common factor F_2 may be approximated with a rational dyadic constant α_2, which may be multiplied with X_4 to obtain an approximation of the product X_4·F_2.
A scaled transform factor F_2·C_{π/4} may be approximated with a rational dyadic constant β_2, which may be multiplied with X_3 to obtain an approximation of the product X_3·F_2·C_{π/4}. A scaled transform factor F_2·C_{3π/8} may be approximated with a rational dyadic constant γ_2, and a scaled transform factor F_2·S_{3π/8} may be approximated with a rational dyadic constant δ_2. Rational dyadic constant γ_2 may be multiplied with X_5 to obtain an approximation of the product X_5·F_2·C_{3π/8}, and also with X_6 to obtain an approximation of the product X_6·F_2·C_{3π/8}. Rational dyadic constant δ_2 may be multiplied with X_5 to obtain an approximation of the product X_5·F_2·S_{3π/8}, and also with X_6 to obtain an approximation of the product X_6·F_2·S_{3π/8}. Altered scale factors A_1/F_2, A_3/F_2, A_5/F_2 and A_7/F_2 may be applied to transform coefficients X[1], X[3], X[5] and X[7], respectively.

[0032] Six rational dyadic constants α_1, β_1, α_2, β_2, γ_2 and δ_2 may be defined for the six constants, as follows:

α_1 ≈ F_1 ,  β_1 ≈ F_1 · cos(π/4) ,
α_2 ≈ F_2 ,  β_2 ≈ F_2 · cos(π/4) ,   Eq (3)
γ_2 ≈ F_2 · cos(3π/8) ,  δ_2 ≈ F_2 · sin(3π/8) .
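As an illustration of equation (3), the following Python sketch (illustrative only; the helper name and the 9-bit precision are assumptions, not from the disclosure) computes the six rational dyadic constants for the common factors used by algorithm B in Table 2 below:

```python
import math

def dyadic(mu: float, b: int) -> tuple[int, int]:
    """Best b-bit rational dyadic approximation of mu, as (c, 2**b)."""
    return round(mu * (1 << b)), 1 << b

# Common factors of algorithm B (see Table 2); b = 9 is an assumed precision.
F1, F2, b = 1 / 1.0000442471, 1 / 1.02053722659, 9

alpha1 = dyadic(F1, b)
beta1  = dyadic(F1 * math.cos(math.pi / 4), b)
alpha2 = dyadic(F2, b)
beta2  = dyadic(F2 * math.cos(math.pi / 4), b)
gamma2 = dyadic(F2 * math.cos(3 * math.pi / 8), b)
delta2 = dyadic(F2 * math.sin(3 * math.pi / 8), b)

print(gamma2)  # (192, 512), i.e. 3/8: a sparse constant that is cheap to apply
```

Scaling by F_2 here turns cos(3π/8) ≈ 0.3827 into approximately 3/8, which can be applied with one addition and two shifts ((x >> 2) + (x >> 3)); this is the sense in which a well-chosen common factor can reduce the operation count.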
[0033] FIG. 3 shows an example use of common factors for a specific factorization of an 8-point IDCT. Common factors may be used for other factorizations of the IDCT, and also for the DCT and other types of transforms. In general, a common factor may be applied to a group of at least one intermediate variable in a transform. This group of intermediate variable(s) may be generated from a group of input values (e.g., as shown in FIG. 3) or used to generate a group of output values (not shown in FIG. 3). The common factor may be accounted for by the scale factors applied to the input values or the output values.

[0034] Multiple common factors may be applied to multiple groups of intermediate variables, and each group may include any number of intermediate variables. The selection of the groups may be dependent on various factors, such as the factorization of the transform, where the transform constants are located within the transform, etc. Multiple common factors may be applied to multiple groups of intermediate variables of the same size (not shown in FIG. 3) or of different sizes (as shown in FIG. 3). For example, three common factors may be used for the factorization shown in FIG. 3, with a first common factor being applied to intermediate variables X_1 and X_2, a second common factor being applied to intermediate variables X_3, X_4, X_5 and X_6, and a third common factor being applied to two intermediate variables generated from X[0] and X[4].

[0035] Multiplication of an intermediate variable x with a rational dyadic constant u may be performed in various manners in fixed-point integer arithmetic. The multiplication may be performed using logical operations (e.g., left shift, right shift, bit-inversion, etc.), arithmetic operations (e.g., add, subtract, sign-inversion, etc.) and/or other operations. The number of logical and arithmetic operations needed for the multiplication of x with u is dependent on the manner in which the computation is performed and on the value of the rational dyadic constant u. Different computation techniques may require different numbers of logical and arithmetic operations for the same multiplication of x with u. A given computation technique may require different numbers of logical and arithmetic operations for the multiplication of x with different values of u.

[0036] A common factor may be selected for a group of intermediate variables based on criteria such as:
- the number of logical and arithmetic operations to perform the multiplication, and
- the precision of the results.

[0037] In general, it is desirable to minimize the number of logical and arithmetic operations for multiplication of an intermediate variable with a rational dyadic constant. On some hardware platforms, arithmetic operations (e.g., additions) may be more complex than logical operations, so reducing the number of arithmetic operations may be more important. In the extreme, computational complexity may be quantified based solely on the number of arithmetic operations, without taking into account logical operations. On some other hardware platforms, logical operations (e.g., shifts) may be more expensive, and reducing the number of logical operations (e.g., reducing the number of shift operations and/or the total number of bits shifted) may be more important. In general, a weighted average number of logical and arithmetic operations may be used, where the weights may represent the relative complexities of the logical and arithmetic operations.

[0038] The precision of the results may be quantified based on various metrics, such as those given in Table 6 below. In general, it is desirable to reduce the number of logical and arithmetic operations (or computational complexity) for a given precision. It may also be desirable to trade off complexity for precision, e.g., to achieve higher precision at the expense of some additional operations.

[0039] As shown in FIG. 3, for each common factor, multiplication may be performed on a group of intermediate variables with a group of rational dyadic constants that approximates a group of at least one irrational constant (for at least one transform factor) scaled by that common factor. Multiplication in fixed-point integer arithmetic may be performed in various manners. For clarity, computation techniques that perform multiplication with shift and add operations and that use intermediate results are described below.

[0040] Multiplications in a transform, e.g., the IDCT shown in FIG. 3, may be efficiently performed in fixed-point integer arithmetic using computation techniques that approximate multiplication of an integer variable x with one or more irrational constants by a series of intermediate values generated by shift and add operations, using intermediate results to reduce the total number of operations. Each irrational constant may be approximated with a rational dyadic constant, as follows:

μ ≈ c / 2^b ,   Eq (4)

where μ is the irrational constant to be approximated, c/2^b is the rational dyadic constant, and b and c are integers with b > 0. The series of intermediate values is determined by the one or more rational dyadic constants being multiplied with integer variable x. The computation techniques may be illustrated by the following examples.
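Applying equation (4) to the transform constants of FIG. 1 reproduces the approximations used in the examples below. A minimal Python sketch (the helper name is illustrative):

```python
import math

def dyadic_c(mu: float, b: int) -> int:
    """Numerator c of the best rational dyadic approximation c/2**b (Eq (4))."""
    return round(mu * (1 << b))

print(dyadic_c(math.cos(math.pi / 4), 8))      # 181 -> C'_pi/4  = 181/256
print(dyadic_c(math.cos(3 * math.pi / 8), 7))  # 49  -> C'_3pi/8 = 49/128
print(dyadic_c(math.sin(3 * math.pi / 8), 9))  # 473 -> S'_3pi/8 = 473/512
```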
[0041] In FIG. 1, multiplication of integer variable x with transform constant C_{π/4} in fixed-point integer arithmetic may be achieved by approximating the constant with a rational dyadic constant, as follows:

C'_{π/4} = 181/256 = 10110101_2 / 100000000_2 ,   Eq (5)

where C'_{π/4} is a rational dyadic constant that is an 8-bit approximation of C_{π/4}.

[0042] Multiplication of integer variable x by constant C'_{π/4} may be expressed as:

y = (x · 181) / 256 .   Eq (6)

[0043] The multiplication in equation (6) may be achieved with the following series of operations:

y_1 = x ,                  // 1
y_2 = y_1 + (y_1 >> 2) ,   // 101
y_3 = y_1 − (y_2 >> 2) ,   // 1011          Eq (7)
y_4 = y_3 + (y_2 >> 6) .   // 10110101

The binary value to the right of "//" is the intermediate constant that is multiplied with variable x.

[0044] The desired product is equal to y_4, that is, y = y_4. The multiplication in equation (6) may be performed with three additions and three shifts to generate three intermediate values y_2, y_3 and y_4.

[0045] In FIG. 1, multiplication of integer variable x with transform constants C_{3π/8} and S_{3π/8} in fixed-point integer arithmetic may be achieved by approximating constants C_{3π/8} and S_{3π/8} with rational dyadic constants, as follows:

C'_{3π/8} = 49/128 = 110001_2 / 10000000_2 ,   Eq (8)

S'_{3π/8} = 473/512 = 111011001_2 / 1000000000_2 ,   Eq (9)

where C'_{3π/8} is a rational dyadic constant that is a 7-bit approximation of C_{3π/8}, and S'_{3π/8} is a rational dyadic constant that is a 9-bit approximation of S_{3π/8}.

[0046] Multiplication of integer variable x by constants C'_{3π/8} and S'_{3π/8} may be expressed as:

w = (x · 49) / 128 and z = (x · 473) / 512 .   Eq (10)

[0047] The multiplications in equation (10) may likewise be achieved with a series of shift and add operations that share intermediate values.

[0048] The desired products are equal to w_5 and w_6, that is, w = w_5 and z = w_6. The two multiplications in equation (10) may be jointly performed with five additions and five shifts to generate seven intermediate values w_1 through w_7. Additions of zeros are omitted in the generation of w_3 and w_6. Shifts by zero are omitted in the generation of w_4 and w_5.

[0049] For the 8-point IDCT shown in FIG. 1, using the computation techniques described above for multiplications by constants C_{π/4}, C_{3π/8} and S_{3π/8}, the total complexity for 8-bit precision may be given as 28 + 3·2 + 5·2 = 44 additions and 3·2 + 5·2 = 16 shifts. In general, any desired precision may be achieved by using a sufficient number of bits for the approximation of each transform constant.
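The series in equation (7) can be verified directly. A minimal Python sketch (the function name is illustrative, not from the disclosure):

```python
def mul_181_over_256(x: int) -> int:
    """Multiply x by 181/256 with three adds and three shifts, per Eq (7)."""
    y1 = x                # // 1
    y2 = y1 + (y1 >> 2)   # // 101       (x * 5/4)
    y3 = y1 - (y2 >> 2)   # // 1011      (x * 11/16)
    y4 = y3 + (y2 >> 6)   # // 10110101  (x * 181/256)
    return y4

# Exact whenever x is a multiple of 256; for other x, the truncating
# right-shifts give a close fixed-point approximation of (x * 181) >> 8.
x = 1 << 16
assert mul_181_over_256(x) == (x * 181) >> 8
```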
[0049] For the 8-point IDCT shown in FIG. 1, using the computation techniques described above for multiplications by constants C′π/4, C′3π/8 and S′3π/8, the total complexity for 8-bit precision may be given as: 28 + 3·2 + 5·2 = 44 additions and 3·2 + 5·2 = 16 shifts. In general, any desired precision may be achieved by using a sufficient number of bits for the approximation of each transform constant.
[0050] For the 8-point DCT shown in FIG. 2, irrational constants 1/Cπ/4, C3π/8 and S3π/8 may be approximated with rational dyadic constants. Multiplications with the rational dyadic constants may be achieved using the computation techniques described above.
[0051] For the IDCT shown in FIG. 3, different values of common factors F1 and F2 may result in different total numbers of logical and arithmetic operations for the IDCT and different levels of precision for the output samples x[0] through x[7]. Different combinations of values for F1 and F2 may be evaluated. For each combination of values, the total number of logical and arithmetic operations for the IDCT and the precision of the output samples may be determined.
[0052] For a given value of F1, rational dyadic constants α1 and β1 may be obtained for F1 and F1·Cπ/4, respectively. The numbers of logical and arithmetic operations may then be determined for multiplication of X1 with α1 and multiplication of X2 with β1. For a given value of F2, rational dyadic constants α2, β2, γ2 and δ2 may be obtained for F2, F2·Cπ/4, F2·C3π/8 and F2·S3π/8, respectively. The numbers of logical and arithmetic operations may then be determined for multiplication of X4 with α2, multiplication of X6 with β2, and multiplications of X5 with both γ2 and δ2. The number of operations for multiplications of X5 with γ2 and δ2 is equal to the number of operations for multiplications of X7 with γ2 and δ2.
[0053] To facilitate the evaluation and selection of the common factors, the number of logical and arithmetic operations may be pre-computed for multiplication with different possible values of rational dyadic constants. The pre-computed numbers of logical and arithmetic operations may be stored in a data structure such as a look-up table, a list, a linked list, a sorted list (a priority queue), an ordered list, multiple tables or lists, a combination of tables and/or lists, etc.
[0054] FIG. 4 shows a look-up table 400 that stores the numbers of logical and arithmetic operations for multiplication with different rational dyadic constant values. Look-up table 400 is a two-dimensional table with different possible values of a first rational dyadic constant C1 on the horizontal axis and different possible values of a second rational dyadic constant C2 on the vertical axis. The number of possible values for each rational dyadic constant is dependent on the number of bits used for that constant. For example, if C1 is represented with 13 bits, then there are 8192 possible values for C1. The possible values for each rational dyadic constant are denoted as c0, c1, c2, ..., cM, where c0 = 0, c1 is the smallest non-zero value, and cM is the maximum value (e.g., cM = 8191 for 13 bits).
[0055] The entry in the i-th column and j-th row of look-up table 400 contains the number of logical and arithmetic operations for joint multiplication of intermediate variable x with both ci for the first rational dyadic constant C1 and cj for the second rational dyadic constant C2. The value for each entry in look-up table 400 may be determined by evaluating different possible series of intermediate values for the joint multiplication with ci and cj for that entry and selecting the best series, e.g., the series with the fewest operations. The entries in the first row of look-up table 400 (with cj = 0 for the second rational dyadic constant C2) contain the numbers of operations for multiplication of intermediate variable x with just ci for the first rational dyadic constant C1. Since the look-up table is symmetric, entries in only half of the table (e.g., either above or below the main diagonal) may be filled. Furthermore, the number of entries to fill may be reduced by considering the irrational constants being approximated with the rational dyadic constants C1 and C2.
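The excerpt does not spell out how the per-constant operation counts are computed; one simple proxy (an assumption for illustration, not the disclosed method) upper-bounds the additions for a single constant c by the number of nonzero digits in its canonical signed-digit (CSD) form. Joint series with intermediate reuse, as in equation (7), can beat this bound:

```python
def csd_nonzero_digits(c):
    # Count nonzero digits in the canonical signed-digit recoding of c.
    count = 0
    while c:
        if c & 1:
            count += 1
            # Low bits ...01 emit a +1 digit; low bits ...11 emit a -1 digit.
            c = c - 1 if (c & 3) == 1 else c + 1
        c >>= 1
    return count

# Pre-computed upper bounds on additions for all 8-bit constants, in the
# spirit of the first row of look-up table 400.
adds_upper_bound = {c: max(csd_nonzero_digits(c) - 1, 0) for c in range(256)}
print(adds_upper_bound[181])  # 4; the series in equation (7) needs only 3
```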
[0056] For a given value of F1, rational dyadic constants α1 and β1 may be determined. The numbers of logical and arithmetic operations for multiplication of X1 with α1 and multiplication of X2 with β1 may be readily determined from the entries in the first row of look-up table 400, where α1 and β1 correspond to C1. Similarly, for a given value of F2, rational dyadic constants α2, β2, γ2 and δ2 may be determined. The numbers of logical and arithmetic operations for multiplication of X4 with α2 and multiplication of X6 with β2 may be determined from the entries in the first row of look-up table 400, where α2 and β2 correspond to C1. The number of logical and arithmetic operations for joint multiplication of X5 with γ2 and δ2 may be determined from an appropriate entry in look-up table 400, where γ2 may correspond to C1 and δ2 may correspond to C2, or vice versa.
[0057] For each possible combination of values for F1 and F2, the precision metrics in Table 6 may be determined for a sufficient number of iterations with different random input data. The values of F1 and F2 that result in poor precision (e.g., failure of the metrics) may be discarded, and the values of F1 and F2 that result in good precision (e.g., passing of the metrics) may be retained.
[0058] Tables 1 through 5 show five fixed-point approximations for the IDCT in FIG. 3, which are denoted as algorithms A, B, C, D and E. These approximations are for two groups of factors, with one group including α1 and β1 and another group including α2, β2, γ2 and δ2. For each of Tables 1 through 5, the common factor for each group is given in the first column. The common factors improve the precision of the rational dyadic constant approximations and may be merged with the appropriate scale factors in the flow graph for the IDCT. The original values (which may be 1 or irrational constants) are given in the third column. The rational dyadic constant for each original value scaled by its common factor is given in the fourth column. The series of intermediate values for the multiplication of intermediate variable x with one or two rational dyadic constants is given in the fifth column. The numbers of add and shift operations for each multiplication are given in the sixth and seventh columns, respectively. The total number of add operations for the IDCT is equal to the sum of all add operations in the sixth column, plus the last entry again (to account for multiplication of each of X5 and X7 with both γ2 and δ2), plus 28 add operations for all of the butterflies in the flow graph. The total number of shift operations for the IDCT is equal to the sum of all shift operations in the last column plus the last entry again.
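The totals rule in paragraph [0058] is easy to express in code; the following Python sketch takes per-row (adds, shifts) pairs like those in the sixth and seventh columns of Tables 1 through 5 (the example row values are taken from paragraphs [0044] and [0048]):

```python
def idct_operation_totals(rows):
    # rows: (adds, shifts) per multiplication, with the joint gamma2/delta2
    # multiplication last. The last row counts twice (both X5 and X7 use it),
    # and 28 additions cover the butterflies in the flow graph.
    adds = sum(a for a, _ in rows) + rows[-1][0] + 28
    shifts = sum(s for _, s in rows) + rows[-1][1]
    return adds, shifts

# Two multiplications by C'_pi/4 (3 adds, 3 shifts each) plus the joint
# C'_3pi/8 / S'_3pi/8 multiplication (5 adds, 5 shifts):
print(idct_operation_totals([(3, 3), (3, 3), (5, 5)]))  # (44, 16), as in [0049]
```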
[0059] Table 1 gives the details of algorithm A, which uses a common factor of 1/1.0000442471 for each of the two groups.
Table 1 - Approximation A (42 additions, 16 shifts)
[0060] Table 2 gives the details of algorithm B, which uses a common factor of 1/1.0000442471 for the first group and a common factor of 1/1.02053722659 for the second group.
Table 2 - Approximation B (43 additions, 17 shifts)
[0061] Table 3 gives the details of algorithm C, which uses a common factor of 1/0.87734890555 for the first group and a common factor of 1/1.02053722659 for the second group.
Table 3 - Approximation C (44 additions, 18 shifts)
[0062] Table 4 gives the details of algorithm D, which uses a common factor of 1/0.87734890555 for the first group and a common factor of 1/0.89062054308 for the second group.
Table 4 - Approximation D (45 additions, 17 shifts)
[0063] Table 5 gives the details of algorithm E, which uses a common factor of 1/0.87734890555 for the first group and a common factor of 1/1.22387468002 for the second group.
Table 5 - Approximation E (48 additions, 20 shifts)
[0064] The precision of the output samples from an approximate IDCT may be quantified based on metrics defined in IEEE Standard 1180-1990 and its pending replacement. This standard specifies testing a reference 64-bit floating-point DCT followed by the approximate IDCT using data from a random number generator. The reference DCT receives random data for a block of input pixels and generates transform coefficients. The approximate IDCT receives the transform coefficients (appropriately rounded) and generates a block of reconstructed pixels. The reconstructed pixels are compared against the input pixels using five metrics, which are given in Table 6. Additionally, the approximate IDCT is required to produce all zeros when supplied with zero transform coefficients and to demonstrate near-DC inversion behavior. All five algorithms A through E given above pass all of the metrics in Table 6.
Table 6
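The five metrics of IEEE Std 1180-1990 are the peak pixel error, peak mean square error, overall mean square error, peak mean error and overall mean error. A minimal NumPy sketch of how they might be computed over a batch of random 8x8 blocks (an illustration, not the standard's reference code):

```python
import numpy as np

def ieee1180_style_metrics(ref_pixels, test_pixels):
    # ref_pixels, test_pixels: (num_blocks, 8, 8) reconstructions from the
    # reference floating-point IDCT and the approximate fixed-point IDCT.
    err = test_pixels.astype(np.float64) - ref_pixels.astype(np.float64)
    ppe = np.abs(err).max()               # peak pixel error
    pmse = (err ** 2).mean(axis=0).max()  # peak mean square error (per position)
    omse = (err ** 2).mean()              # overall mean square error
    pme = np.abs(err.mean(axis=0)).max()  # peak mean error (per position)
    ome = abs(err.mean())                 # overall mean error
    return ppe, pmse, omse, pme, ome
```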
[0065] For clarity, much of the description above is for an 8-point scaled IDCT and an 8-point scaled DCT. The techniques described herein may be used for any type of transform such as DCT, IDCT, DFT, IDFT, MLT, inverse MLT, MCLT, inverse MCLT, etc. The techniques may also be used for any factorization of a transform, with several example factorizations being given in FIGS. 1 through 3. The groups for the common factors may be selected based on the factorization, as described above. The techniques may also be used for transforms of any size, with example 8-point transforms being given in FIGS. 1 through 3. The techniques may also be used in conjunction with any common factor selection criteria such as total number of logical and arithmetic operations, total number of arithmetic operations, precision of the results, etc.
[0066] The number of operations for a transform may be dependent on the manner in which multiplications are performed. The computation techniques described above unroll multiplications into series of shift and add operations, use intermediate results to reduce the number of operations, and perform joint multiplication with multiple constants using a common series. The multiplications may also be performed with other computation techniques, which may influence the selection of the common factors.
[0067] The transforms with common factors described herein may provide certain advantages such as:
- Lower multiplication complexity due to merged multiplications in a scaled phase,
- Possible reduction in complexity due to the ability to merge scaling with quantization in implementations of JPEG, H.263, MPEG-1, MPEG-2, MPEG-4 (Part 2), and other standards, and
- Improved precision due to the ability to minimize/distribute errors of fixed-point approximations for irrational constants used in multiplications by introducing common factors that can be accounted for by scale factors.
[0068] Transforms with common factors may be used for various applications such as image and video processing, communication, computing, data networking, data storage, graphics, etc. Example use of transforms for video processing is described below.
[0069] FIG. 5 shows a block diagram of a decoding system 500, which may implement the 8-point IDCT shown in FIG. 3. A receiver 510 may receive compressed data from an encoding system, and a storage unit 512 may store the received compressed data. A processor 520 processes the compressed data and generates output data. Within processor 520, the compressed data may be de-packetized by a de-packetizer 522, decoded by an entropy decoder 524, inverse quantized by an inverse quantizer 526, placed in the proper order by an inverse zig-zag scan unit 528, and transformed by an IDCT unit 530. IDCT unit 530 may perform IDCTs on the reconstructed transform coefficients in accordance with the techniques described above. Each of units 522 through 530 may be implemented with hardware, firmware and/or software. For example, IDCT unit 530 may be implemented with dedicated hardware, a set of instructions for an ALU, etc.
[0070] A display unit 540 displays reconstructed images and video from processor 520. A controller/processor 550 controls the operation of various units in decoding system 500. A memory 552 stores data and program codes for decoding system 500. One or more buses 560 interconnect various units in decoding system 500.
[0071] Processor 520 may be implemented with one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), and/or some other type of processors. Alternatively, processor 520 may be replaced with one or more random access memories (RAMs), read only memories (ROMs), electrically programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic disks, optical disks, and/or other types of volatile and nonvolatile memories known in the art.
[0072] The techniques described herein may be implemented in hardware, firmware, software, or a combination thereof. For example, the logical (e.g., shift) and arithmetic (e.g., add) operations for multiplication of a data value with a constant value may be implemented with one or more logics, which may also be referred to as units, modules, etc. A logic may be hardware logic comprising logic gates, transistors, and/or other circuits known in the art. A logic may also be firmware and/or software logic comprising machine-readable codes.
[0073] In one design, an apparatus comprises a first logic to receive a group of data values and a second logic to perform multiplication of the group of data values with a group of rational dyadic constants that approximates at least one irrational constant scaled by a common factor. Each rational dyadic constant is a rational number with a dyadic denominator. The common factor is selected based on pre-computed numbers of operations for multiplication of a data value by different possible values of at least one rational dyadic constant. The first and second logics may be separate logics, the same common logic, or shared logic.
[0074] For a firmware and/or software implementation, multiplication of a data value with a constant value may be achieved with machine-readable codes that perform the desired logical and arithmetic operations. The codes may be hardwired or stored in a memory (e.g., memory 552 in FIG. 5) and executed by a processor (e.g., processor 550) or some other hardware unit.
[0075] The techniques described herein may be implemented in various types of apparatus.
For example, the techniques may be implemented in different types of processors, different types of integrated circuits, different types of electronics devices, different types of electronics circuits, etc.
[0076] Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
[0077] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
[0078] The various illustrative logical blocks, modules, and circuits described in connection with the disclosure may be implemented or performed with a general-purpose processor, a DSP, an ASIC, a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0079] The steps of a method or algorithm described in connection with the disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
[0080] The previous description of the disclosure is provided to enable any person skilled in the art to make or use the present disclosure.
Various modifications to the disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other designs without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the examples shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A method for performing an auto-zero function in a flash analog to digital converter ("ADC"), the ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, and a plurality of system voltage comparators for comparing an input voltage against the reference voltages and providing an indication of which reference voltage corresponds to the input voltage. In the method the following steps are performed. A plurality of redundant voltage comparators is provided. A subset of the plurality of system voltage comparators is selected. Auto-zero is performed on the selected comparators, and the redundant comparators are used in the place of the selected comparators. The outputs of the main comparator array and the extra comparators are combined to produce a final digital output.
What is claimed is: 1. A method for performing an auto-zero function in a flash analog to digital converter ("ADC"), said ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, a plurality of system voltage comparators for comparing an input voltage against said reference voltages and providing an indication of which reference voltage corresponds to said input voltage, comprising the following steps:providing a plurality of redundant voltage comparators; selecting a subset of said plurality of system voltage comparators; performing auto-zero on said selected comparators; and using said redundant comparators in the place of said selected comparators. 2. A method for performing an auto-zero function in a flash analog to digital converter ("ADC"), said ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, a plurality of system voltage comparators for comparing an input voltage against said reference voltages and providing an indication of a one of said reference voltages of which said input voltage is below in level, comprising the following steps:providing a plurality of redundant voltage comparators; selecting a subset of said plurality of system voltage comparators; performing auto-zero on said redundant voltage comparators; performing auto-zero on said selected comparators; and using said redundant comparators in the place of said selected comparators during a conversion operation. 3. A method for performing an auto-zero function in a flash analog to digital converter ("ADC"), and for performing an analog to digital conversion, said ADC having an input, said ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, and said ADC including a plurality of system voltage comparators for comparing an input voltage against said reference voltages and providing an indication of a one of said reference voltages of which said input voltage is below in level, comprising the following steps:applying an input voltage to said input; providing a plurality of redundant voltage comparators; selecting a subset of said plurality of system voltage comparators; performing auto-zero on said redundant voltage comparators; performing auto-zero on said selected comparators; performing an analog to digital conversion on said input voltage using said redundant comparators in the place of said selected comparators; and combining the outputs of said system voltage comparators and of said redundant comparators. 4. 
A method for performing an auto-zero function in a flash analog to digital converter ("ADC"), said ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, a plurality of system voltage comparators for comparing an input voltage against said reference voltages and providing an indication signal representing an indication of which reference voltage corresponds to said input voltage, said indication signal being converted to a binary code corresponding to said input voltage, comprising the following steps:providing a plurality of redundant voltage comparators; selecting a subset of said plurality of system voltage comparators; performing auto-zero on said redundant voltage comparators; performing auto-zero on said selected comparators; using said redundant comparators in the place of said selected comparators during a conversion operation; converting the output of said system voltage comparators, less said selected comparators, to a first digital value; converting the output of said redundant comparators to a second digital value; and adding said first digital value and said second digital value. 5. A method for performing an auto-zero function in a flash analog to digital converter ("ADC"), said ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, a plurality of system voltage comparators for comparing an input voltage against said reference voltages and providing a thermometer code corresponding to said input voltage, said thermometer code being provided to a converter for converting said thermometer code to a binary code corresponding to said input voltage, comprising the following steps:providing a plurality of redundant voltage comparators; selecting a subset of said plurality of system voltage comparators; performing auto-zero on said redundant voltage comparators; performing auto-zero on said selected comparators; after a sufficient time has passed after said step of performing auto-zero on said redundant voltage comparators so that the outputs of said redundant voltage comparators becomes valid, using said redundant comparators in the place of said selected comparators during a conversion operation; performing a thermometer code to binary code conversion on the output of said system voltage comparators, less said selected comparators, to generate a first digital value, wherein said outputs of said system voltage comparators, less said selected comparators, are concatenated by a shifting down of comparators above said selected comparators; adding the outputs of said redundant comparators as binary values to generate a second binary digital value; and adding said first digital value and said second digital value.
TECHNICAL FIELD OF THE INVENTION
This invention relates to analog-to-digital converters ("ADCs"), and more particularly relates to methods and apparatus for improving the performance of flash ADCs.
BACKGROUND OF THE INVENTION
Analog-to-digital converters (ADCs) are an important class of semiconductor components used widely in signal processing, instrumentation, communications and data storage. FIGS. 1(A) and 1(B) show a portion of a flash ADC 10, in two different modes. FIG. 1(A) shows the flash ADC 10 in auto-zero mode, while FIG. 1(B) shows the flash ADC 10 in sample conversion mode. A resistor ladder 12 is connected between plus and minus reference voltages VREF+ and VREF-, respectively, to form 2^n evenly spaced analog reference voltages, of which two are shown in FIG. 1(A). A charge corresponding to these 2^n reference voltages is stored on each of 2^n corresponding capacitors 18. The ADC 10 shown in FIGS. 1(A) and (B) has a resolution of n+1 bits.
An analog input voltage VIN is captured periodically by a sample and hold ("SH") circuit 14, and, as shown in FIG. 1(B), is compared to the 2^n reference voltages in a corresponding number of comparators arrayed along these reference voltages. The comparator function is provided in the ADC 10 of FIG. 1(B) by the combining of VIN and the reference voltages from the charge stored on the array of capacitors 18. When VIN overcomes the reference voltage on a particular capacitor in the array 18, the resulting positive voltage is amplified by the preamps P1 and P2, tripping an associated latch 16 and thus storing a data value of "1." Assigning a number to each such comparator, the number of the comparator in the array at which the analog input goes from being below to being above the reference voltage corresponds to the digital representation of the analog input.
As the speed of flash ADCs has grown, various problems have arisen which need to be solved. One problem is the error in flash ADCs from the mismatch in the comparators. This mismatch changes the analog value at which the output of a comparator changes from zero to one, thus degrading the accuracy of the ADC. A solution developed to correct this problem is to auto-zero the comparators in order to cancel the offset. Usually, this offset correction is stored as a voltage on a storage capacitor.
Since capacitors slowly leak charge, it is necessary to perform this auto-zero operation on the array of comparators periodically. In low speed flash ADCs, which have long clock cycles, this auto-zero function can be done on every clock cycle. However, the clock cycle of a high speed ADC is too short for the auto-zero function to finish. Therefore, in high speed ADCs the auto-zero function must be performed during idle periods.
However, for some applications, such as communications applications, the idle times are not available. A common technique used to overcome this problem is to auto-zero only one comparator at a time. An extra comparator is used temporarily, to take over the functionality of the comparator being auto-zeroed. See, for example, S. Tsukamoto et al., "A CMOS 6-b, 200 MSample/s, 3V Supply A/D Converter for a PRML Read Channel LSI," IEEE J. Solid-State Circuits, Vol. 31, No. 11, pp. 1831-1836. This technique is referred to herein as on-line auto-zero, since it allows the ADC to convert continuously without being taken off-line to perform its auto-zero function.
Another problem encountered in flash ADCs is the speed bottleneck in the SH circuit at the front end.
One technique used to reduce this bottleneck is to reduce the capacitive load on the SH circuit, thereby reducing the time needed to provide a reliable sample voltage. This is accomplished by eliminating the first stage preamps of half of the comparators, so that the SH load is reduced by a factor of two. The outputs of two adjacent first stage preamps are interpolated to provide inputs for the second stage preamps of the comparators that do not have first stage preamps.
This technique is used in the ADC shown in FIGS. 1(A) and (B), in which half of the first stage preamps have been eliminated. Note that the analog signal path in these figures may be differential even though it is shown as single-ended in the figures for clarity. Note also that some of the P2 preamps are interpolating preamps, and are identified by P2'. These P2' preamps interpolate between adjacent P1 preamp outputs.
As mentioned above, FIG. 1(A) shows the ADC 10 configuration during an auto-zero ("AZ") period. There are actually two phases of this auto-zero period. Since a second stage interpolating preamp P2' requires a zero input to perform its auto-zero, during the first phase of AZ the first stage preamps P1 have their reset switches turned on, i.e., are reset, so that their output is zero. Preamps P2 are auto-zeroed during this phase of AZ. For the interpolating P2's, this also cancels any offset due to differences in the P1 reset mode output voltages. During the second phase of AZ, the reset switches of preamps P1 are turned off, and the preamps P1 are auto-zeroed and their offsets are stored on the capacitors 18.
The coupling capacitors 18 that connect the SH to the preamps P1 thus store both the offset of P1 and the reference voltages generated from resistor ladder 12 for P1. The consequence of this is that in order to auto-zero a P2' preamp that interpolates between two P1 preamps, both P1 preamps must be reset during the first part of AZ. Since these P1s supply inputs not only for the P2 being auto-zeroed, but also for their own P2s and the P2s both above and below them, a total of five outputs are affected by the auto-zero of one P2.
This means that, if one comparator is to be auto-zeroed at a time in the array and its output replaced with the output of an extra comparator, a problem exists. A total of five main array outputs are affected by the auto-zero: the comparator being auto-zeroed, the two above it, and the two below it. This is shown in FIG. 2, which shows the progression of AZ through the array. Question marks identify outputs being affected by the auto-zeroing process. The black preamps are the ones being auto-zeroed, and the gray ones are those that are not being auto-zeroed but which have outputs affected by the auto-zeroing process currently being performed.
SUMMARY OF THE INVENTION
The present invention involves a circuit architecture which combines two common flash ADC techniques used in high speed flash ADCs, described above, and which solves some of the problems that arise from combining these two techniques. In accordance with the present invention, a method is provided for performing an auto-zero function in a flash analog to digital converter ("ADC"), the ADC including a reference voltage circuit providing a plurality of evenly spaced analog reference voltages, and a plurality of system voltage comparators for comparing an input voltage against the reference voltages and providing an indication of which reference voltage corresponds to the input voltage.
In the method the following steps are performed. A plurality of redundant voltage comparators is provided. A subset of the plurality of system voltage comparators is selected. Auto-zero is performed on the selected comparators, and the redundant comparators are used in the place of the selected comparators.
These and other features of the invention will be apparent to those skilled in the art from the following detailed description of the invention, taken together with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1(A) is a diagram of a portion of a prior art flash ADC in auto-zero mode;
FIG. 1(B) is a diagram of a portion of a prior art flash ADC in sample conversion mode;
FIG. 2(A) is a symbolic diagram of a portion of a flash ADC array, showing a first set of comparator replacement selections and comparators affected thereby;
FIG. 2(B) is a symbolic diagram of a portion of a flash ADC array, showing a second set of comparator replacement selections and comparators affected thereby;
FIG. 3 is a waveform diagram showing various signals used in a flash ADC;
FIG. 4 is a flow diagram showing a sequence implemented in a state machine of a preferred embodiment of the present invention;
FIG. 5 is a logic diagram showing a section of a shift register portion of a control structure for a preferred embodiment of the present invention;
FIG. 6 is a diagram which shows a pertinent portion of a comparator from the main array;
FIG. 7(A) is a high level block diagram showing a main comparator array in direct mode;
FIG. 7(B) is a high level block diagram showing the main comparator array of FIG. 7(A) in auto-zero mode;
FIG. 8(A) is a high level block diagram showing a main comparator array, including two digital adders, in direct mode;
FIG. 8(B) is a high level block diagram showing the main comparator array of FIG. 8(A) in auto-zero mode;
FIG. 9 is a high level block diagram of an array, similar to that of FIG. 8(A), with additional representations to aid in understanding the operation of the array.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The preferred embodiments of the present invention comprise a method for performing auto-zeroing in a flash ADC. In performing auto-zero in accordance with a first preferred embodiment, wherein one P1 stage comparator is auto-zeroed at a time, a group of five comparators in the main array is removed from normal operation at a time. For example, referring again to FIG. 2(A), the P2 comparators 50, 52, 54, 59, 60, 56 and 58 are all removed from normal operation during an auto-zero cycle, even though only preamps 52, 54, 59, 60 and 56 are being auto-zeroed during this cycle. This is done because of the ambiguity described above concerning the output of preamps 50 and 58.
Before this occurs, however, redundant comparators replace the group of comparators being temporarily removed. This is accomplished by performing the following steps:
1. Assume comparators k through k+4 are to be auto-zeroed. The first step is to place the corresponding reference voltages on an analog multiplexed bus, referred to herein as the Rbus, that provides the redundant comparators with reference voltages.
2. The redundant comparators are placed in an auto-zero mode, so that their offsets are cancelled, and proper reference voltages are stored on the associated capacitors.
3. The redundant comparators are switched out of auto-zero mode and into regular conversion mode.
Barring any mismatches, the redundant comparators are now performing as an exact replica of comparators k through k+4.
4. Comparators k through k+4 are placed in auto-zero mode.
5. Comparators k through k+4 are returned to regular operation.
6. The auto-zero cycle for comparators k+2 through k+6 begins as described in step 1.
Pertinent waveforms appearing in an auto-zero cycle are shown in the waveform diagram of FIG. 3. Those waveforms are: system clock CLK, control signal ADV_N, control signal DIR_SHN, extra comparator auto-zero command AZX and main array comparator auto-zero command AZ. Note that a waveform designator ending in a capital N indicates that the associated signal has negative logic, i.e., is asserted when the level is low. For example, DIR_SHN is such a signal. Also note that a waveform designator ending in a capital X indicates that the signal is a command for the extra, i.e., redundant, comparators, as contrasted with the main comparator array. For example, AZX is such a signal. ADV_N is used to increment K. When the signal DIR_SHN is high, it indicates that the extra comparators are being auto-zeroed. The function of these signals is described in detail below in conjunction with the description of FIG. 4.
The auto-zero controller according to a preferred embodiment of the present invention comprises two major parts, a state machine and a shift register that is distributed into the comparator array, i.e., one flip-flop for every 2-bit comparator slice.
The state machine implements the sequence illustrated in FIG. 4. The state machine can be conceptually described as two nested counters. The six-state inner counter 20 will be described first, with reference to both FIG. 3 and FIG. 4.
In State 0, the DIR_SHN signal is set high and the ADV_N signal is asserted, i.e., driven low, thereby incrementing K. This incrementing of K is indicated in the figure by the advancing of the outer counter 22 that marks the subset of five comparators that must be auto-zeroed. The DIR_SHN signal being high places the comparator array into direct mode, in which all comparators in the main array are used, and the extra comparators are not used.
In State 1, the AZX signal is asserted. This initiates the auto-zero cycle for the redundant comparators.
In State 2, the AZX signal is de-asserted. This state must be included to account for the latency between the time the redundant comparators leave the auto-zero state and the time the latched outputs of the redundant comparators become valid.
In State 3, the DIR_SHN signal is brought low. This indicates that the outputs of five comparators in the main array will not be used, and the outputs of the redundant comparators will be used instead.
In State 4, the AZ signal is asserted. This initiates the auto-zero cycle for the selected five comparators in the main array.
In State 5, the AZ signal is de-asserted. This state must be included to account for the latency between the time the five original comparators leave the auto-zero state and the time the latched outputs of these comparators become valid.
Note that States 1 and 4 may be of variable duration of 1, 2, 4 or 8 cycles. The reason for this is that the time needed for an auto-zero is fixed, but the clock period of the ADC operation is not. Therefore, the number of clock cycles required to complete an entire auto-zero operation must be increased in proportion to the operating frequency of the ADC.
A section of the shift register portion of the control structure is shown in FIG. 5.
One such section is provided for every two comparators in the comparator array. The section has input lines for the signals AZ, AZX, RST (reset), SHR_IN, CLK and ADV_N. The section includes two 3-input AND gates 24, 26, each of which has one inverting input; a two-input multiplexer ("MUX") 28; and a DQ flip-flop 30. The AZ signal line is connected to the second non-inverting input of AND gate 24. The AZX signal line is connected to a first non-inverting input of AND gate 26. The RST signal line is connected to the reset input of flip-flop 30. The SHR_IN signal line is connected to a first input of MUX 28, to a second non-inverting input of AND gate 26, and to a first non-inverting input of AND gate 24. The CLK signal line is connected to the clock input of flip-flop 30. The ADV_N signal line is connected to the select input of MUX 28. The output of MUX 28 is connected to the D input of flip-flop 30, while the Q output of flip-flop 30 is connected to a SHR_OUT output signal line, to the inverting input of AND gate 26, to the inverting input of AND gate 24, and to the second input of MUX 28. The SHR_OUT is connected to the SHR_IN of the comparator above this one. The SHR_IN of the first comparator is connected to a logical one.
The operation of the shift register can be described as follows: Initially all flip-flops are reset to zero. The first flip-flop 30 always has a one at its SHR_IN pin.
Conceptually, a token signal is generated whenever a flip-flop detects a high signal at its input, SHR_IN, and a low signal at its output, SHR_OUT. The token signal indicates that the current comparator as well as four more comparators immediately above it in the array have been selected for an auto-zero operation.
The TAZX and TAZ signals (the outputs of AND gates 26 and 24, respectively) are used by circuitry to control the auto-zero function for the main array comparators. This is shown in FIG. 6, which shows a pertinent portion of a comparator from the main array. Shown are a storage capacitor 18 and a P1 preamp 60 connected to one port of capacitor 18, and also a P2 preamp 62 and an interpolating P2' preamp 64 receiving the output of the P1 preamp 60. A line from the Rbus, described above, is connected to one side of a first switch 66. The other side of switch 66 is connected to a reference voltage tap from the resistor ladder and to one side of a second switch 68. The other side of switch 68 is connected to the other port of capacitor 18 and to one side of a third switch 70. The other side of switch 70 is connected to a sample and hold output. A fourth switch 72 is connected between the input and output of preamp 60. The output of preamp 60 is also connected to the input of P2 preamp 62 and to one input of P2' preamp 64. The control signal TAZX closes switch 66. The control signal TAZ closes switch 68 and switch 72. It also controls P2 preamp 62 and P2' preamp 64 to perform auto-zero. The complement of TAZ closes switch 70.
As can be seen, when TAZ is asserted, the reference voltage from the resistor ladder is connected to the main array, and switch 72 shorts the input and output of preamp 60, thus performing the P1's auto-zero. In addition, the P2 preamp 62 and P2' preamp 64 are controlled to perform auto-zero.
When TAZ is not asserted, switch 70 closes, allowing the P1 preamp, the P2 preamps and the latches to perform their comparator function.
On the other hand, when TAZX is asserted, the reference voltage from the resistor ladder is connected to the redundant comparators, by the action of switch 66, so that the redundant comparators can be auto-zeroed.
Finally, returning now to FIG. 5, after the TAZX and TAZ signals have performed their functions, ADV_N is asserted low. This allows a logical high to propagate higher in the stack of the shift registers so that the next group of comparators will be selected for auto-zero.
When the top of the shift register is reached, the RST signal is asserted. This resets all flip-flops so that the auto-zero cycle starts anew from the bottom of the comparator array.
Each auto-zero cycle consists of two major parts:
1. Applying proper reference voltages to the extra comparators.
2a. Training the extra comparators to replace a group in the main comparator array. This is the direct mode.
2b. Auto-zeroing a block of comparators in the main array while using the extra comparators in its place. This is the auto-zero mode.
In the direct mode, the outputs of the comparator array are applied to the ROM encoder inputs to select the proper digital output. When done in auto-zero mode, this requires some modification to the circuitry thus far described. Conceptually, the necessary change can be implemented as shown in FIGS. 7(A) and 7(B).
FIG. 7(A) is a high level block diagram showing a main comparator array 32 in direct mode, receiving VREF and VIN, and providing a thermometer code to a ROM 34 for decoding to a binary digital value. Block 36 represents the redundant comparators. Shading signifies that the comparators are off-line, being auto-zeroed. FIG. 7(B) is a high level block diagram showing the same main comparator array 32 in auto-zero mode. As can be seen, in auto-zero mode a selected block 38 of comparators in the main array is off-line, being auto-zeroed, while the block 36 of redundant comparators is connected in the place of the comparators being auto-zeroed.
This implementation, while conceptually simple, may not provide the performance required in some demanding applications, due to the following. First, the outputs of the redundant comparators must be distributed to each comparator position. The capacitive loading on the bus required to distribute the redundant comparators' results is heavy and is proportional to the number of comparators. Therefore the architecture does not scale well with the number of bits of resolution, since the number of comparators, and therefore the capacitive loading, doubles with every extra bit of resolution. Second, because of the interleaving of the preamplifiers in the comparator array, auto-zero occurs in groups of five comparators, but with an overlap of three. This implies that in the worst case, the output of a comparator may come from one of four sources (the comparator in the main array, or any one of three extra comparators). A four-way multiplexing produces an additional speed penalty.
A further preferred embodiment of the present invention, described below and shown in FIGS. 8(A) and 8(B), does not have the speed limitations described above. FIGS. 8(A) and 8(B) are similar to FIGS. 7(A) and 7(B). However, two digital adders 46, 48 are provided, as described below.
In the direct mode, shown in FIG. 8(A), the binary digital output is formed as an output of the ROM encoder 34, as is normally done in a flash ADC without auto-zero.
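The direct-mode encoding just described, together with the auto-zero-mode combination detailed in the next paragraphs, can be modeled with a short Python sketch (a behavioral illustration with assumed names, not the disclosed circuit):

```python
def thermometer_to_binary(bits):
    # Models the ROM encoder: the output code is the number of tripped comparators.
    return sum(bits)

def adc_output(main_bits, extra_bits, k, auto_zero_mode):
    # main_bits: main-array comparator outputs, lowest reference first.
    # extra_bits: outputs of the five redundant (extra) comparators.
    # k: index of the lowest comparator in the group being auto-zeroed.
    if not auto_zero_mode:
        return thermometer_to_binary(main_bits)  # direct mode
    # Auto-zero mode: discard the five invalid outputs, shift the comparators
    # above the group down by five positions, then add the 3-bit auxiliary
    # word (0 to 5) formed from the extra comparators.
    reduced = main_bits[:k] + main_bits[k + 5:]
    return thermometer_to_binary(reduced) + sum(extra_bits)
```

The three input-voltage cases A, B and C discussed below all reduce to this sum of the shifted main-array code and the extra-comparator count.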
In the auto-zero mode, however, the comparator outputs are split into three categories: the group 38 of five comparators that are being auto-zeroed, the group below 42 and the group above 44. The outputs of the auto-zero group 38 are invalid and are simply discarded. The outputs of the comparators below 42 are sent to the corresponding select lines of the ROM encoder. The outputs of the comparators above the auto-zero group 44 are shifted down by five comparator positions, so that they are, in effect, concatenated with the outputs of the other system comparators not being auto-zeroed. The outputs of the extra comparators are added together as binary values of either 0 or 1 in adder 46 to form a 3-bit auxiliary word, a binary value which may take values from 0 to 5. Finally, the output of the ROM encoder 34 and the auxiliary word are added together in adder 48 to form the final output.
The input voltage VIN may fall into one of three voltage ranges: it may be below, above, or somewhere within the voltage range of the comparators that are undergoing auto-zero. FIG. 9 is a diagram similar to that of FIG. 8(A), with certain additional representations that help show that the correct digital output will be produced in all three cases.
In case A, the input voltage A is below the reference voltage Vbot corresponding to the block of comparators being auto-zeroed. In this case, the output of the extra comparators is zero, the output of the ROM encoder is A, and thus SUM is A, which is correct.
In case B, the input voltage B is within the range of reference voltages corresponding to the block of comparators being auto-zeroed. In this case, the output of the ROM encoder is the digital code corresponding to Vbot, the output of the extra comparators is B-Vbot, and thus SUM is Vbot+B-Vbot=B, which is correct.
In case C, the input voltage C is above the reference voltage corresponding to the block of comparators being auto-zeroed. In this case, the output of the extra comparators is 5, the output of the ROM encoder is C-5, and thus SUM is C-5+5=C, which is correct.
This technique employs only 2-way multiplexers at every ROM encoder select line (i.e., either the output of a comparator or the output of the comparator five positions above it must be selected). This keeps the propagation delay of the logic low and allows for fast operation.
Furthermore, the technique may be scaled to any bit resolution without incurring any penalty due to the doubling of the number of comparators.
The layout is modular and regular, making it suitable for VLSI implementation.
Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.
Systems, apparatuses and/or methods may provide for generating a packing order of items within a container that consolidates the items into a reduced space. Items may be scanned with a three-dimensional (3D) imager, and models may be generated of the items based on the data from the 3D imager. The items may be located within minimal-volume enclosing bounding boxes, which may be analyzed to determine whether they may be merged together in one of their bounding boxes, or into a new bounding box that is spatially advantageous in terms of packing. If a combination of items is realizable and determined to take up less space in a bounding box than the bounding boxes of the items considered separately, then they may be merged into a single bounding box. Thus, a spatially efficient packing sequence for a plurality of real objects may be generated to maximize packing efficiency.
CLAIMS
We claim:
1. A system to determine efficient packing for objects comprising:at least one sensor to provide sensor data to an object modeler that is to generate a digital three-dimensional (3D) model of each of a plurality of real objects; an object boxer communicatively coupled with the at least one sensor to generate a bounding box for each of the plurality of real objects; and a packing plan generator communicatively coupled with the object boxer to determine a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects.2. The system of claim 1, further including an object voxelizer to approximate a size and a shape for each of the plurality of real objects in voxels.3. The system of claim 1, wherein one or more of the plurality of real objects is an item, and wherein the apparatus further includes:an item voxelizer to approximate a size and a shape for each item; an item boxer to generate a minimal enclosing bounding box for each item; and an item combiner to:determine if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each minimal enclosing bounding box for each of the two or more items; and merge the two or more items together in the bounding box if the two or more items are mergeable. 4. The system of claim 3, wherein the item combiner is to:compute a free space (FS) value of each item with respect to each minimal enclosing bounding box for each item; and merge the two or more items into one bounding box if a merger criterion is met. 5. The system of claim 1, wherein the at least one sensor includes a 3D sensor, and wherein the system further includes an object identifier to identify each of the plurality of real objects and acquire non-3D image data corresponding to each of the plurality of the real objects.6. The system of claim 1, further including a hierarchy generator to generate a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects.7. The system of claim 3, wherein one of the plurality of real objects is a storage space, and wherein the system further includes a storage space boxer to generate a maximum enclosed bounded box for the storage space.8. An apparatus to determine an efficient packing for objects comprising: means for generating a three-dimensional (3D) digital model of a plurality of real objects based on sensor data for each of the plurality of real objects; means for generating a bounding box for each of the plurality of real objects; and means for determining a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects.9. The apparatus of claim 8, further including means for approximating a size and a shape for each of the plurality of real objects in voxels.10. 
The apparatus of any one of claims 8-9, wherein one or more of the plurality of real objects is an item, and wherein the apparatus further includes:means for approximating a size and a shape for each item; means for generating a minimal enclosing bounding box for each item; and item combination means for: determining if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items; and merging the two or more items together in the bounding box if it is determined that the two or more items are mergeable.11. The apparatus of claim 10, wherein the item combination means is to: compute a free space value (FS) of each item with respect to each minimal enclosing bounding box for each item; and merge the two or more items into one bounding box if it is determined that a merger criterion is met.12. The apparatus of any one of claims 8-9, further including means for identifying each of the plurality of real objects and acquire non-3D image data corresponding to each of the plurality of the real objects.13. The apparatus of any one of claims 8-9, further including means for generating a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects.14. The apparatus of any one of claims 8-9, wherein one or more of the plurality of real objects is a storage space, the apparatus further including means for generating a maximum enclosed bounded box for the storage space.15. A method to determine an efficient packing for objects comprising: generating, by at least one sensor, a three-dimensional (3D) digital model of a plurality of real objects based on sensor data for each of the plurality of real objects; generating, by an object boxer, a bounding box for each of the plurality of real objects; and determining, by a packing plan generator, a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects.16. The method of claim 15, further including approximating a size and a shape for each of the plurality of real objects in voxels.17. The method of claim 15, wherein one or more of the plurality of real objects is an item, and wherein the method further includes:approximating a size and a shape for each item in voxels; generating a minimal enclosing bounding box for each item; determining if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items; and merging the two or more items together in the bounding box if it is determined that the two or more items are mergeable.18. The method of claim 17, further including:computing a free space value (FS) of each item with respect to each minimal enclosing bounding box for each item; and merging the two or more items into one bounding box if it is determined that a merger criterion is met. 19. The method of claim 15, further including identifying each of the plurality of real objects and acquiring non-3D image data corresponding to each of the plurality of the real objects.20. The method of claim 15, further including generating a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects.21. 
The method of claim 17, wherein one of the plurality of real objects is a storage space, the method further including generating a maximum enclosed bounded box for the storage space.22. At least one computer readable storage medium comprising a set of instructions, which when executed by an apparatus, cause the apparatus to: generate a three-dimensional (3D) digital model of a plurality of real objects based on sensor data for each of the plurality of real objects; generate a bounding box for each of the plurality of real objects; determine a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects; and approximate a size and a shape for each of the plurality of real objects in voxels.23. The at least one computer readable storage medium of claim 22, wherein one or more of the plurality of real objects is an item, and wherein the instructions, when executed, cause an apparatus to:approximate a size and a shape for each item in voxels; generate a minimal enclosing bounding box for each item; determine if any two or more items of the plurality of items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items; and merge the two or more items together in the bounding box if it is determined that the two or more items are mergeable.24. The at least one computer readable storage medium of claim 23, wherein the instructions, when executed, cause an apparatus to:identify each of the plurality of real objects and acquire non-3D image data corresponding to each of the plurality of the real objects; compute a free space value (FS) of each item with respect to each minimal enclosing bounding box for each item; and merge the two or more items into one bounding box if it is determined that a merger criterion is met.25. The at least one computer readable storage medium of claim 22, wherein the instructions, when executed, cause an apparatus to:identify each of the plurality of real objects; acquire non-3D image data corresponding to each of the plurality of the real objects; and generate a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects, wherein one of the plurality of real objects is a storage space, and wherein the instructions, when executed, cause an apparatus to generate a maximum enclosed bounded box for the storage space.
EFFICIENT PACKING OF OBJECTS
CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims the benefit of priority to U.S. Non-Provisional Patent Application No. 14/954,959 filed on November 30, 2015.
TECHNICAL FIELD
Embodiments generally relate to packing objects in confined spaces. More particularly, embodiments relate to efficient packing of items in a storage space and/or in a storage container.
BACKGROUND
An ability to efficiently pack items together in a confined space may be of value in commercial and/or non-commercial settings. The items to be packed may be of regular shape (e.g., items that are packaged in standardized boxes such that the items are the boxes themselves) or they may be of irregular and/or varied shape. In the latter case, a packing sequence may fall to a worker that is tasked with "eyeballing" various items to be packed to provide a realizable packing sequence on-the-fly. Such an approach may be inefficient and costly in terms of labor inputs and time, and may provide irregular, uncertain, inefficient, and/or inconsistent results.
The general problem of "bin packing" has been studied from a mathematical point of view. However, a mathematical solution is not necessarily a practical, implementable solution, since some mathematical components may be intractable in terms of available hardware resources. The intractability may be especially true given real-world demands for speed in determining a packing order.
BRIEF DESCRIPTION OF THE DRAWINGS
The various advantages of the embodiments will become apparent to one skilled in the art by reading the following specification and appended claims, and by referencing the following drawings, in which:
FIG. 1 is a block diagram of an example of a system to pack objects according to an embodiment;
FIG. 2 is a flowchart of an example of a method of packing objects according to an embodiment;
FIG. 3 is a block diagram of an example of generating a representation of a container according to an embodiment;
FIG. 4 is a flowchart of an example of a method of generating a representation of an item according to an embodiment;
FIG. 5 is a flowchart of an example of a method of merging objects according to an embodiment;
FIG. 6 is a block diagram of an example of an item combiner according to an embodiment;
FIG. 7A is an example of an item according to an embodiment;
FIG. 7B is an example of the item in FIG. 7A shown in a voxelized form according to an embodiment;
FIG. 8 is an example of the voxelized item in FIG. 7B located within a bounding box according to an embodiment;
FIG. 9 is an example of the voxelized item and the bounding box in FIG. 8 aligned with respect to a pair of reference axes according to an embodiment;
FIG. 10 is an example of a re-voxelized item in a bounding box according to an embodiment;
FIGs. 11A-11C are an example of a merger of two items according to an embodiment;
FIGs. 12A-12B are examples of two identical items according to an embodiment;
FIGs. 13A-13B are examples of the items in FIGs. 12A-12B shown in voxelized form according to an embodiment;
FIGs. 14A-14B are an example of the items in FIGS. 13A-13B located within respective bounding boxes according to an embodiment;
FIGs. 15A-15D are examples of an item shown in various orientations according to an embodiment;
FIGs. 16A-16E are examples of various possible combinations of items according to an embodiment;
FIG. 17 is an example of a combination of items in a bounding box according to an embodiment;
FIG. 18 is an example of a combination of items in a bounding box according to an embodiment;
FIG. 19 is a block diagram of an example of a processor according to an embodiment; and
FIG. 20 is a block diagram of an example of a computing system according to an embodiment.

DESCRIPTION OF EMBODIMENTS

FIG. 1 shows a block diagram of an embodiment of a system 10 to facilitate a spatially efficient packing of objects. Generally, the term "object" may refer to individual items of commerce (e.g., books, shoes, toothbrushes, furniture, food, etc.) as well as to storage spaces into which it is desired to efficiently pack items. The storage space may be a mathematically defined, bounded region of space into which items are to be placed, or it may be a physical container such as a bin, a truck, a room, a warehouse, a refrigeration unit, a rail car, etc. In one example, items may be packed into a container in a manner that may consolidate the items together to reduce the total space required to pack the items. In other words, a method and/or an apparatus may be implemented to maximize use of storage space that is available to pack a collection of items.

A three-dimensional (3D) camera 12 may be used to obtain image data of a container/storage space 16 and one or more items 18. In some embodiments, the image data may be obtained or derived from data provided by a two-dimensional (2D) camera or sensor, a non-3D imager, and/or a 3D imager, alone or in combination. While embodiments below reference the container 16, it should be understood that the storage space 16 may replace the container 16 in the following examples.

In the illustrated example, the items 18 include a sofa 18a, a cylinder 18b, a ball 18c, and a shoe 18d. In addition, the 3D camera may contain an imager to obtain depth data of a subject such as, for example, the container 16 and/or the items 18. The 3D camera may be, for example, a dedicated, stand-alone unit, or may be incorporated into another device such as a smart phone, tablet computer, notebook computer, convertible tablet, personal digital assistant/PDA, mobile Internet device/MID, wearable computer, desktop computer, camcorder, video recorder, media player, smart TV, gaming console, etc., or any combination thereof. For example, a platform that includes 3D camera technology to be implemented in embodiments may include Intel® RealSense® (registered trademarks of Intel Corporation).

In the illustrated example, the 3D camera sends captured image data to a packing analyzer 20 that analyzes the image data. The illustrated packing analyzer 20 includes an object modeler 22, which further includes a container modeler 24 and an item modeler 26. The modelers 24, 26 may generate 3D models of the container 16 or items 18, respectively. In this regard, various modeling protocols may be utilized (e.g., a point cloud model, etc.) to generate the 3D models. The illustrated packing analyzer 20 also includes an object voxelizer 28, which further includes a container voxelizer 30 and an item voxelizer 32. The voxelizers 30, 32 may generate a voxelized form of a respective model, which may reduce computation time in manipulating depictions of objects (e.g., containers, items, etc.).

The approximate size and shape of objects may be represented in voxels. A voxel may refer to a unit of 3D space that represents data points of a 3D point cloud or other data that lies within the 3D cloud. The voxel may provide a relatively more computationally efficient manner of representing that data.
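By way of illustration only, the voxelization of a point cloud may be sketched as follows; this is a minimal example, not part of the embodiments, that assumes the cloud is given as an N x 3 NumPy array and uses the hypothetical helper name voxelize:

    import numpy as np

    def voxelize(points, voxel_size):
        # Map each 3D point to the integer index of the voxel containing it.
        indices = np.floor(points / voxel_size).astype(int)
        # Many points collapse into the same voxel, which is the source of
        # the data reduction described above.
        return {tuple(idx) for idx in indices}

    # Example: 100,000 scattered points reduce to a far smaller voxel set.
    cloud = np.random.rand(100000, 3)            # stand-in for 3D camera data
    occupied = voxelize(cloud, voxel_size=0.05)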
A size of the voxel may be chosen, wherein the size may represent a tradeoff between accuracy and runtime performance and/or may depend on a specific application/use of the voxel. Further information regarding voxelization may be found in a paper to Kaufman et al. entitled "Volume Graphics," IEEE Computer, July 1993, 51-65.

The packing analyzer 20 also includes an object boxer 34, which further includes a container boxer 36 and an item boxer 38. As discussed below, the boxers 36, 38 may generate maximal (volume) enclosed bounded boxes or minimal (volume) enclosing bounding boxes with respect to voxelized depictions of containers and items, respectively.

In some embodiments, the container modeler 24 and the item modeler 26 may be unitary with one another, or they may be separate. For example, the modelers 24, 26 may be unitary or separate circuits, software components, and combinations thereof. Similarly, the container voxelizer 30 and the item voxelizer 32 may be unitary with one another, or they may be separate. For example, the voxelizers 30, 32 may be unitary or separate circuits, software components, and combinations thereof. Also, the container boxer 36 and the item boxer 38 may be unitary with one another, or they may be separate. For example, the boxers 36, 38 may be unitary or they may be separate circuits, software components, or combinations thereof.

The packing analyzer 20 further includes an object identifier 39 to identify objects using non-image data such as data provided by a manufacturer of an object, object blueprints, data provided by inertial measurement devices, barcode scanners, and/or other model data that may be available through the Internet cloud. The packing analyzer 20 also includes an item combiner 40 to combine objects, an image analyzer 42 to analyze images of objects, and constraints 44 that may be applied. As discussed below, the object identifier 39, the item combiner 40, the image analyzer 42, and the constraints 44 may be implemented to further facilitate a spatially efficient packing of objects. Moreover, the packing analyzer 20 includes a packing plan generator 46 that may use the analysis provided by the packing analyzer 20 to generate a packing plan. In one example, a machine, a robot, and/or a human worker may use the packing plan as a guide to pack the items 18 into the container 16.

FIG. 2 shows a flowchart of an example of a method 50 of packing items into a storage space, such as the container 16 (FIG. 1), discussed above. The method 50 may be implemented in one or more modules as a set of logic instructions stored in a machine- or computer-readable storage medium such as random access memory (RAM), read only memory (ROM), programmable ROM (PROM), flash memory, etc., as configurable logic such as, for example, programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), and complex programmable logic devices (CPLDs), as fixed-functionality logic hardware using circuit technology such as, for example, application specific integrated circuit (ASIC), complementary metal oxide semiconductor (CMOS), or transistor-transistor logic (TTL) technology, or any combination thereof. For example, computer program code to carry out operations shown in the method 50 may be written in any combination of one or more programming languages, including an object oriented programming language such as C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
Moreover, the method 50 may be implemented using any of the herein mentioned circuit technologies.

Illustrated processing block 52 takes as input 3D data of a container and an item, such as the container 16 and the items 18 (FIG. 1), discussed above. The 3D data may be raw 3D data provided by a depth sensor in a 3D camera, or it may be processed 3D image data. The 3D data may be forwarded along two generally parallel branches in the method 50, which may be executed in parallel or sequentially.

A first branch begins at illustrated processing block 54, where the 3D data may be used by a container modeler, such as the container modeler 24 (FIG. 1), discussed above, to construct a model of the container. The model may be in the form of a point cloud depicting the interior surface of the container, or the model may be of another form suitable for constructing a model of the interior of the container. Additional sources of information that may be useful in the construction of the model at block 54 may be provided by processing block 56. The sources of information may include data provided by a manufacturer of the container, barcode scanner data, data derived from handmade mappings of the container, container blueprints, model data that may be available through the Internet cloud, and other sources. Data may additionally be provided by inertial sensors of an inertial measurement unit (IMU) to determine a local direction of gravity, which may be of use in differentiating between a container ceiling and a container floor.

The sources of information may be combined at block 54 with the data from block 52 to generate a model of the interior of the container. The model may reflect a size and a shape of interior container walls and space, as well as of other container elements that may bear on its use as storage. For example, the other container elements may include shelves, walls, immovable architectural features, and so forth. In some embodiments, an object recognition algorithm may be used to recognize the other container elements including, for example, shelves, closets, or other features.

Processing block 58 generates volumetric and box representations of the container based on the model generated in block 54. In one example illustrated in FIG. 3, a method 86 to generate volumetric and box representations may be implemented as a module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality hardware logic using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

In the illustrated method 86, a 3D frame of the container may be provided at processing block 88. Processing block 90 uses the data provided by the frame to generate a 3D point cloud model of the container. Often, the number of data points presented by the 3D point cloud model may tax available computational resources. Thus, it may be useful to reduce a size of a data set used in the model of the container. Notably, processing block 92 reduces the size of the data set used to model the container by voxelizing the 3D point cloud. In one example, block 92 may be implemented by the container voxelizer 30 (FIG. 1), discussed above. As noted above, voxels may provide for a relatively more computationally efficient depiction of data.
Thus, instead of being presented in terms of large numbers of individual points corresponding, for example, to the volume or the interior surface of the container, the model may be built up of voxels, each voxel representing groups of such points. A size of a voxel may be determined by available computational resources and/or by the complexity of the container being modeled. The task of performing further computations on the model may become computationally simpler, at the potential expense of some loss of fine granularity, when voxels are used rather than individual points. In some settings, such as where the container may be of simple form and the computational resources on hand are extensive, an embodiment may omit, bypass, and/or ignore voxelizing the container space.

Referring back to FIG. 2, block 58 also generates a box representation of the container based on the maximal enclosed bounded box ("bounded box") that can be accommodated in the container or in a volumetric subset of the container. As illustrated in FIG. 3, the maximal enclosed bounded box may be generated at processing block 94. In one example where the container has a box-like interior, the bounded box may simply be the largest box that may be placed inside the container. In addition, processing block 96 may generate a hierarchical representation of the interior of the container, indicating the sizes and shapes of regions that are occupied and of regions that are available to be occupied by an item.

Referring back to FIG. 2, the volumetric representation and box representations of the container are passed to a packing algorithm at processing block 60.

The second branch of the method 50 begins at illustrated processing block 72, wherein the data of the items may be used to construct models of each of the items. In one example, the item modeler 26 (FIG. 1), discussed above, may be used to construct respective models of each of the items. The models may be in the form of a point cloud depicting the exterior surface of each of the items, or the models may be of another form suitable for constructing a model of exterior surfaces, accessible interior surfaces, and/or interior space of the items.

Additional sources of information that may be useful in the construction of the model may be provided by processing block 74. The sources of information may include data provided by a manufacturer of the items, data derived from handmade mappings of the items, barcode scanner data, item blueprints, data provided by an IMU, and/or any other models that may be available of the items (e.g., on storage, available over the Internet, etc.). The sources of information may be combined at block 72 with the data from block 52 to generate a model of the exterior of each item. The model may reflect a size and a shape of exterior surfaces of a given item, and/or any interior surfaces (e.g., concave surfaces) that may be accessible from outside of the item and into which other items may be placed.

Processing block 76 generates volumetric and box representations of each item based on the model generated at block 72.
In one example illustrated in FIG. 4, a method 98 to generate volumetric and box representations may be implemented as a module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality hardware logic using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

In the illustrated method 98, a 3D frame of the item may be provided at processing block 100. Processing block 102 uses the data provided by the frame to generate a 3D point cloud model of the item. Often, the number of data points presented by the 3D point cloud model may tax available computational resources. Thus, it may be useful to reduce a size of a data set used in the model of the item. Notably, processing block 104 reduces the size of the data set used to model the item by voxelizing the 3D point cloud. In one example, block 104 may be implemented by the item voxelizer 32 (FIG. 1), discussed above. Thus, instead of being presented in terms of large numbers of individual points corresponding, for example, to the outer surface, the volume, or the interior surface of the item, the model may be built up of voxels representing groups of such exteriorly accessible points (e.g., mainly of the surface of the item).

As noted above, a size of a voxel may be determined by the available computational resources and/or the complexity of the item being modeled. The task of performing further computations on the model may become computationally simpler, at the potential expense of some loss of fine granularity, when voxels are used rather than individual points. In some settings, such as where the item may be of simple form and the computational resources on hand are extensive, an embodiment may omit, bypass, and/or ignore voxelizing the item.

Processing block 106 generates a minimal enclosing bounding box (MEBB) of the voxelized item. The MEBB bounds the item subject to a constraint, such as a constraint requiring that the volume of the MEBB be minimized or as close to a minimum volume value as the computational resources on hand permit. Thus, the MEBB may be considered a smallest box into which one may pack an item without disturbing (e.g., crushing or bending) the item. Further information concerning the determination and use of bounding boxes, including MEBBs, is included in a paper to Barequet et al. entitled "Efficiently Approximating the Minimum-Volume Bounding Box of a Point Set in Three Dimensions," Journal of Algorithms, Vol. 38, Issue 1, January 2001, 91-109.

The voxels generated at block 104 may not, in general, be aligned with respect to sides of the MEBB generated at block 106, which can make subsequent calculations (e.g., calculations involving a spatial manipulation of an item, etc.) more difficult. Processing block 107 re-voxelizes the item to align the sides of the voxels with the sides of the MEBB, which may simplify subsequent calculations. In one example, the alignment may be implemented with respect to a Cartesian coordinate system (e.g., X, Y, Z coordinates) natural to the MEBB.

Processing block 108 may generate a hierarchical representation of the item. For example, block 108 may indicate and quantify areas in space that are occupied by the item. In addition, block 108 may indicate and quantify accessible interior (e.g., concave) regions that may be available to be occupied by another item.
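One plausible realization of such a hierarchical representation is an octree-style recursive subdivision. The sketch below is illustrative only; it assumes the item is given as a cubic Boolean occupancy array whose side is a power of two, and the name summarize is hypothetical:

    import numpy as np

    def summarize(grid):
        # Label a cubic occupancy grid "full", "empty", or recurse into its
        # eight octants, yielding an octree-like hierarchy of the item.
        if grid.all():
            return "full"
        if not grid.any():
            return "empty"
        h = grid.shape[0] // 2
        return {(i, j, k): summarize(grid[i*h:(i+1)*h, j*h:(j+1)*h, k*h:(k+1)*h])
                for i in (0, 1) for j in (0, 1) for k in (0, 1)}

    # Example: a hollow 8x8x8 box; the empty interior octants that could
    # receive another item appear directly in the hierarchy.
    box = np.ones((8, 8, 8), dtype=bool)
    box[2:6, 2:6, 2:6] = False
    tree = summarize(box)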
In one embodiment of the illustrated method 98, shown in broken line in FIG. 4, items may be tested to see if they are too large to fit into the container. For example, when an MEBB has been generated at block 106, processing block 110 determines if the item is too large to be fitted into the container by comparing the size and the shape of the MEBB of the item to the size and the shape of the bounded box of the container. Additionally, an item that does not fit into the container in one orientation may fit in another. To test for a possible fit, the item may be rotated 90 degrees, 180 degrees, or 270 degrees about any of its principal axes to determine if a fit may occur. If the item is too large, the item may be ignored. In one example, an item that is too large may be removed from consideration for placement into the container at processing block 112. If, however, the item is not too large, control passes back to block 107 for re-voxelizing the item and the method 98 may proceed as before.

Returning to FIG. 2, the output of block 76 is a representation of the item, contained within an MEBB. It is noted that the MEBB is a mathematical construct and not a physical box, although the items themselves may be physical boxes (e.g., shoe boxes that hold shoes).

Processing block 78 determines whether it is desired to consolidate the items, for example by merging the items together. In some circumstances, consolidation may not be required or possible. For example, when the items themselves are pre-packaged in sealed boxes (e.g., shoes being shipped in shoe boxes), it may be acceptable to pass along the MEBBs of the individual sealed boxes to the packing algorithm at processing block 60.

On the other hand, if processing block 78 determines that further consolidation of the items is desired and/or appropriate, then processing block 80 attempts to combine individual items together by testing whether the items may be merged into one MEBB. The mathematics utilized in one aspect of embodiments is discussed below.

For a given object (e.g., a container, an item, etc.), a Free Space (FS) function may refer to a measure of the degree to which a given item occupies its associated MEBB. In addition, a merger criterion may be defined in terms of the mathematical relationship among values of the FS function for individual items/bounding boxes relative to an FS value computed for a union of the items into a bounding box. In some embodiments, the bounding box may be the MEBB of one of the items to start, and in other embodiments the bounding box may be a newly computed MEBB for the two items combined together.

A number of FS functions and merger criteria may be used in embodiments. According to one embodiment, the FS function may be a Normalized Volume (NV) function, which may be defined as the ratio of the volume of an item to the volume of the MEBB of the item:

NV(I) = vol(I) / vol(MEBB(I))    (Eqn. 1)

where I is the item, vol(x) is a function that computes the volume of its argument x, and MEBB(x) is a function that computes an MEBB for its argument x (e.g., item I). By this measure, an NV value of 1 would be optimal, and lesser values closer to 0 would indicate a poorer utilization of space.
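In voxel terms, Eqn. 1 reduces to a ratio of voxel counts. A minimal sketch, with illustrative names only:

    def nv(item_voxels, mebb_dims):
        # Eqn. 1: NV(I) = vol(I) / vol(MEBB(I)), volumes counted in voxels.
        # item_voxels: set of occupied (x, y, z) voxel indices of the item.
        # mebb_dims:   (dx, dy, dz) extent of the item's MEBB, in voxels.
        dx, dy, dz = mebb_dims
        return len(item_voxels) / (dx * dy * dz)

    # An item occupying 5 of the 8 voxels of a 2x2x2 MEBB has NV = 0.625.
    print(nv({(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)}, (2, 2, 2)))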
A merger criterion to determine whether to merge a combination of two items, Ii and Ij, into a single MEBB (e.g., that may belong to one of the items, or that may be a new MEBB) may then be:

0.5(NV(Ii) + NV(Ij)) < NV(Ii ∪ Ij)    (Eqn. 2)

If Eqn. 2 evaluates to "True," then the merger of items Ii and Ij is spatially favorable over keeping the items separate, and merger may occur (i.e., the possible combination may proceed to become a merger of the items).

According to another embodiment, the FS function may be a Free Space Utilization (FSU) function, which may be defined as the difference between the volume of the MEBB of an item and the volume of the space that is used by the item:

FSU(I) = vol(MEBB(I)) - vol(I)    (Eqn. 3)

where I is the item, vol(x) is a function that computes the volume of its argument (here, an item, a bounding box, or more generally, an object), and MEBB(x) is a function that computes the bounding box of item x. The FSU function returns non-negative values, and it is zero for all items that are themselves boxes or convex objects that may be enclosed in a tight box without empty voxels. In general, a low value of FSU indicates better utilization of space, with a value of 0 being optimal. Higher values of FSU indicate poorer utilization of space.

In order to improve total free space utilization in a container, a greedy approach to combining objects may be used. While this approach may be used with any of the FS functions and merger criteria discussed above, it will be explained in greater detail with respect to the FSU function and merger criterion. In the greedy approach, pairs of items are merged into a single MEBB if the FSU of the combined items is less than or equal to the sum of the FSUs of the separate items, each in its own MEBB. A test that may be used as a merger criterion to determine whether to merge a combination of two items, Ii and Ij, into a single MEBB may include:

FSU(Ii) + FSU(Ij) > FSU(Ii ∪ Ij)    (Eqn. 4)

The illustrated method 50 may seek to minimize the FSU of the union of any two items by merging them. In some embodiments, and depending on the available computational resources, the method 50 may be extended to cover the union of any subset of items. In one embodiment, an iterative greedy approach may be used to find items to combine. For example, the items may be sorted by the volume of their respective MEBBs, and pairs of items may be combined into a single bounding box iteratively, over the items Ii from large to small, for each item Ij that has not yet been considered. The iterative greedy approach may be repeated until the total FSU of the merged items stops decreasing.

Pseudo code for one implementation of an algorithm to merge items is presented below:

Input: N items sorted by the volume of their MEBB
1: For i = 1 to N
2:   For j = i + 1 to N
3:     Find the best union Ii ∪ Ij which minimizes FSU(Ii ∪ Ij)
4:     If FSU(Ii) + FSU(Ij) > FSU(Ii ∪ Ij)
5:       then: Merge Ii = Ii ∪ Ij
6: Repeat until no further merging takes place.
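A runnable rendering of this greedy pass is sketched below. It is illustrative only: each item is reduced to a (volume, MEBB volume) pair, and the geometric search for the best physically realizable union (pseudo code line 3) is delegated to a caller-supplied try_union function, a hypothetical name:

    def fsu(item):
        # Eqn. 3: FSU(I) = vol(MEBB(I)) - vol(I), in voxel units.
        volume, mebb_volume = item
        return mebb_volume - volume

    def greedy_merge(items, try_union):
        # items: list of (volume, mebb_volume) pairs.
        # try_union(a, b): best realizable union as a (volume, mebb_volume)
        # pair, or None when no rotation/translation produces a fit.
        items = sorted(items, key=lambda it: it[1], reverse=True)
        merged = True
        while merged:                    # repeat until no merging takes place
            merged = False
            i = 0
            while i < len(items):
                j = i + 1
                while j < len(items):
                    union = try_union(items[i], items[j])
                    # Merge only when the union wastes no more free space
                    # than the two separate bounding boxes.
                    if union is not None and fsu(items[i]) + fsu(items[j]) >= fsu(union):
                        items[i] = union
                        del items[j]
                        merged = True
                    else:
                        j += 1
                i += 1
        return items

The test above admits equality, per the "less than or equal to" phrasing in the text; the strict inequality of Eqn. 4 would work equally well and also terminates, since every accepted merge shortens the item list.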
FIG. 5 shows a flowchart of a method 110 to determine whether to merge two items together. The method 110 may be implemented as a module or related component in a set of logic instructions stored in a non-transitory machine- or computer-readable storage medium such as RAM, ROM, PROM, firmware, flash memory, etc., in configurable logic such as, for example, PLAs, FPGAs, CPLDs, in fixed-functionality hardware logic using circuit technology such as, for example, ASIC, CMOS or TTL technology, or any combination thereof.

Models of items that have a predetermined level of accuracy may be made available by processing block 112. Processing block 114 sorts the items based on the volumes of each item's respective MEBB. Processing block 116 selects the highest-volume available item Ii. Processing block 118 merges each unassigned item Ij>i into the current item Ii, provided that the merge results in a value of merged FSU that is no greater than the sum of the individual FSU values of the items and provided that the merger is physically realizable.

The method of fitting one item into another item's MEBB (or into a newly computed MEBB) may generally entail consideration of both translating one item into another item's MEBB and orienting one item with respect to another. In some circumstances, an item in one orientation may not be translationally fitted into a region of space, such as an MEBB containing another item, notwithstanding favorable FSU calculations. And yet it may be that in another orientation, possibly utilizing a different translation, the item may be fitted into a region of space such as an MEBB that contains or that is to contain another item. Processing block 118 may search for a favorable translation and orientation by rotating items and their respective MEBBs to determine if a fit may occur (e.g., if an item may be translated into the MEBB of another item in a physically realizable manner).

In one embodiment, an item and its respective MEBB may be rotated by 90 degrees, 180 degrees, or 270 degrees about any of the three principal axes of the MEBB in search of an orientation that fits. Translation may generally be at a granularity of a single voxel, or integral multiples thereof. If a favorable orientation and translatory motion can be found, then merger may proceed if the FSU calculations and merger criterion warrant the merger. If a fit cannot be found, then there may be no merger, notwithstanding a favorable FSU calculation. In some embodiments, determining the feasibility of a fit may be made after the FSU calculations, and in other embodiments, determining the feasibility of a fit may be made prior to the FSU calculations.

Processing block 120 determines whether stop criteria (based on the FSU) have been met, such as a desired measure of consolidation or improvement in overall spatial utilization. If the criteria are met, then processing block 122 outputs the bounding box Ii for the now merged combination, which is then to be considered as one of the items to be packed in the container. In one embodiment, the bounding box Ii may be the MEBB of the i-th item. In another embodiment, a new MEBB of the combined items may be computed, and merger may occur if the volume of the new MEBB is no greater than (and preferably less than) the sum of the volumes of the MEBBs of each of the items prior to merger. If, on the other hand, it is determined at processing block 120 that the stop criteria are not met and that further consolidation of the packing of items may be desired and/or appropriate, then the MEBB for Ii is set at processing block 124 to be the union of Ii and Ij (i.e., Ii is now to be considered as containing both the item(s) previously in Ii alone and the item(s) previously in Ij). The merged item may then be fed back to processing block 118 for consideration of merger with further items.
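The physical realizability check of block 118 can be pictured as a voxel-level collision test: a candidate placement fits only if every voxel of the translated item lands inside the target MEBB and on an empty cell. A minimal sketch with hypothetical names:

    def fits(occupied, item_voxels, offset, dims):
        # occupied:    voxel indices already taken inside the target MEBB.
        # item_voxels: voxel indices of the candidate item, at the origin.
        # offset:      whole-voxel (x, y, z) translation being tried.
        # dims:        (dx, dy, dz) extent of the target MEBB, in voxels.
        for (x, y, z) in item_voxels:
            p = (x + offset[0], y + offset[1], z + offset[2])
            if not all(0 <= p[a] < dims[a] for a in range(3)) or p in occupied:
                return False
        return True

    # Block 118 would call fits() for every voxel-granular offset and for
    # each allowed 90/180/270-degree orientation of the item.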
The illustrated method 110 may be extended to include objects generally, such as containers alone, containers and items, and aggregations of any combination of items and containers.

FIG. 6 shows an embodiment of an item combiner 130 that may be used to conduct pair-wise comparisons of respective FS values for items and determine whether the items should be merged. The illustrated item combiner 130 may generally implement one or more aspects of the method 110 (FIG. 5), already discussed. In the illustrated example, item data 132a and item data 132b, corresponding to two respective items, may be passed to the item combiner 130. An item FSU determiner 134 may calculate the FSU of individual items or merged items. A combined FSU determiner 136 computes the FSU of a combination of items under consideration for merger, and an FSU comparator 138 compares the FSU of the combination to the sum of the FSU values for each respective item, as in Eqn. 4. If the combination is favorable (e.g., in the case of an embodiment utilizing the FSU function as its FS function, the combination at least does not result in an increase in the FSU value) and physically realizable, then the two items may be merged together by an item mergerer 140.

When the item mergerer 140 determines that the two items cannot be merged together in one orientation (e.g., due to physical constraints of size), then an item orienter 143 varies the orientation of the smaller of the items by rotating the smaller item 90 degrees, 180 degrees, or 270 degrees with respect to the principal axes of the MEBB of the smaller item to see if a fit may occur. If so, the items may be merged in the selected, fittable orientation (e.g., the most tightly fitted orientation, the most space-saving orientation, etc.). If not, there may be no merger. In some embodiments, the item mergerer 140 may also compute a translation to insert the smaller of the two items into a region of space partly occupied by the larger item. In other embodiments, the item orienter 143 may compute the translation.

A hierarchy generator 142 may provide a hierarchical representation of an item, such as an octree, that may be computed to detect empty parts of the MEBB (e.g., parts of the MEBB of the item that are not occupied by the item itself) and/or to determine empty space available for use in consolidating a plurality of items into one MEBB. The item combiner 130 provides a listing of merged items 144 that may be passed to a packing algorithm, such as the packing algorithm at processing block 60 (FIG. 2), discussed above.

Additional technical approaches may be used to facilitate determinations of item mergers. Referring back to FIG. 2, for example, processing block 84 may provide item recognition data to assist in making determinations of whether to merge two items. Item recognition data may, for example, include image analysis data.
Item recognition data may use heuristics to determine likely item attributes that may be of use in generating packing sequence instructions, such as item fragility or preferred orientation. In addition, processing block 62 may provide additional constraints on the manner in which items may be packed together. The constraints may include limitations on weight, weight distribution, item orientation, item fragility, location preferences, temperature, loading/unloading scheduling, item compatibility, etc. In one embodiment, block 84 may provide image analysis data for the items to facilitate the action of the packing algorithm at block 60.

The packing algorithm at block 60 may take the items, including items that may have been consolidated through merger into a single MEBB, and generate a detailed packing plan for packing the items into the container (or other storage space) that is output at processing block 66. The final placement of the items may be preceded by packing items into physical containers (e.g., cardboard boxes, coolers, plastic containers, shipping containers, etc.), or not, and may be tailored to the labor or machinery available to pack the items into the container. The packing algorithm may be a heuristic packing algorithm. In one example, a wall-filling 3D container packing problem algorithm may be employed. Further information regarding one example of an algorithm may be found in a paper to Lim et al. entitled "3D Container Packing Heuristics," Applied Intelligence 22, 125-134, 2005.

FIGs. 7A-11C illustrate aspects of an item, the item in voxelized form, and the voxelized item located within a bounding box according to an embodiment. For ease of illustration, the item is presented in two dimensions. However, it should be understood that embodiments may be readily extended to three dimensions.

FIG. 7A shows an item 150 in the general shape of an oval. The item may, for example, be one item among many items that are to be packed together in a storage container, along with a plurality of other items. FIG. 7B shows the item 150 as a voxelized item 152, defined in terms of voxels 154. Although the voxels are here shown as two-dimensional squares, in three dimensions the voxels would be cubes of some convenient and/or predetermined dimension. FIG. 8 shows the voxelized item 152 contained within a respective MEBB 156. The voxels 154 typically may not have a computationally convenient orientation with respect to the sides of the MEBB 156.

FIG. 9 illustrates an example of a voxelized item and a bounding box aligned with respect to a pair of reference axes according to an embodiment. As shown in FIG. 9, the MEBB 156 may be oriented along Cartesian coordinate X-Y axes (a Z-axis is included in the 3D case) that may be suitable both for a box shape and for the bounded box of the container into which the items may ultimately be placed. In general, the voxels 154 may be oriented along one another's sides, but the voxel sides may not have a computationally convenient orientation with respect to the MEBB.

FIG. 10 illustrates a re-voxelized item in a bounding box according to an embodiment. As shown in FIG. 10, the voxelized item 152 may be re-oriented to provide a re-voxelized item 153. Notably, the orientation of the voxels 157 has been shifted so that the sides of the voxels may be in alignment with the coordinate axes of the MEBB 156.
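The re-voxelization of FIG. 10 amounts to expressing the item's points in the coordinate frame of the MEBB and voxelizing again. A minimal sketch, assuming the MEBB axes are available as an orthonormal 3 x 3 NumPy matrix whose rows are the box's unit axes:

    import numpy as np

    def revoxelize(points, mebb_axes, mebb_origin, voxel_size):
        # Rotate/translate each point into the MEBB frame, then snap it to
        # a voxel grid aligned with the sides of the box.
        local = (points - mebb_origin) @ mebb_axes.T
        return {tuple(idx) for idx in np.floor(local / voxel_size).astype(int)}

    # Example: an MEBB rotated 45 degrees in the X-Y plane.
    c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
    axes = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = np.random.rand(1000, 3)                 # stand-in for item points
    aligned = revoxelize(pts, axes, np.zeros(3), voxel_size=0.1)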
FIGs. 11A-11C illustrate another aspect of a merger process according to an embodiment, in which determining whether to merge two or more items also includes varying the orientation of an item with respect to another item so that merger may be physically realizable and/or favorable. The upper half of FIG. 11A shows an MEBB 160 that contains a first item 164. The shape of the first item 164 may be such that the MEBB 160 includes some free space 166 that may be available to receive another item. The lower half of FIG. 11A shows the re-voxelized item 153 and its MEBB 156, having a generally horizontal orientation with respect to the free space 166. Were the MEBB 156 to be vertically moved in the direction of the arrows towards the free space 166, the MEBB 156 would be blocked by the first item 164.

In some embodiments, such as when the item orienter 143 (FIG. 6), discussed above, is utilized, a smaller MEBB and its associated item may be reoriented about the MEBB coordinate axes by either 90 degrees, 180 degrees, or 270 degrees in search of a fit with respect to a larger MEBB into which merger is sought. In FIG. 11B, for example, the MEBB 156 has been rotated 90 degrees to be inserted into the free space 166 of the MEBB 160. Thus, merger is accomplished in a physically realizable manner, as illustrated in FIG. 11C.

In the example shown in FIG. 11C, the merged items are seen to fit together into one of the MEBBs of one of the items. However, embodiments may be extended to include merging items into newly computed MEBBs. For example, in another embodiment, it may be the case that the items are merged but do not fit in any item's MEBB. Provided the spatial utilization of the merged pair is favorable, the merger may proceed, and a new MEBB may be computed for the merged pair.

In FIGs. 12A-12B, item A and item B are shown in two dimensions and are identical to one another for ease of explanation. It should be understood, however, that embodiments may be readily extended to three dimensions, and/or may be applicable to items that differ in both size and shape from one another.

FIGs. 13A-13B show items A and B in simplified, voxelized form. FIG. 13A shows the item A as a voxelized item, defined in terms of voxels 174. FIG. 13B shows the item B as a voxelized item, defined in terms of voxels 176. The voxels are equal in size, so that in terms of voxel elements, the illustrated volume of each item is 5:

Vol(A) = 5; and
Vol(B) = 5.

Next, as shown in FIGs. 14A-14B, an MEBB 178 is placed about item A and an MEBB 180 is placed about item B. In addition, the voxels are aligned so that their sides are in alignment with the MEBBs 178, 180, respectively. The volume of the illustrated MEBB about each item is 8:

Vol(MEBB(A)) = 8; and
Vol(MEBB(B)) = 8.

Notably, the MEBB 178 of A includes empty space 181, and the MEBB 180 of B includes empty space 182. In the illustrated example, the empty space 181 is equal in size to the empty space 182, although in general, the size and shape of empty spaces may differ.

In one example, a first item (e.g., the larger of the two items under consideration) may be held fixed while testing various combinations of oriented and translated spatial transformations Tr of a second item (the smaller of the two items under consideration) to determine whether the items can be fitted into a newly calculated bounding box that takes up less space than the sum of the MEBBs of the items to start. In addition, the items may be merged together into the newly calculated bounding box as their new, joint MEBB if calculations are favorable, for example as discussed below.
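The candidate orientations named above can be enumerated directly on a voxel occupancy array; the sketch below, illustrative only, uses NumPy's rot90 to produce the quarter-turn rotations about each principal axis:

    import numpy as np

    def quarter_turns(grid):
        # Return the original grid plus its 90-, 180-, and 270-degree
        # rotations about each of the three principal axes of the MEBB.
        planes = [(0, 1), (0, 2), (1, 2)]
        out = [grid]
        for plane in planes:
            for k in (1, 2, 3):
                out.append(np.rot90(grid, k=k, axes=plane))
        return out

    item = np.zeros((2, 2, 2), dtype=bool)
    item[0, :, 0] = True                  # a small bar-shaped item
    orientations = quarter_turns(item)    # 10 candidate orientations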
FIGs. 15A-15C show item B in various orientations, differing from one another by a 90-degree counter-clockwise rotation. In one embodiment, one item may be rotated with respect to another item by 90 degrees, 180 degrees, or 270 degrees. In this example, item A may be held fixed in space relative to a coordinate system, and item B may assume various rotations and translations with respect to item A. As illustrated in FIGs. 16A-16E, various combinations of item A and item B may result from rotations of item B with respect to A and translations of the rotated item B towards item A. The depictions are illustrative of possible combinations and are not intended to be exhaustive.

As shown in FIGs. 17-18, two of the combinations illustrated in FIGs. 16A-16B are referred to as "comb-1" and "comb-2". In the illustrated example of FIG. 17, comb-1 has a combined item volume of 10:

Vol(A) + Vol(B) = 10.

Similarly, in the illustrated example of FIG. 18, comb-2 again has a combined item volume of 10:

Vol(A) + Vol(B) = 10.

MEBBs are then computed for each combination. Thus, in FIG. 17, comb-1 has an MEBB 184, wherein the volume of MEBB 184 is 16:

Vol(MEBB((A ∪ B)comb-1)) = 16

Similarly, in FIG. 18, comb-2 has an MEBB 186, wherein the volume of MEBB 186 is 10:

Vol(MEBB((A ∪ B)comb-2)) = 10

In terms of the FSU function of Eqn. 3:

FSU(A) = Vol(MEBB(A)) - Vol(A) = (8 - 5) = 3
FSU(B) = Vol(MEBB(B)) - Vol(B) = (8 - 5) = 3
FSU((A ∪ B)comb-1) = 16 - 10 = 6
FSU((A ∪ B)comb-2) = 10 - 10 = 0

The FSU merger criterion of Eqn. 4 is then evaluated, first for comb-1 depicted in FIG. 17. In this regard, we evaluate the left-hand side of Eqn. 4 for comb-1:

(Vol(MEBB(A)) - Vol(A)) + (Vol(MEBB(B)) - Vol(B)) = (8 - 5) + (8 - 5) = 6

The right-hand side of Eqn. 4 for comb-1 is FSU((A ∪ B)comb-1) = 16 - 10 = 6. As the inequality of the criterion is not satisfied (i.e., 6 is not greater than 6), a merger of items A and B together into MEBB 184 is not performed for comb-1. Other possible mergers may also be evaluated, including comb-2. In the illustrated example, the left-hand side of Eqn. 4 is the same for both combinations, but the right-hand side for comb-2 is different; namely, FSU((A ∪ B)comb-2) = 10 - 10 = 0. In this case, the evaluation criterion is met, as 6 > 0, and the combination of items A and B depicted in FIG. 18 is accepted as the merger. In general, several prospective combinations may pass the merger criterion test, in which case the combination that offers the best utilization of space is the one for which merger proceeds. Thus, for two combinations m and n that both pass the merger criterion test, m would be selected where FSU((A ∪ B)comb-m) < FSU((A ∪ B)comb-n).
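The arithmetic above can be replayed mechanically; the snippet below simply re-checks the comb-1 and comb-2 evaluations using the volumes given in the example:

    def fsu(volume, mebb_volume):
        return mebb_volume - volume       # Eqn. 3, in voxel units

    separate = fsu(5, 8) + fsu(5, 8)      # FSU(A) + FSU(B) = 3 + 3 = 6
    comb1 = fsu(10, 16)                   # FSU((A U B)comb-1) = 6
    comb2 = fsu(10, 10)                   # FSU((A U B)comb-2) = 0

    assert not (separate > comb1)         # 6 > 6 fails: comb-1 rejected
    assert separate > comb2               # 6 > 0 holds: comb-2 accepted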
Pseudo code for one implementation of an algorithm to merge items is presented below:

1. Given N items, sort the N items according to their MEBB volume in descending order from 1 to N (item 1 is the largest by MEBB).
2. For item i (item A),
3.   holding item A fixed in space,
4.   for all remaining items j = i+1, i+2, ..., N, of size no greater than A,
5.     sequentially taking the objects j (B in the example),
6.     compute possible transformations Tr(B) of B using allowed rotations of 90°, 180°, and 270° and translations that are multiples of the voxel size,
7.     for each such transformation evaluate a merger criterion for a combination of A and Tr(B), using MEBBs for each combination, and
8.     choose the Tr(B) that optimizes the merger criterion.
9. Merge item A and Tr(B); use the MEBB for the merged items A and Tr(B).
10. Choose as B the item j that, when optimally transformed, provides the best merger of A and B.
11. Merge A with the optimal choice of B in its optimally transformed state into an MEBB for the merged items. Merger with B happens only if there is such an item B, in which case item B is removed from the list of items.
12. Repeat the above from step 2 forward, beginning with the next smallest item i + 1.

The merger implementation may be practiced with a number of free space functions and merger criteria, including those presented in Eqns. 1-4.

Thus, embodiments described herein may be used to provide for consolidated packing of items into an enclosed space, whether the items are box-shaped or not, leading to relatively better space utilization. Additionally, embodiments may be used in multiple applications. For example, embodiments may be implemented as a mobile application using a handheld device such as a smart phone for personal use in generating packing suggestions for placing items inside a car trunk, truck, or closet where space may be limited. Embodiments may also be incorporated into a dedicated handheld device for shipping companies to use in generating a more optimal loading/unloading sequence of packages in their trucks. Embodiments may also be used in the field of robotics, where the system may be incorporated into a fully automatic packing solution.

FIG. 19 illustrates a processor core 200 according to one embodiment. The processor core 200 may be the core for any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or another device to execute code. Although only one processor core 200 is illustrated in FIG. 19, a processing element may alternatively include more than one of the processor core 200 illustrated in FIG. 19. The processor core 200 may be a single-threaded core or, for at least one embodiment, the processor core 200 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.

FIG. 19 also illustrates a memory 270 coupled to the processor core 200. The memory 270 may be any of a wide variety of memories (including various layers of memory hierarchy) as are known or otherwise available to those of skill in the art. The memory 270 may include one or more code 213 instruction(s) to be executed by the processor core 200, wherein the code 213 may implement the methods already discussed. The processor core 200 follows a program sequence of instructions indicated by the code 213. Each instruction may enter a front end portion 210 and be processed by one or more decoders 220. The decoder 220 may generate as its output a micro operation such as a fixed width micro operation in a predefined format, or may generate other instructions, microinstructions, or control signals which reflect the original code instruction. The illustrated front end portion 210 also includes register renaming logic 225 and scheduling logic 230, which generally allocate resources and queue the operation corresponding to the convert instruction for execution.

The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N.
Some embodiments may include a number of execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions.

After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out-of-order execution but requires in-order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.

Although not illustrated in FIG. 19, a processing element may include other elements on chip with the processor core 200. For example, a processing element may include memory control logic along with the processor core 200. The processing element may include I/O control logic and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.

Referring now to FIG. 20, shown is a block diagram of a computing system 1000 in accordance with an embodiment. Shown in FIG. 20 is a multiprocessor system 1000 that includes a first processing element 1070 and a second processing element 1080. While two processing elements 1070 and 1080 are shown, it is to be understood that an embodiment of the system 1000 may also include only one such processing element.

The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050. It should be understood that any or all of the interconnects illustrated in FIG. 20 may be implemented as a multi-drop bus rather than point-to-point interconnect.

As shown in FIG. 20, each of processing elements 1070 and 1080 may be multicore processors, including first and second processor cores (i.e., processor cores 1074a and 1074b and processor cores 1084a and 1084b). Such cores 1074a, 1074b, 1084a, 1084b may be configured to execute instruction code in a manner similar to that discussed above in connection with FIG. 19.

Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b. The shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.

While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor.
Alternatively, one or more of processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, additional processing element(s) may include additional processor(s) that are the same as the first processor 1070, additional processor(s) that are heterogeneous or asymmetric to the first processor 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, and power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.

The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088. As shown in FIG. 20, MCs 1072 and 1082 couple the processors to respective memories, namely a memory 1032 and a memory 1034, which may be portions of main memory locally attached to the respective processors. While the MCs 1072 and 1082 are illustrated as integrated into the processing elements 1070, 1080, for alternative embodiments the MC logic may be discrete logic outside the processing elements 1070, 1080 rather than integrated therein.

The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076, 1086, respectively. As shown in FIG. 20, the I/O subsystem 1090 includes P-P interfaces 1094 and 1098. Furthermore, I/O subsystem 1090 includes an interface 1092 to couple I/O subsystem 1090 with a high performance graphics engine 1038. In one embodiment, bus 1049 may be used to couple the graphics engine 1038 to the I/O subsystem 1090. Alternately, a point-to-point interconnect may couple these components.

In turn, I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.

As shown in FIG. 20, various I/O devices 1014 (e.g., speakers, cameras, sensors) may be coupled to the first bus 1016, along with a bus bridge 1018 which may couple the first bus 1016 to a second bus 1020. In one embodiment, the second bus 1020 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1020 including, for example, a keyboard/mouse 1012, communication device(s) 1026, and a data storage unit 1019 such as a disk drive or other mass storage device which may include code 1030, in one embodiment. The illustrated code 1030 may implement the method 50 (FIG. 2), the method 98 (FIG. 4) and/or the method 110 (FIG. 5), already discussed, and may be similar to the code 213 (FIG. 19), already discussed. Further, an audio I/O 1024 may be coupled to second bus 1020 and a battery 1010 may supply power to the computing system 1000.

Note that other embodiments are contemplated.
For example, instead of the point-to-point architecture of FIG. 20, a system may implement a multi-drop bus or another such communication topology. Also, the elements of FIG. 20 may alternatively be partitioned using more or fewer integrated chips than shown in FIG. 20.

Additional Notes and Examples:

Example 1 may include a system to determine efficient packing for objects, comprising at least one sensor to provide sensor data to an object modeler that is to generate a digital three-dimensional (3D) model of each of a plurality of real objects, an object boxer communicatively coupled with the at least one sensor to generate a bounding box for each of the plurality of real objects, and a packing plan generator communicatively coupled with the object boxer to determine a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects.

Example 2 may include the system of Example 1, further including an object voxelizer to approximate a size and a shape for each of the plurality of real objects in voxels.

Example 3 may include the system of any one of Examples 1 to 2, wherein one or more of the plurality of real objects is an item, and wherein the system further includes an item voxelizer to approximate a size and a shape for each item, an item boxer to generate a minimal enclosing bounding box for each item, and an item combiner to determine if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each minimal enclosing bounding box for each of the two or more items and merge the two or more items together in the bounding box if the two or more items are mergeable.

Example 4 may include the system of any one of Examples 1 to 3, wherein the item combiner is to compute a free space (FS) value of each item with respect to each minimal enclosing bounding box for each item, and merge the two or more items into one bounding box if a merger criterion is met.

Example 5 may include the system of any one of Examples 1 to 4, wherein the at least one sensor includes a 3D sensor, and wherein the system further includes an object identifier to identify each of the plurality of real objects and acquire non-3D image data corresponding to each of the plurality of real objects.

Example 6 may include the system of any one of Examples 1 to 5, further including a hierarchy generator to generate a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects.
Example 7 may include the system of any one of Examples 1 to 6, wherein one of the plurality of real objects is a storage space, the system further including a storage space boxer to generate a maximum enclosed bounded box for the storage space.

Example 8 may include an apparatus to determine an efficient packing for objects comprising an object modeler to generate a three-dimensional (3D) digital model of a plurality of real objects based on sensor data for each of the plurality of real objects, an object boxer communicatively coupled with the at least one sensor to generate a bounding box for each of the plurality of real objects, and a packing plan generator communicatively coupled with the object boxer to determine a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects.

Example 9 may include the apparatus of Example 8, further including an object voxelizer to approximate a size and a shape for each of the plurality of real objects in voxels.

Example 10 may include the apparatus of any one of Examples 8 to 9, wherein one or more of the plurality of real objects is an item, and wherein the apparatus further includes an item voxelizer to approximate a size and a shape for each item, an item boxer to generate a minimal enclosing bounding box for each item, and an item combiner to determine if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items and merge the two or more items together in the bounding box if it is determined that the two or more items are mergeable.

Example 11 may include the apparatus of any one of Examples 8 to 10, wherein the item combiner is to compute a free space value (FS) of each item with respect to each minimal enclosing bounding box for each item, and merge the two or more items into one bounding box if it is determined that a merger criterion is met.

Example 12 may include the apparatus of any one of Examples 8 to 11, further including an object identifier to identify each of the plurality of real objects and acquire non-3D image data corresponding to each of the plurality of real objects.

Example 13 may include the apparatus of any one of Examples 8 to 12, further including a hierarchy generator to generate a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects.

Example 14 may include the apparatus of any one of Examples 8 to 13, wherein one or more of the plurality of real objects is a storage space, the apparatus further including a storage space boxer to generate a maximum enclosed bounded box for the storage space.

Example 15 may include a method to determine an efficient packing for objects comprising generating, by at least one sensor, a three-dimensional (3D) digital model of a plurality of real objects based on sensor data for each of the plurality of real objects, generating, by an object boxer, a bounding box for each of the plurality of real objects, and determining, by a packing plan generator, a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects.

Example 16 may include the method of Example 15, further including approximating a size and
a shape for each of the plurality of real objects in voxels. Example 17 may include the method of any one of Examples 15 to 16, wherein one or more of the plurality of real objects is an item, and wherein the method further includes approximating a size and a shape for each item in voxels, generating a minimal enclosing bounding box for each item, determining if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items, and merging the two or more items together in the bounding box if it is determined that the two or more items are mergeable. Example 18 may include the method of any one of Examples 15 to 17, further including computing a free space value (FS) of each item with respect to each minimal enclosing bounding box for each item and merging the two or more items into one bounding box if it is determined that a merger criterion is met. Example 19 may include the method of any one of Examples 15 to 18, further including identifying each of the plurality of real objects and acquiring non-3D image data corresponding to each of the plurality of the real objects. Example 20 may include the method of any one of Examples 15 to 19, further including generating a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects. Example 21 may include the method of any one of Examples 15 to 20, wherein one of the plurality of real objects is a storage space, the method further including generating a maximum enclosed bounded box for the storage space. Example 22 may include at least one computer readable storage medium comprising a set of instructions, which when executed by an apparatus, cause the apparatus to generate a three-dimensional (3D) digital model of a plurality of real objects based on sensor data for each of the plurality of real objects, generate a bounding box for each of the plurality of real objects, and determine a spatially efficient packing sequence for the plurality of real objects, wherein at least one of the plurality of real objects contains at least one other of the plurality of real objects. Example 23 may include the at least one computer readable storage medium of Example 22, wherein the instructions, when executed, cause an apparatus to approximate a size and a shape for each of the plurality of real objects in voxels. Example 24 may include the at least one computer readable storage medium of any one of Examples 22 to 23, wherein one or more of the plurality of real objects is an item, and wherein the instructions, when executed, cause an apparatus to approximate a size and a shape for each item in voxels, generate a minimal enclosing bounding box for each item, determine if any two or more items of the plurality of items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items, and merge the two or more items together in the bounding box if it is determined that the two or more items are mergeable. Example 25 may include the at least one computer readable storage medium of any one of Examples 22 to 24, wherein the instructions, when executed, cause an apparatus to compute a free space value (FS) of each item with respect to each minimal enclosing bounding box for each item and merge the two or more items into one bounding box if it is determined that a merger criterion is met. Example 26 may include the
at least one computer readable storage medium of any one of Examples 22 to 25, wherein the instructions, when executed, cause an apparatus to identify each of the plurality of real objects and acquire non-3D image data corresponding to each of the plurality of the real objects. Example 27 may include the at least one computer readable storage medium of any one of Examples 22 to 26, wherein the instructions, when executed, cause an apparatus to generate a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects. Example 28 may include the at least one computer readable storage medium of any one of Examples 22 to 27, wherein one of the plurality of real objects is a storage space, and wherein the instructions, when executed, cause an apparatus to generate a maximum enclosed bounded box for the storage space. Example 29 may include an apparatus to determine an efficient packing for objects comprising means for generating a digital model of a plurality of real objects based on three-dimensional (3D) sensor data for each of the plurality of real objects, means for generating a bounding box for each of the plurality of real objects, and means for determining a spatially efficient packing sequence for the plurality of real objects, wherein a largest real object of the plurality of real objects contains one or more smaller objects of the plurality of real objects. Example 30 may include the apparatus of Example 29, further including means for approximating a size and a shape for each of the plurality of real objects in voxels. Example 31 may include the apparatus of any one of Examples 29 to 30, wherein one or more of the plurality of real objects is an item, the apparatus further including means for approximating a size and a shape for each item in voxels, means for generating a minimal enclosing bounding box for each item, means for determining if any two or more items are mergeable into a bounding box that occupies less volume than a sum of volumes for each individual minimal enclosing bounding box for each of the two or more items, and means for merging the two or more items together in the bounding box if it is determined that the two or more items are mergeable. Example 32 may include the apparatus of any one of Examples 29-31, further including means for computing a free space utilization value (FSU) of each item with respect to each minimal enclosing bounding box for each item, and means for merging the two or more items into one bounding box if it is determined that the sum of each FSU value is greater than or equal to an FSU of a physically realizable union of the two or more items in the one bounding box. Example 33 may include the apparatus of any one of Examples 29-32, further including means for identifying each of the plurality of real objects and acquiring non-3D image data corresponding to each of the plurality of the real objects.
Example 34 may include the apparatus of any one of Examples 29-33, further including means for generating a hierarchical volumetric representation of each of the plurality of real objects to detect an empty part of each of the plurality of real objects. Example 35 may include the apparatus of any one of Examples 29-34, wherein one of the plurality of real objects is a storage space, the apparatus further including means for generating a maximum enclosed bounded box for the storage space. Example 36 may include a method to determine an efficient packing order for a plurality of items with respect to a container, comprising generating, by an item voxelizer, a voxelized digital model of each of the plurality of items based on three-dimensional (3D) sensor data, generating, by at least one sensor, a digital model of the container, generating, by an item boxer, an enclosing bounding box for each of the plurality of items, determining, by an item combiner, which individual items of the plurality of items are mergeable into a single bounding box having less volume than volumes of the bounding boxes of the individual items added together, merging, by the item combiner, the mergeable items into the single bounding box if a combination of the mergeable items is physically realizable to generate one or more merged items, and generating, by a packing plan generator, a packing order of the plurality of items including the one or more merged items into the container. Example 37 may include the method of Example 36, further including orienting the voxels so that respective voxel sides are aligned to sides of the enclosing bounding box for a given item. Example 38 may include the method of any one of Examples 36 to 37, wherein determining which item bounding boxes may be merged together includes varying the orientation of one bounding box with respect to another bounding box. Example 39 may include the method of any one of Examples 36 to 38, wherein varying the orientation is limited to rotations of integral multiples of 90 degrees with respect to any combination of coordinate axes defined for a given bounding box. Example 40 may include the method of any one of Examples 36 to 39, wherein a hierarchical volumetric representation of the item is computed that includes a representation of empty space within the bounding boxes.
Example 41 may include the method of any one of Examples 36 to 40, further including applying shipping constraints on the packing order. Example 42 may include an apparatus to determine an efficient packing order for a plurality of items with respect to a container, comprising means for generating a voxelized digital model of each of a plurality of items using 3D sensor data, means for generating a digital model of at least one container, means for generating an enclosing bounding box for each of the items, means for determining which of the items may be merged together into a single bounding box having less volume than the volumes of the bounding boxes of the individual items added together, means for merging the items together into a single bounding box if their combination is physically realizable, and means for generating a packing order of the items, including combined items, into the container. Example 43 may include the apparatus of Example 42, further including means for orienting the voxels so that their sides are aligned to sides of the enclosing bounding box for a given item. Example 44 may include the apparatus of any one of Examples 42 to 43, further including means for varying the orientation of one bounding box with respect to another bounding box. Example 45 may include the apparatus of any one of Examples 42 to 44, further including means for computing a hierarchical volumetric representation of each item that includes a representation of empty space within the bounding boxes. Example 46 may include a method to determine an efficient packing order for a plurality of items with respect to a container comprising generating, by an item voxelizer, a voxelized digital model of each of the plurality of items, generating, by at least one sensor, a digital model of the container, generating, by an item boxer, an enclosing bounding box for each of the plurality of items, evaluating, by an item combiner, a free space function for each of the plurality of items, determining, by the item combiner, a plurality of spatial transformations of a second item of the plurality of items with respect to a first item of the plurality of items, evaluating, by the item combiner, a merger criterion for the first item and the second item based on each of the plurality of spatial transformations, and merging, by the item combiner, the first item and the second item into a common bounding box if the merger criterion is met, using a most favorable of the spatial transformations of the second item for which the merger criterion is met. Example 47 may include the method of Example 46, wherein the free space function is NV(I) = vol(I)/vol(MEBB(I)), where NV(I) is a normalized volume of an item I, vol(I) is a volume of an item I, and vol(MEBB(I)) is a volume of the minimal enclosing bounding box of an item I. Example 48 may include the method of Example 47, wherein the merger criterion for two items Ii and Ij is that 0.5(NV(Ii) + NV(Ij)) < NV(Ii ∪ Ij), where (Ii ∪ Ij) is a union of the two items Ii and Ij under a given spatial transformation. (A numeric sketch of this criterion follows the concluding remarks below.) Embodiments are applicable for use with all types of semiconductor integrated circuit ("IC") chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines.
Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines. Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the computing system within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting. The term "coupled" may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical or other connections. In addition, the terms "first", "second", etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated. As used in this application and in the claims, a list of items joined by the term "one or more of" may mean any combination of the listed terms. For example, the phrases "one or more of A, B or C" may mean A; B; C; A and B; A and C; B and C; or A, B and C. Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.
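By way of illustration only (this sketch is not part of the specification), the merger criterion of Examples 47 and 48 can be exercised numerically with the following Python sketch. The function names and volume figures are hypothetical; the union volumes are assumed to be derived from the voxelized item models described above.

def nv(item_volume, mebb_volume):
    # Normalized volume NV(I) = vol(I) / vol(MEBB(I)) per Example 47.
    return item_volume / mebb_volume

def merge_indicated(vol_i, mebb_i, vol_j, mebb_j, mebb_union):
    # Merger criterion per Example 48: 0.5*(NV(Ii) + NV(Ij)) < NV(Ii U Ij).
    # Rigid items do not interpenetrate, so vol(Ii U Ij) = vol_i + vol_j.
    nv_union = (vol_i + vol_j) / mebb_union
    return 0.5 * (nv(vol_i, mebb_i) + nv(vol_j, mebb_j)) < nv_union

# Two snug unit cubes gain nothing by merging: 0.5*(1 + 1) = 1 is not < 1.
print(merge_indicated(1, 1, 1, 1, 2))  # False
# Two L-shaped prisms (volume 3, each in a 2x2x1 box of volume 4, NV = 0.75)
# interlock to fill a 2x3x1 box exactly (union NV = 1.0), so merging helps.
print(merge_indicated(3, 4, 3, 4, 6))  # True

The second case mirrors the intent of Example 46: the item combiner would evaluate the criterion over candidate spatial transformations (Example 39 limits these to 90-degree rotations) and keep the most favorable transformation for which the criterion is met.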
A processor includes a compute array comprising a first plurality of compute engines serially connected along a data flow path such that data flows between successive compute engines at successive times. The first plurality of compute engines includes an initial compute engine and a final compute engine. The data flow path includes a recirculation path connecting the final compute engine to the initial compute engine with no compute engine therebetween.
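Before the claims, a minimal behavioral sketch of the timing implied by the abstract (illustrative only; the engine count and function name are assumed, not taken from this disclosure): data visits one compute engine per clock cycle, and the recirculation path returns it from the final engine to the initial engine in a single additional cycle.

# Hypothetical timing model: eight serially connected compute engines with a
# direct recirculation path from the final engine back to the initial one.
NUM_ENGINES = 8

def engine_visited(cycle):
    # Cycle 0 -> initial engine, cycle 7 -> final engine, cycle 8 -> initial
    # engine again; the recirculation costs one cycle, like any other hop.
    return cycle % NUM_ENGINES

print([engine_visited(c) for c in range(10)])  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1]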
Claims 1. A processor comprising: a compute array comprising a first plurality of compute engines serially connected along a data flow path such that data flows between successive compute engines at successive times, the first plurality of compute engines comprising an initial compute engine and a final compute engine, wherein the data flow path comprises a recirculation path connecting the final compute engine to the initial compute engine with no compute engine therebetween. 2. The processor of claim 1, further comprising a second plurality of compute engines configured to recirculate data independent of the first plurality of compute engines. 3. The processor of claim 1, wherein the recirculation path is configured to recirculate data from the final compute engine to the initial compute engine in one clock cycle. 4. The processor of claim 1, wherein the first plurality of compute engines consists essentially of the initial compute engine and the final compute engine. 5. The processor of claim 1, wherein the first plurality of compute engines consists essentially of the initial compute engine, the final compute engine, and six additional compute engines disposed therebetween on the data flow path. 6. The processor of claim 1, wherein the first plurality of compute engines consists essentially of the initial compute engine, the final compute engine, and two additional compute engines disposed therebetween on the data flow path. 7. The processor of claim 1, wherein the first plurality of compute engines consists essentially of the initial compute engine, the final compute engine, and fourteen additional compute engines disposed therebetween on the data flow path. 8. The processor of claim 1, further comprising a control block for issuing instructions to the compute array based on a stored program, wherein the compute array is configured to execute the issued instructions in successive compute engines at successive times. 9. The processor of claim 1, further comprising a memory operative with each of the first plurality of compute engines. 10. The processor of claim 1, wherein each of the first plurality of compute engines has a pipeline architecture. 11. The processor of claim 1, wherein the processor is disposed within a digital signal processor that further comprises a control processor. 12. A method of forming a processor, the method comprising: forming a compute array comprising a first plurality of compute engines serially connected along a data flow path such that data flows between successive compute engines at successive times, the first plurality of compute engines comprising an initial compute engine and a final compute engine, wherein the data flow path comprises a recirculation path connecting the final compute engine to the initial compute engine with no compute engine therebetween. 13. A method of data processing in a compute array comprising a first plurality of compute engines, the first plurality of compute engines comprising an initial compute engine and a final compute engine and at least one compute engine therebetween, the method comprising: flowing data between successive compute engines at successive times, beginning with the initial compute engine; and after data flows to the final compute engine, recirculating data from the final compute engine directly to the initial compute engine. 14. The method of claim 13, further comprising flowing data among a second plurality of compute engines independent of the first plurality of compute engines. 15.
The method of claim 13, wherein data recirculates from the final compute engine to the initial compute engine in one clock cycle. 16. The method of claim 13, wherein the first plurality of compute engines consists essentially of the initial compute engine and the final compute engine. 17. The method of claim 13, further comprising forming six additional compute engines disposed between the initial compute engine and the final compute engine on the data flow path. 18. The method of claim 13, further comprising forming two additional compute engines disposed between the initial compute engine and the final compute engine on the data flow path. 19. The method of claim 13, further comprising forming fourteen additional compute engines disposed between the initial compute engine and the final compute engine on the data flow path. 20. A processor comprising: a compute array comprising at least three compute engines; a data flow path connecting the compute engines, data flowing along the data path between successive compute engines at successive times; and a recirculation path directly connecting two compute engines not directly connected along the data flow path.
DIGITAL SIGNAL PROCESSOR COMPRISING A COMPUTE ARRAY WITH A RECIRCULATION PATH AND CORRESPONDING METHOD Related Application [0001] This application is a continuation-in-part of and claims priority to U.S. Patent Application Serial No. 12/701,090, filed February 5, 2010, the entire disclosure of which is incorporated by reference herein. Technical Field [0002] This invention relates to processor architectures and, more particularly, to digital signal processor architectures that facilitate high performance digital signal processing computations. The disclosed digital signal processors may be utilized with other processors or as stand-alone processors. Background [0003] A digital signal processor (DSP) is a special purpose computer that is designed to optimize performance for digital signal processing applications, such as, for example, fast Fourier transforms, digital filters, image processing, signal processing in wireless systems and speech recognition. DSP applications are typically characterized by real-time operation, high interrupt rates and intensive numeric computations. In addition, DSP applications tend to be intensive in memory access operations and to require the input and output of large quantities of data. DSP architectures are typically optimized for performing such computations efficiently. [0004] The core processor of a DSP typically includes a computation block, a program sequencer, an instruction decoder and all other elements required for performing digital signal computations. The computation block is the basic computation element of the DSP and typically includes one or more computation units, such as a multiplier and an arithmetic logic unit (ALU), and a register file. [0005] Digital signal computations are frequently repetitive in nature. That is, the same or similar computations may be performed multiple times with different data. Thus, any increase in the speed of individual computations is likely to provide significant enhancements in the performance of the DSP.[0006] Some applications, such as base stations in wireless systems, have performance and timing requirements that exceed the capabilities of current DSPs. To meet these requirements, designers have used DSPs in combination with ASICs (application specific integrated circuits) and/or FPGAs (field programmable gate arrays). Such systems lack flexibility and are relatively expensive. Further, the required performance increases as next generation wireless systems are introduced. High power dissipation is usually a problem in high performance processors. [0007] DSP designs may be optimized with respect to different operating parameters, such as computation speed, power consumption and ease of programming, depending on intended applications. Furthermore, DSPs may be designed for different word sizes. A 32-bit architecture that utilizes a long instruction word and wide data buses and which achieves high operating speed is disclosed in U.S. Pat. No. 5,954,811, issued Sep. 21, 1999 to Garde, the entire disclosure of which is incorporated by reference herein. The core processor includes dual computation blocks. Notwithstanding very high performance, the disclosed processor does not provide an optimum solution for all applications. [0008] Furthermore, even DSPs that incorporate multiple computation blocks generally suffer from latency as instructions or data circulate and recirculate among all or a subset of the computation blocks. 
[0009] Accordingly, there is a need for further innovations in DSP architecture and performance. Summary [0010] Latency is decreased in a DSP configured to recirculate data and instructions among a group of individual compute engines within the processor. The compute engines are arranged such that, once the data and/or instructions have progressed from the initial compute engine to the final compute engine, the data and/or instructions recirculate back to the initial compute engine with low latency. The low-latency recirculation path directly connects the final compute engine with the initial compute engine in the sense that the data and/or instructions do not traverse other compute engines within the processor during recirculation, and thus recirculation may be completed within, e.g., a single clock cycle. The flow of data and/or instructions (and subsequent recirculation) may proceed through all of the compute engines in the processor or through only a subset of the compute engines. [0011] In an aspect, embodiments of the invention feature a processor including a compute array. The compute array includes or consists essentially of a first plurality of compute engines serially connected along a data flow path such that data flows between successive compute engines at successive times. The first plurality of compute engines includes or consists essentially of an initial compute engine and a final compute engine, and the data flow path includes or consists essentially of a recirculation path connecting the final compute engine to the initial compute engine with no compute engine therebetween. [0012] The processor may include a second plurality of compute engines configured to recirculate data independent of the first plurality of compute engines. The recirculation path may be configured to recirculate data from the final compute engine to the initial compute engine in one clock cycle. The first plurality of compute engines may consist essentially or consist of the initial compute engine and the final compute engine. The first plurality of compute engines may consist essentially or consist of the initial compute engine, the final compute engine, and six additional compute engines disposed therebetween on the data flow path. The first plurality of compute engines may consist essentially or consist of the initial compute engine, the final compute engine, and two additional compute engines disposed therebetween on the data flow path. The first plurality of compute engines may consist essentially or consist of the initial compute engine, the final compute engine, and fourteen additional compute engines disposed therebetween on the data flow path. [0013] The processor may include a control block for issuing instructions to the compute array based on a stored program, and the compute array may be configured to execute the issued instructions in successive compute engines at successive times. The processor may include a memory operative with each of the first plurality of compute engines. Each of the first plurality of compute engines may have a pipeline architecture. The processor may be disposed within a digital signal processor that includes a control processor. [0014] In another aspect, embodiments of the invention feature a method of forming a processor that includes forming a compute array.
The compute array includes or consists essentially of a first plurality of compute engines serially connected along a data flow path such that data flows between successive compute engines at successive times. The first plurality of compute engines includes or consists essentially of an initial compute engine and a final compute engine, and the data flow path includes or consists essentially of a recirculation path connecting the final compute engine to the initial compute engine with no compute engine therebetween. [0015] In yet another aspect, embodiments of the invention feature a method of data processing in a compute array that includes or consists essentially of a first plurality of compute engines. The first plurality of compute engines includes or consists essentially of an initial compute engine, a final compute engine, and at least one compute engine therebetween. Data is flowed between successive compute engines at successive times, beginning with the initial compute engine. After data flows to the final compute engine, data is recirculated from the final compute engine directly to the initial compute engine. [0016] Data may be flowed among a second plurality of compute engines independent of the first plurality of compute engines. Data may recirculate from the final compute engine to the initial compute engine in one clock cycle. The first plurality of compute engines may consist essentially or consist of the initial compute engine and the final compute engine. Two, six, or fourteen additional compute engines disposed between the initial compute engine and the final compute engine may be formed on the data flow path. [0017] In a further aspect, embodiments of the invention feature a processor including or consisting essentially of a compute array, a data flow path, and a recirculation path. The compute array includes or consists essentially of at least three compute engines. The data flow path connects the compute engines, data flowing along the data path between successive compute engines at successive times. The recirculation path directly connects two compute engines not directly connected along the data flow path. Brief Description of the Drawings [0018] In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments of the present invention are described with reference to the following drawings, in which: [0019] FIG. 1 is a schematic block diagram of a DSP in accordance with an embodiment of the invention; [0020] FIG. 2 is a schematic block diagram of an embodiment of the processor shown in FIG. 1; [0021] FIG. 3 is a schematic block diagram of an embodiment of one of the compute engines shown in FIG. 2; [0022] FIG. 4 is a schematic block diagram of a second embodiment of the compute array shown in FIG. 2; [0023] FIG. 5 is a schematic block diagram that illustrates a memory address space for flow operations in accordance with an embodiment of the invention; [0024] FIG. 6 is a schematic block diagram that illustrates a memory address space for SIMD operations in accordance with an embodiment of the invention; [0025] FIG. 6A is a schematic block diagram that illustrates a memory address space for SIMD operations in accordance with another embodiment of the invention; [0026] FIG.
7 is a schematic block diagram of the flow unit in accordance with an embodiment of the invention; [0027] FIG. 8A is a schematic block diagram of the flow unit executing a flow load instruction; [0028] FIG. 8B is a schematic block diagram that illustrates execution of the flow load instruction; [0029] FIG. 9A is a schematic block diagram of the flow unit executing a flow store instruction; [0030] FIG. 9B is a schematic block diagram that illustrates operation of the flow store instruction; [0031] FIG. 10A is a schematic block diagram of the compute array, showing datapath buses in accordance with an embodiment of the invention; [0032] FIG. 10B is a schematic block diagram of the compute array depicted in FIG. 10A, showing data flow and recirculation paths in accordance with an embodiment of the invention; [0033] FIGS. 10C and 10D are schematic block diagrams of compute arrays, showing data flow and recirculation paths for alternative arrangements of compute engines in accordance with various embodiments of the invention; [0034] FIG. 11 is a schematic block diagram that illustrates symmetric filter computations in accordance with an embodiment of the invention; [0035] FIG. 12 is a schematic block diagram of a compute engine in accordance with an embodiment of the invention; [0036] FIG. 13 is a schematic block diagram that illustrates grouping of compute engines; [0037] FIG. 14 is a schematic block diagram of an embodiment of the control block shown in FIG. 2; [0038] FIG. 15 is a schematic block diagram that illustrates DMA interleaving in accordance with an embodiment of the invention; [0039] FIG. 16 is a schematic diagram that illustrates column-by-column processing using a processor in accordance with an embodiment of the invention; [0040] FIG. 17 is a schematic diagram that illustrates row-by-column processing using a processor in accordance with an embodiment of the invention; [0041] FIG. 18 is a schematic diagram that illustrates row-by-row processing using a processor in accordance with an embodiment of the invention; [0042] FIG. 19 is a schematic diagram that illustrates row-by-row processing with shift using a processor in accordance with an embodiment of the invention; and [0043] FIG. 20 is a schematic diagram that illustrates row-by-row processing with shift and broadcast using a processor in accordance with an embodiment of the invention. Detailed Description [0044] A schematic block diagram of a DSP in accordance with an embodiment of the invention is shown in FIG. 1. A DSP 10 includes a control processor 12, a memory 14, I/O ports 16 and a processor 20. The control processor 12 interacts with processor 20 and accesses memory 14. A DMA1 bus 22 carries data between memory 14 and processor 20. A DMA2 bus 24 carries data between memory 14, processor 20 and I/O ports 16. I/O ports 16 may communicate directly with processor 20 through a FIFO or an I/O buffer. I/O ports 16 provide an interface to external memory, external devices and/or an external processor, such as a host computer. [0045] By way of example, control processor 12 may have an architecture of the type disclosed in U.S. Pat. No. 5,896,543, issued Apr. 20, 1999 to Garde (the entire disclosure of which is incorporated by reference herein) and sold by Analog Devices, Inc. as the TigerSharc DSP. The memory 14 may include three independent, large capacity memory banks. In a preferred embodiment, each of the memory banks has a capacity of 64K words of 32 bits each.
Each of the memory banks may have a 128-bit data bus, so that up to four consecutive aligned data words of 32 bits each can be transferred to or from each memory bank in a single clock cycle. [0046] A schematic block diagram of a first embodiment of processor 20 is shown in FIG. 2. Processor 20 may include a control block 30 and a compute array 32. Control block 30 may include control logic 34, a DMA controller 36, integer ALUs 38 and 40, a program sequencer 42, a program memory 44 and a data memory 46. Control block 30 issues instructions and data addresses to compute array 32 based on a stored program. [0047] Compute array 32 includes two or more compute engines. In the embodiment of FIG. 2, compute array 32 includes eight compute engines 50, 51, 52, ... , 57. Each compute engine may be referred to as a "section" of the compute array. The compute engines are connected serially such that instructions issued by control block 30 and corresponding data advance, or "flow", through the compute engines 50, 51, 52, ... , 57 and are executed in each of the compute engines at successive times. In applications where grouping of compute engines is utilized, as discussed below, instructions issued by control block 30 and corresponding data flow through a subset, or group, of the compute engines and are executed at successive times. In one embodiment, instructions advance through successive compute engines on successive clock cycles. By way of example, an instruction issued by control block 30 may advance to compute engine 50 on clock cycle 1, to compute engine 51 on clock cycle 2 and to compute engine 57 on clock cycle 8. However, the invention is not limited in this respect, and each instruction may advance from one compute engine to the next compute engine after any number of clock cycles. Furthermore, instructions do not necessarily enter compute array 32 at the first compute engine, but can enter at any of the compute engines. This feature is useful, for example, in applications which utilize grouping of compute engines. Each of the compute engines may be individually pipelined and thus may require several clock cycles to complete execution of an instruction. In addition, data flows to successive compute engines at successive times as described in detail below. [0048] In the embodiment of FIG. 2, instructions flow through all or a subset of the compute engines on successive clock cycles. In other embodiments, instructions issued by control block 30 can be broadcast to all of the compute engines or a subset of the compute engines. In this embodiment, the broadcast instructions are delayed according to the position of the compute engine in the compute array such that each broadcast instruction executes in successive compute engines at successive times. For example, a broadcast instruction may have no delay in compute engine 50, a one clock cycle delay in compute engine 51, a two cycle delay in compute engine 52, etc. In each case, each issued instruction executes in successive compute engines at successive times. In cases where grouping of compute engines is utilized, each instruction executes in successive compute engines of the group at successive times. [0049] A schematic block diagram of an embodiment of a single compute engine is shown in FIG. 3. Each compute engine includes an instruction pipe 70 for controlling instruction flow through the array of compute engines and a flow unit 72 for controlling data flow in the array of compute engines.
Instruction pipes 70 in successive compute engines are coupled together by an instruction bus 120. Flow units 72 in successive compute engines are coupled together by a flow bus 102. In general, each compute engine can be configured with one or more flow units and one or more flow buses. Instruction pipe 70 holds instructions and provides control signals to the compute engine for execution of the instructions. The instruction pipe 70 in each compute engine may be one or more clock cycles in length. In one embodiment, instruction pipe 70 in each compute engine is one clock cycle in length. [0050] Each compute engine further includes a compute block 74, a register file 76 and a DMA buffer 82. In the embodiment of FIG. 2, each compute engine includes an X-memory 78 and a Y-memory 80. The memory associated with each of the compute engines may be implemented as an SRAM. As discussed below, the memory may be implemented in different ways. In another embodiment, each compute engine may include a single memory. Compute block 74 may include one or more compute units. Compute block 74 may include a multiplier 90, an arithmetic logic unit (ALU) 92, and a MAC (multiplier accumulator) adder 96 (FIG. 12), for example. Compute block 74 interacts with register file 76 to perform digital signal computations in response to an instruction in instruction pipe 70. Register file 76 interacts with memories 78 and 80 and with flow unit 72 to obtain specified data for the digital signal computations and to provide results to specified destinations. The data locations and the result destinations are specified by instructions. Data is transferred to and from memories 78 and 80 through DMA buffer 82 and DMA buses 22 and 24. In some embodiments, compute block 74 may include a configurable gate array (CGA) 98. [0051] A flow memory data (FMD) bus 94, flow bus 102 and a back flow data (BFD) bus 124 are coupled to an input of flow unit 72 through a switch 104. FMD bus 94, flow bus 102 and BFD bus 124 are coupled to an output of flow unit 72 through a switch 106. In a normal mode of operation, switch 104 connects the input of flow unit 72 via flow bus 102 to the flow unit in the previous compute engine and flow data is received from the previous compute engine on flow bus 102. In the normal mode, switch 106 connects the output of flow unit 72 via flow bus 102 to the flow unit in the next compute engine, and flow data is supplied to the next compute engine on flow bus 102. When grouping of compute engines is utilized, as described below, switch 106 connects the flow unit output to BFD bus 124 in the last compute engine of a group, so that flow data is maintained within a group of compute engines. In the case of a flow load operation, FMD bus 94 is connected by switch 104 to the input of flow unit 72 in a selected compute engine, thereby connecting memory to flow unit 72. In the case of a flow store operation, the output of flow unit 72 in a selected compute engine is connected by switch 106 to FMD bus 94. When the compute array 32 is used in a recirculate mode, flow data output from the last compute engine 57 is coupled on flow bus 102 to the flow data input of first compute engine 50, as shown in FIG. 10A. [0052] A schematic block diagram of a second embodiment of compute array 32 is shown in FIG. 4. Like elements in FIGS. 2 and 4 have the same reference numerals. The embodiment of FIG. 4 includes compute engines 50, 51, 52, ... , 57. In the embodiment of FIG.
4, each compute engine does not include an individual X-memory and a Y-memory. Instead, compute array 32 includes a single X-memory 84 and a single Y-memory 86, which are accessed by each of the compute engines. Each memory 84, 86 may be a DRAM and may have a row width that is sufficient for parallel load/store operations with each of the compute engines. For example, memories 84 and 86 may be 1024 bits wide in order to load/store four 32-bit words in parallel for each of the eight compute engines. The use of a large DRAM may be more space-efficient than eight smaller SRAMs. Instead of implementing eight memory sections, a single DRAM can be utilized by interposing staging buffers between the DRAM and the compute engines so as to provide the data in a row, sequentially, to each section of the compute array. Memory accesses pass through a staging buffer 88 which functions similarly to a delay line. A load delay line in staging buffer 88 has a delay that increases with section number. That is, section 0 has no delay, section 1 has a one-cycle delay, etc., and section 7 has a seven-cycle delay. For a store delay line in the staging buffer 88, the delay is reversed. That is, section 0 has a seven-cycle delay, section 1 has a six-cycle delay, etc., and section 7 has no delay. The staging buffer 88 can be built with dynamic logic, since it is cleared in eight clock cycles. [0053] Compute array 32 may have different memory configurations within the scope of the invention. FIG. 2 illustrates an embodiment wherein each compute engine includes X-memory 78 and Y-memory 80, and FIG. 4 illustrates an embodiment wherein compute array 32 includes single X-memory 84 and single Y-memory 86. In other embodiments, each compute engine may include a single memory or more than two memories. In further embodiments, a single memory may serve the entire compute array. In additional embodiments, the compute array may include one memory, typically a larger memory, that serves the entire compute array and individual section memories, typically smaller memories, that are associated with respective compute engines. In general, one or more memories may be configured to serve one, two, four, or eight compute engines in those embodiments having eight compute engines. [0054] Compute array 32 may include a row cache 130 (FIG. 4) which allows a data row of 1024 bits to be cached in a single cycle for flow accesses. This frees up the memory on subsequent flow accesses to that row. Because of the row cache, the programmer does not need to access quad words in order to minimize memory accesses. This often simplifies and reduces the size of the program. It also reduces power dissipation and allows better DMA access to memory. The row cache 130 preferably includes a load row cache (LRC) and a store row cache (SRC). Preferably, the row caches support only flow access, and not SIMD access. The row cache may be used in the embodiments of FIGS. 2 and 4. Compute array 32 may include one or more load row caches and one or more store row caches. [0055] The load row cache holds a currently accessed entire row of memory. It is automatically loaded the first time a flow load instruction is executed. Its function is to act like a cache to reduce the number of subsequent flow accesses to memory. It also provides an unpack function, particularly for short words. The controller does not access memory for flow again until the row address changes. For example, this buffer can hold 64 short words and save 63 accesses to memory.
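The complementary delay lines of the staging buffer 88 described in paragraph [0052] can be checked with a short sketch (illustrative only; the constants and helper names below are not from the specification):

# Load delays grow with section number; store delays are reversed, so every
# section's four-word slice of the wide DRAM row realigns after eight cycles.
NUM_SECTIONS = 8

def load_delay(section):
    return section                        # section 0: 0 cycles ... section 7: 7

def store_delay(section):
    return NUM_SECTIONS - 1 - section     # section 0: 7 cycles ... section 7: 0

# A slice delayed on load by k cycles and on store by (7 - k) cycles lands on
# the common row at the same fixed cycle for every section, so a single row
# write suffices.
for k in range(NUM_SECTIONS):
    assert load_delay(k) + store_delay(k) == NUM_SECTIONS - 1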
[0056] The store row cache packs incoming flow results until filled. When filled, the store row cache initiates a store to memory. It acts like a cache to reduce multiple individual accesses to memory. It also provides a pack function, particularly for short words. The controller does not store the store row cache to memory until the row address changes or until the row cache is full. For example, this buffer can hold 64 short word results and save 63 accesses to memory. [0057] The control block 30 issues load/store and compute instructions for compute array 32. Instructions enter the compute array from the left and flow from one section to the next on every clock cycle until they exit the array, eight clock cycles later. Data entering the array flows in the same way through the array, as specified by an instruction. In a similar way, the results of computations flow across the compute engines as specified by an instruction, and when flow is completed, the results can be stored in memory. Data and results may recirculate through the compute array 32 if more than eight compute engines are needed for an application. Conversely, the eight compute engines may be configured into groups for applications that require fewer compute engines. Groups are implemented by a switch in each compute engine. The memory in each compute engine is a common resource for all the compute engines when using flow instructions but is a local resource for SIMD (single instruction, multiple data) instructions. [0058] Instructions, data and results flow across the array horizontally from section to section, on each clock cycle. This flow creates the programming illusion that there is one processor and one memory, whether processing horizontally across the array, vertically in SIMD fashion, or in a combination of both, hence SIMD-FLOW. The ability to perform SIMD and flow operations at the same time adds significantly to the versatility of the architecture. The versatility of the flow architecture is enhanced by the ability to group the sections during flow operations to allow tasks that require a smaller number of operations to be optimally managed. With SIMD operations, there can be a considerable difference in what each section does. This is because most instructions can be conditional or can be modified by the unique ID of each section. [0059] In embodiments which include an individual memory in each section of the compute array, each section can perform an independent table lookup. Table lookup from memory can be performed in each section by adding an address offset register in each section to a common address broadcast to all sections by the address generator in the control block 30. This allows each section to perform an individual table lookup. The table is duplicated in each section to allow this simultaneous lookup. The memory access instruction has a bit that specifies whether the address offset register is invoked. [0060] Different addressing schemes may be utilized for SIMD operations (non-flow operations within a single compute engine) and flow operations (operations across compute engines). The addressing scheme is selected based on the instruction type. [0061] For flow memory accesses, the memory appears as a single, very wide word memory, with each row having 1024 bits or 32 words of 32 bits each in the embodiment which includes eight sections, each having a memory row of four 32-bit words. To accomplish this, each memory section responds only to its column address.
The flow address applies to only one memory section. The selected section places its data on the FMD bus 94. When using groups, the flow address is common to each group. Thus, for two groups of four sections each, the row size is 16 words but there are two sets of this row address. A flow addressing scheme in accordance with an embodiment of the invention is illustrated in FIG. 5. As shown in FIG. 5, the flow address increases linearly across all sections. [0062] For SIMD memory accesses, the X-memory and the Y-memory each appear as eight parallel memory banks. Each memory section receives an identical address and operates identically. Thus data is stored as eight data sets. The address and control for the memory sections flows from left to right on each clock cycle on the instruction flow bus 120, so that successive sections respond on successive cycles. The SIMD memory may be summarized as including eight identical memories, each section having a common address. The address is broadcast to all sections, and each section has four 32-bit words per row. The address space is that of one section. Thus, a load instruction loads one or more registers in each section from the same address in each memory section. However, each section responds one cycle after the previous section since the instruction flows across the array. A SIMD memory addressing scheme in accordance with an embodiment of the invention is illustrated in FIG. 6. As shown in FIG. 6, the SIMD address increases within each section. [0063] While FIGS. 5 and 6 illustrate different addressing schemes for flow operation and SIMD operation, it should be understood that different addressing schemes are not required. More particularly, a linear address space may be utilized for both flow operations and SIMD operations. In that case, the SIMD memory access uses a larger address increment to access the next memory row. [0064] A SIMD memory addressing scheme in accordance with another embodiment of the invention is illustrated in FIG. 6A. This addressing scheme allows the same memory address space to be used for both flow operations and SIMD operations. For SIMD memory accesses, the address generator increments the SIMD addresses on a modulo 32 basis at each quad boundary, i.e., the carry from address bit a1 carries to address bit a5. This is because the compute array itself implicitly increments the bits a4:2. Thus, the address space can remain identical and linear for SIMD and flow operations. The address generator modifies the address such that the carry from bit a1 is connected to bit a5 to adjust for the fact that the intervening bits are incremented implicitly in the array. Thus, an increment by one would cause the address to go from 0, 1, 2, 3, 32, 33, 34, 35, 64, 65, 66, etc. The intervening addresses are implicitly used in the array for SIMD accesses. The X-memory and Y-memory each appear as eight parallel memory banks. Each memory section receives an identical address and operates identically. Thus, data is stored as eight data sets. The address and control for the memory sections flows from left to right on each clock cycle (on the instruction flow bus) so that successive sections respond on successive cycles. [0065] A schematic block diagram of flow unit 72 in accordance with an embodiment of the invention is shown in FIG. 7. A quad D register 100 (D3:0) receives inputs on flow bus 102 and provides outputs on flow bus 102. The D register 100 interacts with selected registers in register file 76.
Flow unit 72 may further include a quad adder 110 and a quad A register 112 (A3:0). Adder 110 receives inputs on flow bus 102, and inputs from selected registers in register file 76. The results are placed in A register 112, and A register 112 provides outputs on flow bus 102. Flow unit 72 may further include a quad D' register 114 (D'3:0) which receives inputs on BFD bus 124 and provides outputs on BFD bus 124. Flow unit 72 may include other functions 116, such as transfer, compare, exclusive OR, for example, and a configurable gate array (CGA) 117 to provide flexibility. [0066] Flow operations are of two basic types: (1) a flow load operation from memory with an operation such as a shift across all sections, and (2) a flow store operation to memory, with an operation such as accumulate across all sections. Flow memory accesses are similar to SIMD memory accesses, except that they access memory by row rather than by column. [0067] The following instructions support flow operations:
OP Rm == [Jm]; // Flow Load Instruction
[Jm] OP == Rm; // Flow Store Instruction
[0068] The flow load instruction loads the Rm register from a memory location Jm and performs an operation OP across all sections, typically a delay line shift or a broadcast. The flow store instruction performs an operation OP, typically an accumulation, on register Rm across all sections and stores the results to a memory location Jm. The "==" sign indicates a flow instruction. The OP operation indicates several types of operations that can be performed during flow operations, for example, accumulate, shift, transfer, broadcast, and exclusive OR. Flow operations are performed in the flow unit 72, which has access to the register file 76 in a manner similar to a memory access and may be thought of as an alternate memory access. [0069] The flow load operation, or DFLOW operation, is described with reference to FIGS. 8A and 8B. In connection with a flow load operation, flow unit 72 in each of the compute engines utilizes quad D register 100 (D3:0), which defines a flow load path. In the first compute engine 50, the flow load operation involves a load from memory on FMD bus 94. In each of the other compute engines 51, 52, ... , 57, the flow load operation involves a flow between sections on flow bus 102, the details depending on the specified operation OP and the specified register or registers. [0070] Consider an example of a flow shift operation given by SH R6==[J3+=1], which implements a delay of three cycles in each section. The flow load operation begins in successive sections of compute array 32 on successive clock cycles. This instruction involves the following operations in each of the compute engines. First, registers R6:4 are read from register file 76 into locations D2:0 in quad D register 100. Next, register D0 is shifted out on flow bus 102. Then, registers D2:1 are shifted to registers D1:0, and the incoming flow is shifted to register D2. In the first section (compute engine 50), the incoming flow is from memory on FMD bus 94. In the other sections, the incoming flow is from the previous section on flow bus 102. Finally, registers D2:0 are written into registers R6:4 in register file 76. [0071] In the flow load operation, the shift register length is determined by the entry point of the DFLOW instruction. In the flow load instruction, the register number is specified modulo 4. Thus, SH R2 and SH R14 both define a shift register having a length of three words. Further, the instruction can specify one to four words.
The same number of registers are shifted out as are shifted in. The register options for the flow load operation are R0; R1; R1:0; R2; R2:1; R2:0; R3; R3:2; R3:1; and R3:0. Higher number registers are specified using modulo 4 notation. [0072] A broadcast instruction of the type B R15:12 == [mem] broadcasts the memory contents into registers R15:12 in each section of the compute array. [0073] The flow store operation, or AFLOW operation, is described with reference to FIGS. 9A and 9B. In connection with a flow store operation, flow unit 72 utilizes adder 110 and quad A register 112 (A3:0), which define a flow store path. The flow unit 72 operates on data as it flows between sections of the compute array. The primary AFLOW operation is accumulation, but other operations such as exclusive OR, compare, max/min, transfer, logical and shift may be utilized. The flow may be from any register in register file 76. The A register 112 holds accumulation results between sections. The output of the last section (compute engine 57) is stored in memory on FMD bus 94. The flow path can sum the Rn registers from each section of the compute array, so that the AFLOW output is the sum of all the Rn registers in the compute array. When AFLOW data enters a section, it is added to the local Rn registers and is output to the next section which receives the data in the following cycle. This repeats for all sections. The result appears at the last section after eight clock cycles. For example, the instruction [adr1]+==R0; sums register R0 from each of the eight sections of the compute array and stores the result at address adr1 in memory. The AFLOW operation initializes to zero the flow input to the first section of the compute array. [0074] Consider an example of a flow accumulate instruction given by [J3+=1]==+R5:4, as illustrated in FIG. 9A. The flow store operations begin in successive sections of compute array 32 on successive clock cycles. First, registers R5:4 are read from register file 76 and are summed with the incoming flow. The result is placed temporarily in A register 112. The sum out from A register 112 represents the flow input plus registers R5:4. [0075] The flow unit 72 is described above as interacting with selected registers in register file 76. For example, the contents of D register 100 can be written to selected registers in register file 76, and vice versa. In addition, the contents of selected registers in register file 76 can be added to the flow data. In other embodiments, flow unit 72 does not interact with register file 76. Instead, compute block 74 may read from and write to registers in flow unit 72. [0076] Datapath buses between compute engines 50-57 of compute array 32 are described with reference to FIG. 10A. Instruction bus 120 is connected between instruction pipes 70 of successive compute engines. Instructions enter and flow through the compute array on the instruction bus 120. Instructions enter the first section (compute engine 50) and are clocked into each successive section on each clock cycle. This bus supplies the addresses for two or three memory accesses and the controls for two compute operations. Much of the control logic for the compute array may be located in control block 30 so as to avoid duplication in each section. The three memory accesses in each cycle can be to X-memory 78 or 84, Y-memory 80 or 86 and DMA buffer 82.
[0076] Datapath buses between compute engines 50-57 of compute array 32 are described with reference to FIG. 10A. Instruction bus 120 is connected between instruction pipes 70 of successive compute engines. Instructions enter and flow through the compute array on the instruction bus 120. Instructions enter the first section (compute engine 50) and are clocked into each successive section on each clock cycle. This bus supplies the addresses for two or three memory accesses and the controls for two compute operations. Much of the control logic for the compute array may be located in control block 30 so as to avoid duplication in each section. The three memory accesses in each cycle can be to X-memory 78 or 84, Y-memory 80 or 86 and DMA buffer 82. DMA buses 22 and 24 supply the compute array with DMA data, which may be driven by one of several input sources or sent to several output destinations. The DMA data is placed in the two-quad-word DMA buffer 82 in each section. The DMA bus can be used by the main processor 12 to access the compute array memory directly. DMA buses 22 and 24 can be used as two 128-bit buses or a single 256-bit bus. DMA buses 22 and 24 are coupled through a buffer 122. [0077] The flow bus 102 allows data to transfer, broadcast or shift from section to section and to recirculate if more than eight sections are required. Typically, the flow data to be shared or to be shifted across the compute array enters the array in the first section and is shifted across all the sections. The flow bus 102 is also used to accumulate or otherwise operate on computation results by flowing the results from left to right in each section. Flow for the shift function and the accumulate or broadcast function are often needed together but not necessarily in the same cycle. Because flow operations can be arranged so they are not needed on every cycle, flow bus 102 can be used for both DFLOW and AFLOW operations if used in different cycles. In other embodiments, compute array 32 may include two or more flow buses, and DFLOW and AFLOW operations can execute simultaneously. [0078] The FMD bus 94 is used to load data when grouping has not been selected. The FMD bus 94 is also used to store flow data to memory. The last compute engine in the group provides the data and the address tag for the store location. For loads, only the first section provides the address, and one of the sections 1-7 responds by driving its data to section 0. For stores without grouping, the last section provides the data and the address. FMD bus 94 is coupled to control block 30 through buffer 122. [0079] The BFD bus 124 is used when groups of two or four compute engines are selected. The BFD bus 124 is switched to support grouping of sections. The BFD bus 124 allows flow data to return to the start of the group and is also used for flow memory access within the group. For groups of two or four compute engines, the BFD bus 124 is used for recirculation, for data shuffle and for flow memory access. [0080] FIG. 10B is a simplified version of FIG. 10A depicting compute engines 50-57 and various paths along which data may flow therebetween on, e.g., flow bus 102 and/or BFD bus 124. Data flow path 125, shown as solid arrows between compute engines 50-57, is the typical path along which data is shifted as successive instructions are executed in successive compute engines. For example, data flows from compute engine 50 to compute engine 51 on a single clock cycle, from compute engine 51 to compute engine 52 on the next clock cycle, etc. In the illustrated embodiment containing eight compute engines, data recirculates along recirculation path 126 from compute engine 57 back to compute engine 50 if more than eight instructions are executed. In the "circular" arrangement of compute engines 50-57 depicted in FIG. 10B, data flows along data flow path 125 and recirculation path 126 only between compute engines adjacent to each other; thus, each successive instruction may be executed on a successive clock cycle, i.e., latency is low. [0081] As described above (and below with reference to FIG. 13), compute engines 50-57 may be grouped such that only a subset of the compute engines executes a particular group of instructions on particular data.
For example, if only two compute engines are utilized in a group, data may flow along data flow path 125 from, e.g., compute engine 50 to compute engine 51, and then may recirculate back to compute engine 50 along recirculation path 127. As in the above-described embodiment utilizing eight compute engines, latency is low since the two grouped compute engines are adjacent. Any two adjacent compute engines may be grouped in the same advantageous fashion. [0082] However, in the embodiment depicted in FIG. 10B, latency may result in groupings containing more than two and fewer than all eight compute engines 50-57. For example, in a group containing four compute engines 50-53, data mainly flows along data flow path 125 between compute engines with low latency. However, once the data reaches compute engine 53, it must recirculate to compute engine 50 along recirculation path 128. Since compute engines 50 and 53 are not adjacent, data recirculating on recirculation path 128 must flow back through compute engines 51 and 52 in order to reach compute engine 50. Thus, data flowing along recirculation path 128 requires three clock cycles to travel between compute engine 53 and compute engine 50, increasing latency and hampering the overall performance of processor 20. Similarly, increased latency results from compute engine groupings containing three, five, six, or seven compute engines, as recirculation between the final and initial compute engines in a group requires more than a single clock cycle. [0083] Referring to FIG. 10C, an alternative arrangement of compute engines 50-57 decreases latency for groupings containing four compute engines while maintaining low latency for groupings of two or all eight compute engines. Just as in the embodiment depicted in FIG. 10B, compute engines 50 and 51 form a two-engine group having a single-cycle (i.e., low-latency) recirculation path 127, and compute engines 50-57 form an eight-engine group having a single-cycle recirculation path 126. In addition, the alternative arrangement depicted in FIG. 10C enables compute engines 50-53 to form a four-engine group having a single-cycle recirculation path 128, as compute engines 50 and 53 are adjacent. (Similarly, compute engines 54-57 may form another four-engine group having a low-latency recirculation path between adjacent compute engines 57 and 54.) Further, any recirculation latency in groupings of other numbers of compute engines 50-57 (i.e., groups having three, five, six, or seven compute engines) is no worse than in the embodiment illustrated in FIG. 10B. [0084] FIG. 10D depicts an arrangement of sixteen compute engines having low-latency recirculation paths for groups of two, four, eight, and sixteen compute engines. As in FIGS. 10B and 10C, single-cycle recirculation path 127 may be utilized between any two adjacent compute engines (for clarity, only one such recirculation path 127 is shown in FIG. 10D), single-cycle recirculation paths 128 are utilized in four-engine groups (e.g., groups including compute engines CE0-CE3, CE4-CE7, CE8-CE11, or CE12-CE15), and single-cycle recirculation paths 126 are utilized in eight-engine groups (e.g., groups including compute engines CE0-CE7 or CE8-CE15). Additionally, single-cycle recirculation path 129 connects compute engines CE15 and CE0 when data is cycling through all sixteen compute engines. A comparison of recirculation costs in the two physical arrangements is sketched below.
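The latency argument can be checked with a small model. The Python sketch below compares recirculation cost in a purely linear ring (FIG. 10B style) against an arrangement with additional adjacency inside four-engine groups (FIG. 10C style); the link tables are assumptions chosen to illustrate the idea, not a description of the actual wiring.

```python
# Illustrative latency model for recirculation between compute engines.
# Engines pass data only to physically adjacent engines, so recirculating
# from the last engine of a group to the first costs one clock per hop.
# The link sets below are assumptions made for illustration.

from collections import deque

def hops(links, src, dst):
    """Breadth-first search: minimum number of single-cycle hops."""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for a, b in links:
            nxt = b if a == node else a if b == node else None
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

ring = [(i, i + 1) for i in range(7)] + [(7, 0)]   # FIG. 10B style
alt = ring + [(0, 3), (4, 7)]                      # FIG. 10C style: engines at
                                                   # each four-group boundary
                                                   # are also adjacent
print(hops(ring, 3, 0))  # 3 cycles: recirculation path 128 in FIG. 10B
print(hops(alt, 3, 0))   # 1 cycle: engines 53 and 50 are adjacent
print(hops(alt, 7, 0))   # 1 cycle: full eight-engine recirculation unaffected
```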
[0085] The DFLOW path permits pre-addition of taps for symmetric filters to double performance. Symmetric filters have coefficients that are symmetric about a center point of the filter. Data values corresponding to equal coefficients can be pre-added prior to multiplication by the coefficients, thereby reducing the required number of multiplications. The taps needed for pre-addition can be made available by flowing the BFD bus 124 through the D'3:0 registers as shown in FIG. 11. When using 16-bit data, the data is accessed as two pairs of long words. Registers D'1:0 and D1:0 are added in the ALU and then multiplied by four short coefficients with sideways sum. The number of taps per section can be two, four, or eight for 32-bit data. For 16-bit data, the most efficient use is with eight or sixteen taps per section. To implement the back-flowing shift register, 16 or 32 bits of the BFD bus 124 are redirected to pass through the D'3:0 registers. This shifts in the opposite direction to the DFLOW path. A sketch of the pre-addition arithmetic follows.
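The arithmetic saving from pre-addition is straightforward to demonstrate. The Python sketch below computes one output of a symmetric FIR filter both directly and with pre-added taps; it illustrates the technique rather than the hardware datapath, and the tap count (an even number) is an assumption.

```python
# Symmetric FIR: coefficients mirror about the center, i.e. c[k] == c[N-1-k].
# Pre-adding the data values that share a coefficient halves the number of
# multiplications: N multiplies become N/2 (for even N).

def fir_direct(coeffs, window):
    """Direct form: one multiply per tap."""
    return sum(c * x for c, x in zip(coeffs, window))

def fir_symmetric(coeffs, window):
    """Pre-added form: one multiply per coefficient pair."""
    n = len(coeffs)
    assert all(coeffs[k] == coeffs[n - 1 - k] for k in range(n))
    half = n // 2
    # Pair each forward-shifting tap with its back-flowing mirror tap.
    return sum(coeffs[k] * (window[k] + window[n - 1 - k]) for k in range(half))

coeffs = [1, 3, 5, 5, 3, 1]           # symmetric, 6 taps
window = [2, 4, 6, 8, 10, 12]         # current contents of the delay line
assert fir_direct(coeffs, window) == fir_symmetric(coeffs, window)
print(fir_symmetric(coeffs, window))  # 126, using 3 multiplies instead of 6
```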
[0086] A memory address bus 118 (FIG. 4) provides flow memory addresses when loading or storing from X-memory 78 or 84, or Y-memory 80 or 86. The address is available to all sections in the group in the same cycle, but only the section in the group with the corresponding address responds. [0087] One of the compute engines is shown in greater detail in FIG. 12. Register file 76 may be a multi-port register file having between 32 and 64 registers. The register file 76 supports multiplier 90, ALU 92, flow unit 72, MAC adder 96, and memory banks 78 or 84 and 80 or 86, and may have ten ports to support these units. The multiplier 90 and the flow unit 72 perform a read-modify-write in each cycle. The register file 76 may also have a shadow copy which can be used for a single-cycle context switch to support interrupts when streaming data. [0088] The multiplier 90 may be a 32 x 32 fixed-point multiplier, with built-in correlation capabilities and selectable data precision. The multiplier can perform quad 16-bit multiplies with both real and complex data types. Complex 16-bit data includes 8-bit real and 8-bit imaginary data. An additional cycle of latency is required for 32 x 32 multiplies. The MAC adder 96 and the multiplier 90 are semi-independent units. The path from the multiplier to the MAC adder may contain partial sums which can be resolved in the MAC adder. [0089] The versatility of the flow architecture is enhanced by the ability to group sections during flow operations. This allows tasks that require a smaller number of operations to be optimally managed. Examples include 4-tap FIRs and 4 x 4 matrix operations. The grouping uses the special organization on the BFD bus 124 to select different memory sections for this purpose. The group operation is specified by the flow instructions on a cycle-by-cycle basis. Grouping allows flow operations to occur within subsets of the eight sections. As shown in FIG. 13, the first four sections (0-3) can be combined into a first group 140 and the last four sections (4-7) can be combined into a second group 142, for example. Each group recirculates its data within the group and each group may operate on different data. The compute array can be subdivided into four groups of two or two groups of four. Grouping can also be used to provide odd numbers of sections working together, e.g., three or seven sections. The unused sections apply automatic zero padding. The group control can apply two types of actions: (a) common data to all groups, or (b) independent groups where group memory data is used within the group. [0090] Groups are configured using the switches 104 and 106 (FIG. 3) in appropriate compute engines. An example of compute array 32 configured as groups 140 and 142, each having four compute engines and each having recirculation of flow data, is shown schematically in FIG. 13. A switch 150 between sections 3 and 4 connects the flow bus 102 output of section 3 to the BFD bus 124 in section 3 and connects the flow bus 102 input of section 4 to the BFD bus 124 in section 4. Also, a switch 152 in section 0 connects the flow bus 102 input of section 0 to the BFD bus 124 in section 0. This permits separate flow operations within groups 140 and 142. [0091] When more than eight sections are required for an application, the compute array allows the flow operation to recirculate, creating the effect of a very long flow array. The number of recirculations is practically unlimited, but throughput remains limited by the number of compute engines, i.e., the compute engines are time-shared. Recirculation feeds the DFLOW and AFLOW operations from the last section back to the first section or from the end of a group to the start of a group. At the end of the recirculation sequence, the accumulated result is written to memory. [0092] Shuffle is an operation wherein memory data is interchanged between sections of the compute array. Data can be shuffled, or interchanged, in groups of two, four or eight. This operation is useful for the fast Fourier transform, where the first three stages interchange data between sections when the data is stored sequentially in memory in rows. The shuffle operation uses the load row cache and the store row cache as a buffer. Initially, data is transferred from selected memory locations in each section to the load row cache. Then, the load row cache of each section is moved, in a shuffle operation, to the store row cache of the next section using the flow and BFD buses. This operation is repeated for the desired shuffle shift. Then, the store row cache is stored to the specified memory locations in all sections. A quad word is transferred each time the shuffle operation is executed. If the shuffle is between nearest neighbors, the shuffle operation is executed once. If the shuffle is between sections four spaces apart, then the shuffle operation is repeated twice. If the shuffle is between sections eight spaces apart, the shuffle operation is repeated four times. A software model of this rotation appears below.
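One step of the shuffle can be modeled as moving each section's load row cache into the store row cache of the next section. The Python sketch below is a schematic model under assumed data structures (a list of per-section quad words); the bus-level details and the exact repetition schedule for larger shifts are omitted.

```python
# Illustrative model of one shuffle step across an 8-section compute array.
# Each section moves its load row cache to the store row cache of the next
# section; repeating the step moves data further along the array.

NUM_SECTIONS = 8

def shuffle_step(load_caches):
    """Return the store row caches after one shuffle execution."""
    store_caches = [None] * NUM_SECTIONS
    for s in range(NUM_SECTIONS):
        store_caches[(s + 1) % NUM_SECTIONS] = load_caches[s]
    return store_caches

# Usage: each section starts with one quad word labeled by its section number.
caches = [f"quad-{s}" for s in range(NUM_SECTIONS)]
caches = shuffle_step(caches)   # nearest-neighbor interchange: one execution
print(caches)                   # section 1 now holds quad-0, and so on
caches = shuffle_step(caches)   # repeat the step for a larger shuffle shift
print(caches)
```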
[0093] FIG. 14 shows the elements of the control block 30 and their interactions with the compute array 32 and the system buses. The control block 30 includes program memory 44, program sequencer 42, integer ALUs JALU 38 and KALU 40, an interrupt controller and interrupt stack 48, DMA controller 36 and control logic 34. In addition, data memory 46 is used for storing parameters and for saving and restoring during context switch. Data memory 46 may include two small memory banks of 1K words, for example. Control block 30 may further include a decode unit 60 and a digital oscillator 62. Two local buses in control block 30 allow ALUs 38 and 40 to save and restore to the banks of data memory 46. The save/restore can operate simultaneously in the compute array 32 and the control block 30, doubling the performance compared to the main processor. [0094] The control block 30 issues instructions (load/store and compute) to compute array 32. Instructions enter the compute array 32 at compute engine 50 and pass into each compute engine sequentially on successive clock cycles until they exit the array eight clock cycles later. The flow data specified in the instruction enters the array and flows through the compute array in the same way. [0095] The DMA bus 22 connects only to the compute array 32 and is used primarily to transfer I/O data to the memory of compute array 32. DMA bus 22 can also be used to transfer data from the main memory 14. The DMA bus 24 can connect either to the compute array 32 or to the control block 30 and allows direct read/write between the processor 20 and main memory 14, in either direction. The transfers can be via DMA or program control. The control processor 12 (FIG. 1) can access the processor 20 via the DMA bus 24 to read or write certain registers, write an interrupt vector or check on status, download program memory, or download or upload data memory in either the control block 30 or in the compute array 32. [0096] The program memory 44 is relatively small compared to the program memory available in the control processor 12. The memory may be 4K words of 32 bits and may be implemented as a cache. The program memory may be 256 bits wide to allow up to eight instructions to be issued per cycle, including two instructions for the integer ALUs, one instruction for the program sequencer and two instructions for the compute array computations, plus one extra instruction for immediate data. The data memory 46 is used primarily to hold additional parameters for the integer ALUs and for save/restore on context switches. The program sequencer 42 is a simplified version of the program sequencer used in the main processor 12 and fetches up to eight instructions per cycle. The program sequencer includes a program counter, branch hardware, a branch target buffer, an interrupt vector table, etc. The JALU 38 and the KALU 40 are integer ALUs having an address space of 32 bits. Each of JALU 38 and KALU 40 can access either compute array memory bank. Interrupt stack 48 allows multiple interrupts to be serviced sequentially according to their priority. [0097] The processor 20 may support data streaming from DMA or directly from I/O ports via a FIFO to the flow path. The streaming data passes through compute array 32 and is processed without being loaded from memory. Data streaming may be supported by a fast context switch. In data streaming, I/O data is not written to memory but is placed in the DMA buffer 82 to be processed directly by the compute engines. [0098] DMA interleaving may be utilized as shown in FIG. 15. DMA buffers 82 for each section are illustrated in FIG. 15. In order to interleave by four, as shown in the first row of FIG. 15, two quad words are loaded into location DMAB (0) of each buffer. Then two quad words are loaded into location DMAB (1) of each DMA buffer; two quad words are loaded into location DMAB (2) of each DMA buffer; and two quad words are loaded into location DMAB (3) of each DMA buffer. In order to interleave by two, as shown in the second row of FIG. 15, four quad words are loaded into locations DMAB (2,0) of each DMA buffer and then four quad words are loaded into locations DMAB (3,1) of each DMA buffer. In order to interleave by pairs, as illustrated in the third row of FIG. 15, four quad words are loaded into locations DMAB (1,0) of each DMA buffer and then four quad words are loaded into locations DMAB (3,2) of each DMA buffer.
For DMA transfers with no interleave, quad words are loaded sequentially by section. In order to perform a SIMD load, one section of the DMA buffer is loaded at a time. In addition, group loading can be performed by loading groups of DMA buffers. [0099] The SIMD-FLOW architecture of compute array 32 is well suited to operating on data sets either by column or by row or by a combination of row and column. The following examples illustrate such operations. [0100] FIG. 16 shows a column-by-column dot product where each section of the compute array operates SIMD-style on a single channel (eight channels DA through DH). No data is shared between sections of the compute array. This works well if the arriving data sets are arranged in columns and there is no data to be shared. Thus in section 0, coefficient CA0 is multiplied by data DA0, coefficient CA1 is multiplied by data DA1, etc., and the results are accumulated to provide a channel A sum. Similar operations are performed in the other sections. [0101] Row-by-column processing is illustrated in FIG. 17. The coefficients CA0-CA7 are arranged horizontally by row in memory while the data DA0-DAn is arranged vertically by column in memory. Each coefficient is flowed (broadcast) across the entire array using the DFLOW path. The coefficients flow through each section, along with the corresponding instruction, so that each section appears to be performing the same operation but time-delayed. This is an example of a SIMD multiply-accumulate operation with shared coefficients. For beam-forming, this allows one set of coefficients to be shared by all time samples of each antenna (horizontal) while the summation is done across all the antennas (vertically). If the data and coefficients are interchanged, a polyphase FIR filter can be implemented, with each section implementing a different phase of the filter from the same data. Matrix multiply can be done with the method of FIG. 17. The row of one matrix C is broadcast and multiplied with eight columns of the second matrix D. The eight results are in the MAC of each section. [0102] Row-by-row processing is illustrated in FIG. 18. The example of FIG. 18 shows a row-by-row dot product as in a vector-by-vector multiplication. In row-by-row operations, neither data nor coefficients are flowed, but the multiplier result from each section is accumulated in the AFLOW path. Each section sums its product with the arriving sum and passes the result on to the next section. [0103] Row-by-row processing with shift is illustrated in FIG. 19. The example of FIG. 19 shows how an FIR filter can be implemented with a row-by-row operation. The row of data is stored in a shift register (DFLOW path) to implement a tapped delay line. The data is maintained in memory, then loaded into the flow path, shifted and optionally returned to memory. To save power and memory bandwidth, the load and store operations can be performed after multiple shifts. When operating on a small number of channels, the DFLOW data can be kept in the register file and this data is not copied to memory. [0104] Row-by-row processing with shift and broadcast is illustrated in the example of FIG. 20. FIG. 20 shows a variant of an FIR filter implementation where the data is shifted as usual in the delay line, but the coefficients are flow broadcast to all sections. The accumulation is done in place in an accumulation register MR0 in each section and these are stored using a SIMD instruction. A software model combining the shift and broadcast is sketched below.
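To make the shift-and-broadcast variant concrete, the following Python sketch runs a filtering pass in which each section keeps its own delay line, every section receives the same broadcast coefficient each cycle, and products accumulate in a per-section register (the MR0 analogue). The section count, tap count, and data are assumptions; this models the data movement, not the instruction encoding.

```python
# Illustrative model of FIG. 20-style processing: each section keeps its own
# tapped delay line, receives the same broadcast coefficient every cycle, and
# accumulates one coefficient-sample product per cycle in an MR0-style register.

NUM_SECTIONS = 8

def fir_shift_broadcast(coeffs, channels):
    """channels: one input sample list per section (one channel per section).
    Returns the per-section accumulations after all coefficients are broadcast.
    """
    taps = len(coeffs)
    delay = [[0] * taps for _ in range(NUM_SECTIONS)]
    mr0 = [0] * NUM_SECTIONS
    for t, c in enumerate(coeffs):      # one coefficient broadcast per cycle
        for s in range(NUM_SECTIONS):
            delay[s] = [channels[s][t]] + delay[s][:-1]  # shift in new sample
            mr0[s] += c * delay[s][0]   # accumulate in place, SIMD-style
    return mr0                          # stored afterwards with a SIMD store

coeffs = [2, -1, 3, 1]
channels = [[s + 1] * len(coeffs) for s in range(NUM_SECTIONS)]  # constant data
print(fir_shift_broadcast(coeffs, channels))   # section s yields 5 * (s + 1)
```

Over a full pass, each section's MR0 accumulates the dot product of the broadcast coefficient stream with that channel's sample stream, which is the in-place accumulation FIG. 20 describes.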
[0105] Processor 20 is shown in FIG. 1 and described above as operating with a control processor. However, processor 20 is not limited in this respect. The architecture of processor 20 permits operation as a stand-alone processor configured for computation-intensive applications. [0106] As described above, processor 20 can perform flow operations that flow from compute engine to compute engine on successive cycles, can perform SIMD operations within individual compute engines, and can perform a combination of flow operations and SIMD operations. Flow operations are driven by flow instructions, and SIMD operations are driven by SIMD instructions. Memory may be accessed according to the type of operation being performed. In the compute array, multiple compute engines receive the same instructions but delayed in time. Processor 20 includes a common sequencer and address generators for multiple compute engines. [0107] The processor architecture described herein achieves low power dissipation. The design is compact and uses short bus lengths and small drivers. The power consumption in the control block is amortized over multiple compute engines. Small memories are utilized, and reduced data movement is required for many applications. The register file may have a capacity of 256 words, so that fewer memory accesses are required. The design may be optimized for deep pipelines and low voltage rather than for fast devices. [0108] The terms and expressions employed herein are used as terms and expressions of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described or portions thereof. In addition, having described certain embodiments of the invention, it will be apparent to those of ordinary skill in the art that other embodiments incorporating the concepts disclosed herein may be used without departing from the spirit and scope of the invention. Accordingly, the described embodiments are to be considered in all respects as only illustrative and not restrictive. [0109] What is claimed is:
A biometric system may include an ultrasonic sensor array, a light source system and a control system. Some implementations may include an ultrasonic transmitter. The control system may be capable of controlling the light source system to emit light and of receiving signals from the ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object in response to being illuminated with the light emitted by the light source system. The control system may be capable of performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array.
1. A biometric system, comprising: a substrate; an ultrasonic sensor array on, or proximate, the substrate; a light source system; and a control system capable of: controlling the light source system to emit light; receiving signals from the ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object, said emissions due to the target object being illuminated with light emitted by the light source system; and performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array.
2. The biometric system of claim 1, wherein the control system is capable of selecting a wavelength of the light emitted by the light source system.
3. The biometric system of claim 2, wherein the control system is capable of selecting the wavelength and a light intensity associated with the selected wavelength to illuminate portions of the target object.
4. The biometric system of claim 1, wherein the control system is capable of selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array.
5. The biometric system of claim 1, further comprising an ultrasonic transmitter.
6. The biometric system of claim 5, wherein the control system is capable of controlling the ultrasonic transmitter to obtain fingerprint image data via the ultrasonic sensor array and wherein the authentication process involves evaluating the fingerprint image data.
7. The biometric system of claim 1, wherein the light emitted by the light source system is transmitted through the substrate.
8. The biometric system of claim 1, wherein the light source system includes one or more laser diodes or light-emitting diodes.
9. The biometric system of claim 1, wherein the light source system includes at least one infrared, optical, red, green, blue, white or ultraviolet light-emitting diode or at least one infrared, optical, red, green, blue or ultraviolet laser diode.
10. The biometric system of claim 1, wherein the light source system is capable of emitting a light pulse with a pulse width less than about 100 nanoseconds.
11. The biometric system of claim 1, wherein the light source system is capable of emitting a plurality of light pulses at a pulse frequency between about 1 MHz and about 100 MHz.
12. The biometric system of claim 11, wherein the pulse frequency of the plurality of light pulses corresponds to an acoustic resonant frequency of the ultrasonic sensor array and the substrate.
13. The biometric system of claim 1, wherein the control system is further capable of comparing, for the purpose of user authentication, attribute information obtained from received image data, based on the signals from the ultrasonic sensor array, with stored attribute information obtained from image data that has previously been received from an authorized user.
14. The biometric system of claim 13, wherein the attribute information obtained from the received image data and the stored attribute information includes attribute information corresponding to at least one of sub-epidermal features, muscle tissue features or bone tissue features.
15. The biometric system of claim 14, wherein the attribute information obtained from the received image data and the stored attribute information includes attribute information corresponding to sub-epidermal features and wherein the sub-epidermal features include one or more features from a list of features consisting of features of the dermis, features of the subcutis, blood vessel features, lymph vessel features, sweat gland features, hair follicle features, hair papilla features and fat lobule features.
16. The biometric system of claim 13, wherein the attribute information obtained from the received image data and the stored attribute information includes information regarding fingerprint minutia.
17. The biometric system of claim 1, wherein the control system is further capable of, for the purpose of user authentication: obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter; and obtaining ultrasonic image data via illumination of the target object with light emitted from the light source system.
18. The biometric system of claim 17, wherein the ultrasonic image data obtained via insonification of the target object includes fingerprint image data and wherein the ultrasonic image data obtained via illumination of the target object includes vascular image data.
19. The biometric system of claim 1, wherein the target object is a finger or a finger-like object.
20. The biometric system of claim 1, wherein the target object is positioned on a surface of the ultrasonic sensor array or positioned on a surface of a platen that is acoustically coupled to the ultrasonic sensor array.
21. The biometric system of claim 1, wherein the control system is further configured to make a liveness determination of the target object based on the received signals.
22. A mobile device that includes the biometric system of any one of claims 1-20.
23. A biometric system, comprising: a substrate; an ultrasonic sensor array on, or proximate, the substrate; a light source system; and control means for: controlling the light source system to emit light; receiving signals from the ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object, said emissions due to the target object being illuminated with light emitted by the light source system; and performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array.
24. The biometric system of claim 23, wherein the control means includes means for selecting a wavelength of the light emitted by the light source system.
25. The biometric system of claim 23, wherein the control means includes means for selecting the wavelength and a light intensity associated with the selected wavelength to illuminate portions of the target object.
26. The biometric system of claim 23, wherein the control means includes means for selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array.
27. The biometric system of claim 23, further comprising an ultrasonic transmitter.
28. The biometric system of claim 27, wherein the user authentication process involves: ultrasonic image data obtained via insonification of the target object with ultrasonic waves from the ultrasonic transmitter; and ultrasonic image data obtained via illumination of the target object with light emitted from the light source system.
29. A biometric authentication method, comprising: controlling a light source system to emit light; receiving signals from an ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object in response to being illuminated with light emitted by the light source system; and performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array.
30. The method of claim 29, further comprising obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter, wherein the user authentication process is based, at least in part, on the ultrasonic image data.
31. The method of claim 29, further comprising selecting a wavelength and a light intensity of the light emitted by the light source system to selectively generate acoustic wave emissions from portions of the target object.
32. The method of claim 29, further comprising selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array.
33. The method of claim 29, wherein controlling the light source system involves controlling a light source system of a mobile device.
34. The method of claim 33, wherein controlling the light source system involves controlling at least one backlight or front light capable of illuminating a display of the mobile device.
35. A non-transitory medium having software stored thereon, the software including instructions for controlling at least one device to: control a light source system to emit light; receive signals from an ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object in response to being illuminated with light emitted by the light source system; and perform a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array.
36. The non-transitory medium of claim 35, wherein the software includes instructions for obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter and wherein the user authentication process is based, at least in part, on the ultrasonic image data.
37. The non-transitory medium of claim 35, wherein the software includes instructions for selecting a wavelength and a light intensity of the light emitted by the light source system to selectively generate acoustic wave emissions from portions of the target object.
38. The non-transitory medium of claim 35, wherein the software includes instructions for selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array.
39. The non-transitory medium of claim 35, wherein controlling the light source system involves controlling at least one backlight or front light capable of illuminating a display of a mobile device.
BIOMETRIC SYSTEM WITH PHOTOACOUSTIC IMAGING PRIORITY CLAIM [0001] This application claims priority to United States Application No. 15/149,046, filed on May 6, 2016 and entitled "BIOMETRIC SYSTEM WITH PHOTOACOUSTIC IMAGING," which is hereby incorporated by reference. TECHNICAL FIELD [0002] This disclosure relates generally to biometric devices and methods, including but not limited to biometric devices and methods applicable to mobile devices. DESCRIPTION OF THE RELATED TECHNOLOGY [0003] As mobile devices become more versatile, user authentication becomes increasingly important. Increasing amounts of personal information may be stored on and/or accessible by a mobile device. Moreover, mobile devices are increasingly being used to make purchases and perform other commercial transactions. Some mobile devices, including but not limited to smartphones, currently include fingerprint sensors for user authentication. However, some fingerprint sensors are easily spoofed. Improved authentication methods would be desirable. SUMMARY [0004] The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein. [0005] One innovative aspect of the subject matter described in this disclosure can be implemented in an apparatus. The apparatus may include a substrate, an ultrasonic sensor array on or proximate the substrate, a light source system and a control system. In some examples, the apparatus may be, or may include, a biometric system. In some implementations, a mobile device may be, or may include, the apparatus. For example, a mobile device may include a biometric system as disclosed herein. [0006] The control system may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. The control system may be capable of controlling the light source system to emit light and of receiving signals from the ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object. The emissions may be due to the target object being illuminated with light emitted by the light source system. The control system may be capable of performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array. [0007] The apparatus may or may not include an ultrasonic transmitter, depending on the particular implementation. If the apparatus includes an ultrasonic transmitter, the control system may be capable of controlling the ultrasonic transmitter to obtain fingerprint image data via the ultrasonic sensor array. The authentication process may involve evaluating the fingerprint image data. [0008] In some examples, the light source system may include one or more laser diodes or light-emitting diodes. For example, the light source system may include at least one infrared, optical, red, green, blue, white or ultraviolet light-emitting diode and/or at least one infrared, optical, red, green, blue or ultraviolet laser diode. In some implementations, the light source system may be capable of emitting a light pulse with a pulse width less than about 100 nanoseconds.
In some examples, the light source system may be capable of emitting a plurality of light pulses at a pulse frequency between about 1 MHz and about 100 MHz. The pulse frequency of the plurality of light pulses may, in some instances, correspond to an acoustic resonant frequency of the ultrasonic sensor array and/or the substrate. According to some implementations, the light emitted by the light source system may be transmitted through the substrate. According to some examples, the control system may be capable of selecting one or more acquisition time delays to receive acoustic wave emissions from one or more corresponding distances from the ultrasonic sensor array. [0009] In some implementations, the control system may be capable of selecting a wavelength of the light emitted by the light source system. According to some such implementations, the control system may be capable of selecting the wavelength and a light intensity associated with the selected wavelength to illuminate portions of the target object. [0010] According to some examples, the control system may be capable of comparing, for the purpose of user authentication, attribute information with stored attribute information obtained from image data that has previously been received from an authorized user. The attribute information may be obtained from received image data, based on the signals from the ultrasonic sensor array. In some examples, the attribute information obtained from the received image data and the stored attribute information may include attribute information corresponding to at least one of sub-epidermal features, muscle tissue features or bone tissue features. In some implementations, the attribute information obtained from the received image data and the stored attribute information may include attribute information corresponding to sub-epidermal features. In some such implementations, the sub-epidermal features may include features of the dermis, features of the subcutis, blood vessel features, lymph vessel features, sweat gland features, hair follicle features, hair papilla features and/or fat lobule features. Alternatively, or additionally, the attribute information obtained from the received image data and the stored attribute information may include information regarding fingerprint minutia. [0011] In some examples, the control system may be capable of, for the purpose of user authentication, obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter. The control system may be capable of obtaining ultrasonic image data via illumination of the target object with light emitted from the light source system. In some such examples, the ultrasonic image data obtained via insonification of the target object may include fingerprint image data. Alternatively, or additionally, the ultrasonic image data obtained via illumination of the target object may include vascular image data. [0012] According to some implementations, the target object may be positioned on a surface of the ultrasonic sensor array or positioned on a surface of a platen that is acoustically coupled to the ultrasonic sensor array. In some examples, the target object may be a finger or a finger-like object. According to some implementations, the control system may be configured to make a liveness determination of the target object based on the received signals.
[0013] Other innovative aspects of the subject matter described in this disclosure can be implemented in a biometric authentication method that may involve controlling a light source system to emit light. The method may involve receiving signals from an ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object in response to being illuminated with light emitted by the light source system. The method may involve performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array. [0014] In some examples, the method may involve obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter. The user authentication process may be based, at least in part, on the ultrasonic image data. [0015] In some instances, the method may involve selecting a wavelength and a light intensity of the light emitted by the light source system to selectively generate acoustic wave emissions from portions of the target object. In some examples, the method may involve selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array. [0016] In some examples, controlling the light source system may involve controlling a light source system of a mobile device. In some such examples, controlling the light source system involves controlling at least one backlight or front light capable of illuminating a display of the mobile device. [0017] Some or all of the methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, some innovative aspects of the subject matter described in this disclosure can be implemented in a non-transitory medium having software stored thereon. [0018] For example, the software may include instructions for controlling a light source system to emit light. The software may include instructions for receiving signals from an ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object in response to being illuminated with light emitted by the light source system. The software may include instructions for performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array. [0019] According to some examples, the software may include instructions for obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter. The user authentication process may be based, at least in part, on the ultrasonic image data. In some instances, the software may include instructions for selecting a wavelength and a light intensity of the light emitted by the light source system to selectively generate acoustic wave emissions from portions of the target object. In some examples, the software may include instructions for selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array.
According to some implementations, controlling the light source system may involve controlling at least one backlight or front light capable of illuminating a display of a mobile device. [0020] Other innovative aspects of the subject matter described in this disclosure also can be implemented in an apparatus. The apparatus may include an ultrasonic sensor array, a light source system and a control system. In some examples, the apparatus may be, or may include, a biometric system. In some implementations, a mobile device may be, or may include, the apparatus. For example, a mobile device may include a biometric system as disclosed herein. In some implementations, the ultrasonic sensor array and a portion of the light source system may be configured in an ultrasonic button, a display module and/or a mobile device enclosure. [0021] The control system may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. The control system may be operatively configured to control the light source system to emit light that induces acoustic wave emissions inside a target object. The control system may be operatively configured to select a first acquisition time delay for the reception of acoustic wave emissions primarily from a first depth inside the target object. The control system may be operatively configured to acquire first ultrasonic image data from the acoustic wave emissions received by the ultrasonic sensor array during a first acquisition time window. The first acquisition time window may be initiated at an end time of the first acquisition time delay. In some implementations, the first ultrasonic image data may be acquired during the first acquisition time window from a peak detector circuit disposed in each of a plurality of sensor pixels within the ultrasonic sensor array. [0022] In some examples, the apparatus may include a display. The control system may be configured to control the display to depict a two-dimensional image that corresponds with the first ultrasonic image data. [0023] According to some examples, the acquisition time delay may be measured from a time that the light source system emits light. In some implementations, the first acquisition time window may be in the range of about 10 nanoseconds to about 200 nanoseconds. In some instances, the control system may be operatively configured to select second through Nth acquisition time delays and to acquire second through Nth ultrasonic image data during second through Nth acquisition time windows after the second through Nth acquisition time delays. Each of the second through Nth acquisition time delays may correspond to a second through an Nth depth inside the target object. In some such examples, the apparatus may include a display and the control system may be configured to control the display to depict a three-dimensional image that corresponds with at least a subset of the first through Nth ultrasonic image data. [0024] In some examples, the light source system may include one or more laser diodes, semiconductor lasers and/or light-emitting diodes.
For example, the light source system may include at least one infrared, optical, red, green, blue, white or ultraviolet light-emitting diode and/or at least one infrared, optical, red, green, blue or ultraviolet laser diode. In some implementations, the light source system may be capable of emitting a light pulse with a pulse width less than about 100 nanoseconds. According to some implementations, the control system may be configured to control the light source system to emit at least one light pulse having a duration that is in the range of about 10 nanoseconds to about 500 nanoseconds. In some examples, the light source system may be capable of emitting a plurality of light pulses at a pulse frequency between about 1 MHz and about 100 MHz. [0025] In some implementations, the apparatus may include a substrate. In some such implementations, the ultrasonic sensor array may be formed in or on the substrate. In some examples, the light source system may be coupled to the substrate. According to some implementations, the light emitted by the light source system may be transmitted through the substrate. In some examples, light emitted by the light source system may be transmitted through the ultrasonic sensor array. In some implementations, the light emitted by the light source system may include a plurality of light pulses and the pulse frequency of the plurality of light pulses may correspond to an acoustic resonant frequency of the ultrasonic sensor array and/or the substrate. According to some examples, the control system may be capable of selecting one or more acquisition time delays to receive acoustic wave emissions from one or more corresponding distances from the ultrasonic sensor array. [0026] In some implementations, the control system may be capable of selecting a wavelength of the light emitted by the light source system. According to some such implementations, the control system may be capable of selecting the wavelength and a light intensity associated with the selected wavelength to illuminate portions of the target object. In some examples, the control system may be configured to select one or more wavelengths of the light to trigger acoustic wave emissions primarily from a particular type of material in the target object. [0027] According to some examples, the control system may be capable of comparing, for the purpose of user authentication, attribute information obtained from received image data, based on the signals from the ultrasonic sensor array, with stored attribute information obtained from image data that has previously been received from an authorized user. In some examples, the attribute information obtained from the received image data and the stored attribute information may include attribute information corresponding to at least one of sub-epidermal features, muscle tissue features or bone tissue features. In some implementations, the attribute information obtained from the received image data and the stored attribute information may include attribute information corresponding to sub-epidermal features. In some such implementations, the sub-epidermal features may include features of the dermis, features of the subcutis, blood vessel features, lymph vessel features, sweat gland features, hair follicle features, hair papilla features and/or fat lobule features.
Alternatively, or additionally, the attribute information obtained from the received image data and the stored attribute information may include information regarding fingerprint minutia. [0028] In some examples, the control system may be capable of, for the purpose of user authentication, obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter. The control system may be capable of obtaining ultrasonic image data via illumination of the target object with light emitted from the light source system. In some such examples, the ultrasonic image data obtained via insonification of the target object may include fingerprint image data. Alternatively, or additionally, the ultrasonic image data obtained via illumination of the target object may include vascular image data. [0029] According to some implementations, the target object may be positioned on a surface of the ultrasonic sensor array or positioned on a surface of a platen that is acoustically coupled to the ultrasonic sensor array. In some examples, the target object may be a finger or a finger-like object. According to some implementations, the control system may be configured to make a liveness determination of the target object based on the received signals. [0030] According to some implementations, controlling the light source system may involve controlling at least one backlight or front light capable of illuminating a display. The light source system may include at least one backlight or front light configured for illuminating the display and a target object. In some examples, controlling the light source system may involve controlling a light source system of a mobile device. In some such examples, controlling the light source system involves controlling at least one backlight or front light capable of illuminating a display of the mobile device. [0031] In some examples, the control system may be configured to estimate a blood oxygen level. According to some implementations, the control system may be configured to estimate a blood glucose level. [0032] In some examples, the control system may be configured to acquire second ultrasonic image data primarily from the first depth inside the target object. In some instances, the second ultrasonic image data may be acquired after a period of time corresponding to a frame rate. [0033] In some implementations, the control system may be configured for image stitching. For example, in some such implementations, the control system may be configured to acquire second ultrasonic image data at primarily the first depth inside the target object. The second ultrasonic image data may be acquired after the target object is repositioned on the apparatus or after the apparatus has been repositioned with respect to the target object. In some implementations, the control system may be configured to stitch together the first and second ultrasonic image data to form a composite ultrasonic image. [0034] The apparatus may or may not include an ultrasonic transmitter, depending on the particular implementation. If the apparatus includes an ultrasonic transmitter, the control system may be configured to acquire second ultrasonic image data from insonification of the target object with ultrasonic waves from the ultrasonic transmitter.
In some such examples, the second ultrasonic image data may be acquired primarily from the first depth inside the target object and the first ultrasonic image data and the second ultrasonic image data may be acquired from a plurality of sensor pixels within the ultrasonic sensor array. In some examples, the control system may be capable of controlling the ultrasonic transmitter to obtain fingerprint image data via the ultrasonic sensor array. The authentication process may involve evaluating the fingerprint image data and/or evaluating data that is based on the fingerprint image data, such as fingerprint minutiae. [0035] Still other innovative aspects of the subject matter described in this disclosure can be implemented in a method of acquiring ultrasonic image data that involves controlling a light source system to emit light. The light may induce acoustic wave emissions inside a target object. The method may involve selecting a first acquisition time delay to receive the acoustic wave emissions primarily from a first depth inside the target object. The method may involve acquiring first ultrasonic image data from the acoustic wave emissions received by an ultrasonic sensor array during a first acquisition time window. The first acquisition time window may be initiated at an end time of the first acquisition time delay. In some examples, the method may involve controlling a display to depict a two-dimensional image that corresponds with the first ultrasonic image data. [0036] In some examples, the acquisition time delay may be measured from a time that the light source system emits light. In some instances, the first acquisition time window may be in the range of about 10 nanoseconds to about 200 nanoseconds. [0037] In some examples, the method may involve selecting second through Nth acquisition time delays and acquiring second through Nth ultrasonic image data during second through Nth acquisition time windows after the second through Nth acquisition time delays. In some such examples, each of the second through Nth acquisition time delays may correspond to a second through an Nth depth inside the target object. [0038] Yet other innovative aspects of the subject matter described in this disclosure can be implemented in a non-transitory medium having software stored thereon. In some examples, the software may include instructions for controlling one or more devices to control a light source system to emit light. The light may induce acoustic wave emissions inside a target object. The software may include instructions for selecting a first acquisition time delay to receive the acoustic wave emissions primarily from a first depth inside the target object. The software may include instructions for acquiring first ultrasonic image data from the acoustic wave emissions received by an ultrasonic sensor array during a first acquisition time window. In some examples, the software may include instructions for controlling a display to depict a two-dimensional image that corresponds with the first ultrasonic image data. The depth-to-delay arithmetic underlying these acquisition time delays is sketched below.
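The relationship between a selected depth and its acquisition time delay follows from the speed of sound in tissue. The Python sketch below computes range-gate delays for a set of depths; the speed of sound and the depth list are assumed example values for illustration, not parameters taken from this disclosure.

```python
# Illustrative range-gating calculation for photoacoustic imaging.
# Acoustic waves generated at depth d reach the sensor array after roughly
# t = d / v, where v is the speed of sound in tissue (~1500 m/s is a common
# textbook figure and an assumption here).

SPEED_OF_SOUND_TISSUE = 1500.0   # meters per second (assumed)

def acquisition_delay_ns(depth_m):
    """Time, in nanoseconds, for an emission at depth_m to reach the sensor,
    measured from the moment the light pulse fires (the light's travel time
    is negligible on this scale)."""
    return depth_m / SPEED_OF_SOUND_TISSUE * 1e9

# First through Nth acquisition time delays for sampling successive depths.
depths_m = [0.0005 * n for n in range(1, 5)]   # 0.5 mm steps (assumed)
for n, d in enumerate(depths_m, start=1):
    print(f"depth {d * 1e3:.1f} mm -> delay {acquisition_delay_ns(d):.0f} ns")
```

An acquisition time window opened at the end of each delay then captures emissions primarily from the corresponding depth, which is how the two-dimensional slices for a three-dimensional reconstruction are gathered.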
According to some implementations, the first acquisition time window may be in the range of about 10 nanoseconds to about 200 nanoseconds. In some examples, the software may include instructions for selecting second through Nth acquisition time delays and for acquiring second through Nth ultrasonic image data during second through Nth acquisition time windows after the second through Nth acquisition time delays. Each of the second through Nth acquisition time delays may correspond to a second through an Nth depth inside the target object.

BRIEF DESCRIPTION OF THE DRAWINGS

[0040] Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale. Like reference numbers and designations in the various drawings indicate like elements.

[0041] Figure 1 shows an example of components of blood being differentially heated by incident light and subsequently emitting acoustic waves.

[0042] Figure 2 is a block diagram that shows example components of an apparatus according to some disclosed implementations.

[0043] Figure 3 is a flow diagram that provides examples of biometric system operations.

[0044] Figure 4 shows an example of a cross-sectional view of an apparatus capable of performing the method of Figure 3.

[0045] Figure 5 shows an example of a mobile device that includes a biometric system as disclosed herein.

[0046] Figure 6 is a flow diagram that provides further examples of biometric system operations.

[0047] Figure 7 shows examples of multiple acquisition time delays being selected to receive acoustic waves emitted from different depths.

[0048] Figure 8 is a flow diagram that provides additional examples of biometric system operations.

[0049] Figure 9 shows examples of multiple acquisition time delays being selected to receive ultrasonic waves emitted from different depths, in response to a plurality of light pulses.

[0050] Figures 10A-10C are examples of cross-sectional views of a target object positioned on a platen of a biometric system such as those disclosed herein.

[0051] Figures 10D-10F show a series of simplified two-dimensional images and a three-dimensional reconstruction that correspond with ultrasonic image data acquired by the processes shown in Figures 10A-10C.

[0052] Figure 11 shows an example of a mobile device that includes a biometric system capable of performing methods disclosed herein.

[0053] Figure 12 is a flow diagram that provides an example of a method of stitching ultrasonic image data obtained via a mobile device such as that shown in Figure 11.

[0054] Figure 13 is a flow diagram that shows blocks of a method of oxidized hemoglobin detection that may be performed with some disclosed biometric systems.

[0055] Figure 14 representationally depicts aspects of a 4 x 4 pixel array of sensor pixels for an ultrasonic sensor system.

[0056] Figure 15A shows an example of an exploded view of an ultrasonic sensor system.

[0057] Figure 15B shows an exploded view of an alternative example of an ultrasonic sensor system.

DETAILED DESCRIPTION

[0058] The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure.
However, a person having ordinary skill in the art will readily recognize that the teachings herein may be applied in a multitude of different ways. The described implementations may be implemented in any device, apparatus, or system that includes a biometric system as disclosed herein. In addition, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, smart cards, wearable devices such as bracelets, armbands, wristbands, rings, headbands, patches, etc., Bluetooth devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, global positioning system (GPS) receivers/navigators, cameras, digital media players (such as MP3 players), camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (e.g., e-readers), mobile health devices, computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as non-EMS applications), aesthetic structures (such as display of images on a piece of jewelry or clothing) and a variety of EMS devices. The teachings herein also may be used in applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, steering wheels or other automobile parts, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.

[0059] Various implementations disclosed herein may include a biometric system that is capable of optical excitation and ultrasonic imaging of resultant acoustic wave generation. Such imaging may be referred to herein as "photoacoustic imaging." Some such implementations may be capable of obtaining images from bones, muscle tissue, blood, blood vessels, and/or other sub-epidermal features. As used herein, the term "sub-epidermal features" may refer to any of the tissue layers that underlie the epidermis, including the dermis, the subcutis, etc., and any blood vessels, lymph vessels, sweat glands, hair follicles, hair papilla, fat lobules, etc., that may be present within such tissue layers.
Some implementations may be capable of biometric authentication that is based, at least in part, on image data obtained via photoacoustic imaging. In some examples, an authentication process may be based on image data obtained via photoacoustic imaging and also on image data obtained by transmitting ultrasonic waves and detecting corresponding reflected ultrasonic waves.

[0060] In some implementations, the incident light wavelength or wavelengths emitted by a light source system may be selected to trigger acoustic wave emissions primarily from a particular type of material, such as blood, blood cells, blood vessels, blood vasculature, lymphatic vasculature, other soft tissue, or bones. The acoustic wave emissions may, in some examples, include ultrasonic waves. In some such implementations, the control system may be capable of estimating a blood oxygen level, estimating a blood glucose level, or estimating both a blood oxygen level and a blood glucose level.

[0061] Alternatively, or additionally, the time interval between the irradiation time and the time during which resulting ultrasonic waves are sampled (which may be referred to herein as the acquisition time delay or the range-gate delay (RGD)) may be selected to receive acoustic wave emissions primarily from a particular depth and/or from a particular type of material. For example, a relatively larger range-gate delay may be selected to receive acoustic wave emissions primarily from bones, and a relatively smaller range-gate delay may be selected to receive acoustic wave emissions primarily from sub-epidermal features (such as blood vessels, blood, etc.), muscle tissue features or bone tissue features.

[0062] Accordingly, some biometric systems disclosed herein may be capable of acquiring images of sub-epidermal features via photoacoustic imaging. In some implementations, a control system may be capable of acquiring first ultrasonic image data from acoustic wave emissions that are received by an ultrasonic sensor array during a first acquisition time window that is initiated at an end time of a first acquisition time delay. According to some examples, the control system may be capable of controlling a display to depict a two-dimensional (2-D) image that corresponds with the first ultrasonic image data. In some instances, the control system may be capable of acquiring second through Nth ultrasonic image data during second through Nth acquisition time windows after second through Nth acquisition time delays. Each of the second through Nth acquisition time delays may correspond to a second through an Nth depth inside the target object. According to some examples, the control system may be capable of controlling a display to depict a three-dimensional (3-D) image that corresponds with at least a subset of the first through Nth ultrasonic image data.

[0063] Particular implementations of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. Imaging sub-epidermal features (such as blood vessels, blood, etc.), muscle tissue features, etc., using ultrasonic technology alone can be challenging due to the small acoustic impedance contrast between various types of soft tissue.
In some photoacoustic imaging implementations, a relatively higher signal-to-noise ratio may be obtained for the resulting acoustic wave emission detection because the excitation is via optical stimulation instead of (or in addition to) ultrasonic wave transmission. The higher signal-to-noise ratio can provide relatively more accurate and relatively more detailed imaging of blood vessels and other sub-epidermal features. In addition to the inherent value of obtaining more detailed images (e.g., for improved medical determinations and diagnoses), the detailed imaging of blood vessels and other sub-epidermal features can provide more reliable user authentication and liveness determinations. Moreover, some photoacoustic imaging implementations can detect changes in blood oxygen levels, which can provide enhanced liveness determinations. Some implementations provide a mobile device that includes a biometric system that is capable of some or all of the foregoing functionality. Some such mobile devices may be capable of displaying 2-D and/or 3-D images of sub-epidermal features, bone tissue, etc.

[0064] Figure 1 shows an example of components of blood being differentially heated by incident light and subsequently emitting acoustic waves. In this example, incident light 102 has been transmitted from a light source system (not shown) through a substrate 103 and into a blood vessel 104 of an overlying finger 106. The surface of the finger 106 includes ridges and valleys, so some of the incident light 102 has been transmitted through the air 108 in this example. Here, the incident light 102 is causing optical excitation of illuminated blood and blood components in the blood vessel 104 and resultant acoustic wave generation. In this example, the generated acoustic waves 110 may include ultrasonic waves.

[0065] In some implementations, such acoustic wave emissions may be detected by sensors of a sensor array, such as the ultrasonic sensor array 202 that is described below with reference to Figure 2. In some instances, the incident light wavelength, wavelengths and/or wavelength range(s) may be selected to trigger acoustic wave emissions primarily from a particular type of material, such as blood, blood components, blood vessels, other soft tissue, or bones.

[0066] Figure 2 is a block diagram that shows example components of an apparatus according to some disclosed implementations. In this example, the apparatus 200 includes a biometric system. Here, the biometric system includes an ultrasonic sensor array 202, a light source system 204 and a control system 206. Although not shown in Figure 2, the apparatus 200 may include a substrate. Some examples are described below. Some implementations of the apparatus 200 may include the optional ultrasonic transmitter 208.

[0067] Various examples of ultrasonic sensor arrays 202 are disclosed herein, some of which may include an ultrasonic transmitter and some of which may not. Although shown as separate elements in Figure 2, in some implementations the ultrasonic sensor array 202 and the ultrasonic transmitter 208 may be combined in an ultrasonic transceiver. For example, in some implementations, the ultrasonic sensor array 202 may include a piezoelectric receiver layer, such as a layer of PVDF polymer or a layer of PVDF-TrFE copolymer. In some implementations, a separate piezoelectric layer may serve as the ultrasonic transmitter.
In some implementations, a single piezoelectric layer may serve as both the transmitter and the receiver. In some implementations, other piezoelectric materials may be used in the piezoelectric layer, such as aluminum nitride (AlN) or lead zirconate titanate (PZT). The ultrasonic sensor array 202 may, in some examples, include an array of ultrasonic transducer elements, such as an array of piezoelectric micromachined ultrasonic transducers (PMUTs), an array of capacitive micromachined ultrasonic transducers (CMUTs), etc. In some such examples, a piezoelectric receiver layer, PMUT elements in a single-layer array of PMUTs, or CMUT elements in a single-layer array of CMUTs, may be used as ultrasonic transmitters as well as ultrasonic receivers. According to some alternative examples, the ultrasonic sensor array 202 may be an ultrasonic receiver array and the ultrasonic transmitter 208 may include one or more separate elements. In some such examples, the ultrasonic transmitter 208 may include an ultrasonic plane-wave generator, such as those described below.

[0068] The light source system 204 may, in some examples, include an array of light-emitting diodes. In some implementations, the light source system 204 may include one or more laser diodes. According to some implementations, the light source system may include at least one infrared, optical, red, green, blue, white or ultraviolet light-emitting diode. For example, the light source system 204 may include at least one infrared, optical, red, green, blue or ultraviolet laser diode.

[0069] In some implementations, the light source system 204 may be capable of emitting various wavelengths of light, which may be selectable to trigger acoustic wave emissions primarily from a particular type of material. For example, because the hemoglobin in blood absorbs near-infrared light very strongly, in some implementations the light source system 204 may be capable of emitting one or more wavelengths of light in the near-infrared range, in order to trigger acoustic wave emissions from hemoglobin. However, in some examples the control system 206 may control the wavelength(s) of light emitted by the light source system 204 to preferentially induce acoustic waves in blood vessels, other soft tissue, and/or bones. For example, an infrared (IR) light-emitting diode (LED) may be selected and a short pulse of IR light emitted to illuminate a portion of a target object and generate acoustic wave emissions that are then detected by the ultrasonic sensor array 202. In another example, an IR LED and a red LED or another color such as green, blue, white or ultraviolet (UV) may be selected, and a short pulse of light emitted from each light source in turn, with ultrasonic images obtained after light has been emitted from each light source. In other implementations, one or more light sources of different wavelengths may be fired in turn or simultaneously to generate acoustic emissions that may be detected by the ultrasonic sensor array. Image data from the ultrasonic sensor array that is obtained with light sources of different wavelengths and at different depths (e.g., varying RGDs) into the target object may be combined to determine the location and type of material in the target object. Image contrast may occur because materials in the body generally absorb light at different wavelengths differently.
As materials in the body absorb light at a specific wavelength, they may heat differentially and generate acoustic wave emissions, given sufficiently short pulses of light with sufficient intensities. Depth contrast may be obtained with light of different wavelengths and/or intensities at each selected wavelength. That is, successive images may be obtained at a fixed RGD (which may correspond with a fixed depth into the target object) with varying light intensities and wavelengths to detect materials and their locations within a target object. For example, hemoglobin, blood glucose or blood oxygen within a blood vessel inside a target object such as a finger may be detected photoacoustically.

[0070] According to some implementations, the light source system 204 may be capable of emitting a light pulse with a pulse width less than about 100 nanoseconds. In some implementations, the light pulse may have a pulse width between about 10 nanoseconds and about 500 nanoseconds or more. In some implementations, the light source system 204 may be capable of emitting a plurality of light pulses at a pulse frequency between about 1 MHz and about 100 MHz. In some examples, the pulse frequency of the light pulses may correspond to an acoustic resonant frequency of the ultrasonic sensor array and the substrate. For example, a set of four or more light pulses may be emitted from the light source system 204 at a frequency that corresponds with the resonant frequency of a resonant acoustic cavity in the sensor stack, allowing a build-up of the received ultrasonic waves and a higher resultant signal strength. In some implementations, filtered light or light sources with specific wavelengths for detecting selected materials may be included with the light source system 204. In some implementations, the light source system may contain light sources such as red, green and blue LEDs of a display that may be augmented with light sources of other wavelengths (such as IR and/or UV) and with light sources of higher optical power. For example, high-power laser diodes or electronic flash units (e.g., an LED or xenon flash unit) with or without filters may be used for short-term illumination of the target object.
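The pulse timing described in paragraph [0070] can be illustrated with a short sketch. The function below is illustrative only and is not part of this disclosure; the parameter values are hypothetical choices within the ranges stated above. It computes the start and end times of a pulse train whose repetition rate is given in MHz, as a control system might do when matching the pulse frequency to a resonant frequency of the sensor stack.

```python
# A minimal sketch (not part of this disclosure) of light-pulse timing.
# All parameter values are illustrative, chosen within the ranges above.

def pulse_train(num_pulses: int, pulse_width_ns: float, pulse_freq_mhz: float):
    """Return (start_ns, end_ns) pairs for a train of light pulses."""
    period_ns = 1000.0 / pulse_freq_mhz  # repetition period in nanoseconds
    if pulse_width_ns >= period_ns:
        raise ValueError("pulse width must fit within the repetition period")
    return [(i * period_ns, i * period_ns + pulse_width_ns)
            for i in range(num_pulses)]

# Example: four 20 ns pulses at a 10 MHz repetition rate (100 ns period),
# as might be used to build up signal in a resonant acoustic cavity.
print(pulse_train(4, 20.0, 10.0))
# [(0.0, 20.0), (100.0, 120.0), (200.0, 220.0), (300.0, 320.0)]
```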
[0071] The control system 206 may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. The control system 206 also may include (and/or be configured for communication with) one or more memory devices, such as one or more random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, the apparatus 200 may have a memory system that includes one or more memory devices, though the memory system is not shown in Figure 2. The control system 206 may be capable of receiving and processing data from the ultrasonic sensor array 202, e.g., as described below. If the apparatus 200 includes an ultrasonic transmitter 208, the control system 206 may be capable of controlling the ultrasonic transmitter 208, e.g., as disclosed elsewhere herein. In some implementations, functionality of the control system 206 may be partitioned between one or more controllers or processors, such as a dedicated sensor controller and an applications processor of a mobile device.

[0072] Although not shown in Figure 2, some implementations of the apparatus 200 may include an interface system. In some examples, the interface system may include a wireless interface system. In some implementations, the interface system may include a user interface system, one or more network interfaces, one or more interfaces between the control system 206 and a memory system and/or one or more interfaces between the control system 206 and one or more external device interfaces (e.g., ports or applications processors).

[0073] The apparatus 200 may be used in a variety of different contexts, many examples of which are disclosed herein. For example, in some implementations a mobile device may include the apparatus 200. In some implementations, a wearable device may include the apparatus 200. The wearable device may, for example, be a bracelet, an armband, a wristband, a ring, a headband or a patch.

[0074] Figure 3 is a flow diagram that provides examples of biometric system operations. The blocks of Figure 3 (and those of other flow diagrams provided herein) may, for example, be performed by the apparatus 200 of Figure 2 or by a similar apparatus. As with other methods disclosed herein, the method outlined in Figure 3 may include more or fewer blocks than indicated. Moreover, the blocks of methods disclosed herein are not necessarily performed in the order indicated.

[0075] Here, block 305 involves controlling a light source system to emit light. In some implementations, the control system 206 of the apparatus 200 may control the light source system 204 to emit light. According to some such implementations, the control system may be capable of selecting one or more wavelengths of the light emitted by the light source system. In some implementations, the control system may be capable of selecting a light intensity associated with each selected wavelength. For example, the control system may be capable of selecting the one or more wavelengths of light and light intensities associated with each selected wavelength to generate acoustic wave emissions from one or more portions of the target object. In some examples, the control system may be capable of selecting the one or more wavelengths of light to evaluate one or more characteristics of the target object, e.g., to evaluate blood oxygen levels. Some examples are described below. In some examples, block 305 may involve controlling a light source system to emit light that is transmitted through a substrate and/or other layers of an apparatus such as the apparatus 200.

[0076] According to this implementation, block 310 involves receiving signals from an ultrasonic sensor array corresponding to acoustic waves emitted from portions of a target object in response to being illuminated with light emitted by the light source system. In some instances the target object may be positioned on a surface of the ultrasonic sensor array or positioned on a surface of a platen that is acoustically coupled to the ultrasonic sensor array. The ultrasonic sensor array may, in some implementations, be the ultrasonic sensor array 202 that is shown in Figure 2 and described above. One or more coatings or acoustic matching layers may be included with the platen.

[0077] In some examples the target object may be a finger, as shown above in Figure 1 and as described below with reference to Figure 4. However, in other examples the target object may be another body part, such as a palm, a wrist, an arm, a leg, a torso, a head, etc.
In some examples the target object may be a finger-like object that is being used in an attempt to spoof the apparatus 200, or another such apparatus, into erroneously authenticating the finger-like object. For example, the finger-like object may include silicone rubber, polyvinyl acetate (white glue), gelatin, glycerin, etc., with a fingerprint pattern formed on an outside surface.

[0078] In some examples, the control system may be capable of selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array. The corresponding distance may correspond to a depth within the target object. According to some examples, the control system may be capable of receiving an acquisition time delay via a user interface, from a data structure stored in memory, etc.

[0079] In some implementations, the control system may be capable of acquiring first ultrasonic image data from acoustic wave emissions that are received by an ultrasonic sensor array during a first acquisition time window that is initiated at an end time of a first acquisition time delay. According to some examples, the control system may be capable of controlling a display to depict a two-dimensional (2-D) image that corresponds with the first ultrasonic image data. In some instances, the control system may be capable of acquiring second through Nth ultrasonic image data during second through Nth acquisition time windows after second through Nth acquisition time delays. Each of the second through Nth acquisition time delays may correspond to second through Nth depths inside the target object. According to some examples, the control system may be capable of controlling a display to depict a reconstructed three-dimensional (3-D) image that corresponds with at least a subset of the first through Nth ultrasonic image data.

[0080] In this instance, block 315 involves performing a user authentication process that is based, at least in part, on the signals from the ultrasonic sensor array. Accordingly, in some examples, the user authentication process may involve obtaining ultrasonic image data via illumination of the target object with light emitted from the light source system. In some such examples, the ultrasonic image data obtained via illumination of the target object may include image data corresponding to one or more sub-epidermal features, such as vascular image data.

[0081] According to some such implementations, the user authentication process also may involve obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter, such as the ultrasonic transmitter 208 shown in Figure 2. In some such examples, the ultrasonic image data obtained via insonification of the target object may include fingerprint image data. However, in some implementations the ultrasonic image data obtained via illumination of the target object and the ultrasonic image data obtained via insonification of the target object may both be acquired primarily from the same depth inside the target object. In some examples, both the ultrasonic image data obtained via illumination of the target object and the ultrasonic image data obtained via insonification of the target object may be acquired from the same plurality of sensor pixels within an ultrasonic sensor array. A sketch of this dual acquisition appears below.
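The following hedged sketch illustrates the dual acquisition just described. The stub class and its method names are hypothetical stand-ins, not an API from this disclosure. Note that the two delays may differ even for the same depth: photoacoustic emissions travel one way from the emitting feature to the sensor, while reflected ultrasonic waves make a round trip.

```python
# A hedged sketch of dual photoacoustic/ultrasonic acquisition. The stub
# class below is a hypothetical stand-in for real hardware interfaces.

class StubDevice:
    def emit_pulse(self):                   # light source photo-excitation
        pass
    def transmit_plane_wave(self):          # ultrasonic insonification
        pass
    def sample(self, delay_ns, window_ns):  # gated readout of sensor pixels
        return [[0.0] * 4 for _ in range(4)]  # placeholder 4 x 4 image

def acquire_dual_images(dev, rgd_pa_ns, rgd_us_ns, rgw_ns):
    dev.emit_pulse()                        # photoacoustic pass
    vascular_image = dev.sample(delay_ns=rgd_pa_ns, window_ns=rgw_ns)
    dev.transmit_plane_wave()               # ultrasonic pass, same pixels
    fingerprint_image = dev.sample(delay_ns=rgd_us_ns, window_ns=rgw_ns)
    return fingerprint_image, vascular_image

# Example with illustrative delays: both images come from the same sensor
# pixels, so they can be registered against each other without alignment.
fp, vasc = acquire_dual_images(StubDevice(), 600.0, 1200.0, 50.0)
```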
[0082] The user authentication process may involve comparing "attribute information" obtained from received image data, based on the signals from the ultrasonic sensor array, with stored attribute information obtained from image data that has previously been received from an authorized user during, for example, an enrollment process. In some examples, the attribute information obtained from received image data and the stored attribute information include attribute information regarding subdermal features. According to some such examples, the attribute information may include information regarding subdermal features, such as information regarding features of the dermis, features of the subcutis, blood vessel features, lymph vessel features, sweat gland features, hair follicle features, hair papilla features and/or fat lobule features.

[0083] Alternatively, or additionally, in some implementations the attribute information obtained from the received image data and the stored attribute information may include information regarding bone tissue features, muscle tissue features and/or epidermal tissue features. For example, according to some implementations, the user authentication process may involve controlling the ultrasonic transmitter to obtain fingerprint image data via the ultrasonic sensor array. In such examples, the authentication process may involve evaluating attribute information obtained from the fingerprint image data.

[0084] The attribute information obtained from the received image data and the stored attribute information that are compared during an authentication process may include biometric template data corresponding to the received image data and biometric template data corresponding to the stored image data. One well-known type of biometric template data is fingerprint template data, which may indicate the types and locations of fingerprint minutiae. A user authentication process based on attributes of fingerprint image data may involve comparing received and stored fingerprint template data. Such a process may or may not involve directly comparing received and stored fingerprint image data.

[0085] Similarly, biometric template data corresponding to subdermal features may include information regarding the attributes of blood vessels, such as information regarding the types and locations of blood vessel features, such as blood vessel size, blood vessel orientation, the locations of blood vessel branch points, etc. Alternatively, or additionally, biometric template data corresponding to subdermal features may include attribute information regarding the types (e.g., the sizes, shapes, orientations, etc.) and locations of features of the dermis, features of the subcutis, lymph vessel features, sweat gland features, hair follicle features, hair papilla features and/or fat lobule features. A simplified example of template comparison follows.
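By way of illustration only, the following is a greatly simplified sketch of comparing minutiae-style template data. It is not the matcher of any particular implementation: real matchers must also handle rotation, translation, skin distortion, and missing or spurious minutiae. Here each minutia is a hypothetical (x, y, angle, type) tuple, and the score is the fraction of enrolled minutiae that pair up within position and angle tolerances.

```python
import math

# A simplified, illustrative template matcher; tolerances are assumptions.

def minutiae_match_score(received, enrolled, dist_tol=10.0, angle_tol=15.0):
    """Each minutia is (x, y, angle_degrees, type). Returns a score in [0, 1]."""
    matched = 0
    used = set()
    for (x1, y1, a1, t1) in received:
        for j, (x2, y2, a2, t2) in enumerate(enrolled):
            if j in used or t1 != t2:
                continue
            d = math.hypot(x1 - x2, y1 - y2)
            da = abs((a1 - a2 + 180) % 360 - 180)  # wrapped angle difference
            if d <= dist_tol and da <= angle_tol:
                matched += 1
                used.add(j)
                break
    return matched / max(len(enrolled), 1)

enrolled = [(31.0, 42.0, 90.0, "ridge_ending"), (60.0, 18.0, 215.0, "bifurcation")]
received = [(33.0, 40.0, 95.0, "ridge_ending"), (61.0, 20.0, 210.0, "bifurcation")]
print(minutiae_match_score(received, enrolled))  # 1.0 within the tolerances
```

An authentication decision might then threshold this score; the same comparison pattern could be applied to subdermal templates, with blood vessel branch points playing the role of minutiae.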
[0086] Many spoofing techniques are based on forming fingerprint-like features on an object, which may be a finger-like object. However, making a finger-like object with detailed subdermal features, muscle tissue features and/or bone tissue features would be challenging and expensive. Making such features accurately correspond with those of an authorized user would be even more challenging. Because some disclosed implementations involve obtaining attribute information that is based on sub-epidermal features, muscle tissue features and/or bone tissue features, some such implementations may provide more reliable authentication and may be capable of providing determinations of "liveness." Some implementations described below, such as those capable of determining changes in blood oxygen and/or blood glucose levels, may provide enhanced liveness determinations.

[0087] Figure 4 shows an example of a cross-sectional view of an apparatus capable of performing the method of Figure 3. The apparatus 400 is an example of a device that may be included in a biometric system such as those disclosed herein. Here, the apparatus 400 is an implementation of the apparatus 200 that is described above with reference to Figure 2. As with other implementations shown and described herein, the types of elements, the arrangement of the elements and the dimensions of the elements illustrated in Figure 4 are merely shown by way of example.

[0088] Figure 4 shows an example of a target object being illuminated by incident light and subsequently emitting acoustic waves. In this example, the apparatus 400 includes a light source system 204, which may include an array of light-emitting diodes and/or an array of laser diodes. In some implementations, the light source system 204 may be capable of emitting various wavelengths of light, which may be selectable to trigger acoustic wave emissions primarily from a particular type of material. In some instances, the incident light wavelength, wavelengths and/or wavelength range(s) may be selected to trigger acoustic wave emissions primarily from a particular type of material, such as blood, blood vessels, other soft tissue, or bones. To achieve sufficient image contrast, light sources 404 of the light source system 204 may need to have a higher intensity and optical power output than light sources generally used to illuminate displays. In some implementations, light sources with light output of 1-100 millijoules or more per pulse, with pulse widths of 100 nanoseconds or less, may be suitable. In some implementations, light from an electronic flash unit such as that associated with a mobile device may be suitable. In some implementations, the pulse width of the emitted light may be between about 10 nanoseconds and about 500 nanoseconds or more.

[0089] In this example, incident light 102 has been transmitted from the light sources 404 of the light source system 204 through a sensor stack 405 and into an overlying finger 106. The various layers of the sensor stack 405 may include one or more substrates of glass or other material, such as plastic or sapphire, that is substantially transparent to the light emitted by the light source system 204. In this example, the sensor stack 405 includes a substrate 410 to which the light source system 204 is coupled, which may be a backlight of a display according to some implementations. In alternative implementations, the light source system 204 may be coupled to a front light. Accordingly, in some implementations the light source system 204 may be configured for illuminating a display and the target object.

[0090] In this implementation, the substrate 410 is coupled to a thin-film transistor (TFT) substrate 415 for the ultrasonic sensor array 202.
According to this example, a piezoelectric receiver layer 420 overlies the sensor pixels 402 of the ultrasonic sensor array 202, and a platen 425 overlies the piezoelectric receiver layer 420. Accordingly, in this example the apparatus 400 is capable of transmitting the incident light 102 through one or more substrates of the sensor stack 405, including the substrate 415 of the ultrasonic sensor array 202 and the platen 425, which may also be viewed as a substrate. In some implementations, sensor pixels 402 of the ultrasonic sensor array 202 may be transparent, partially transparent or substantially transparent, such that the apparatus 400 may be capable of transmitting the incident light 102 through elements of the ultrasonic sensor array 202. In some implementations, the ultrasonic sensor array 202 and associated circuitry may be formed on or in a glass, plastic or silicon substrate.

[0091] In this example, the portion of the apparatus 400 that is shown in Figure 4 includes an ultrasonic sensor array 202 that is capable of functioning as an ultrasonic receiver. According to some implementations, the apparatus 400 may include an ultrasonic transmitter 208. The ultrasonic transmitter 208 may or may not be part of the ultrasonic sensor array 202, depending on the particular implementation. In some examples, the ultrasonic sensor array 202 may include PMUT or CMUT elements that are capable of transmitting and receiving ultrasonic waves, and the piezoelectric receiver layer 420 may be replaced with an acoustic coupling layer. In some examples, the ultrasonic sensor array 202 may include an array of pixel input electrodes and sensor pixels formed in part from TFT circuitry, an overlying piezoelectric receiver layer 420 of piezoelectric material such as PVDF or PVDF-TrFE, and an upper electrode layer positioned on the piezoelectric receiver layer, sometimes referred to as a receiver bias electrode. In the example shown in Figure 4, at least a portion of the apparatus 400 includes an ultrasonic transmitter 208 that can function as a plane-wave ultrasonic transmitter. The ultrasonic transmitter 208 may include a piezoelectric transmitter layer with transmitter excitation electrodes disposed on each side of the piezoelectric transmitter layer.

[0092] Here, the incident light 102 causes optical excitation within the finger 106 and resultant acoustic wave generation. In this example, the generated acoustic waves 110 include ultrasonic waves. Acoustic emissions generated by the absorption of incident light may be detected by the ultrasonic sensor array 202. A high signal-to-noise ratio may be obtained because the resulting ultrasonic waves are caused by optical stimulation instead of by reflection of transmitted ultrasonic waves.

[0093] Figure 5 shows an example of a mobile device that includes a biometric system as disclosed herein. In this example, the mobile device 500 is a smart phone. However, in alternative examples the mobile device 500 may be another type of mobile device, such as a mobile health device, a wearable device, a tablet, etc.

[0094] In this example, the mobile device 500 includes an instance of the apparatus 200 that is described above with reference to Figure 2. In this example, the apparatus 200 is disposed, at least in part, within the mobile device enclosure 505.
According to this example, at least a portion of the apparatus 200 is located in the portion of the mobile device 500 that is shown being touched by the finger 106, which corresponds to the location of button 510. Accordingly, the button 510 may be an ultrasonic button. In some implementations, the button 510 may serve as a home button. In some implementations, the button 510 may serve as an ultrasonic authenticating button, with the ability to turn on or otherwise wake up the mobile device 500 when touched or pressed and/or to authenticate or otherwise validate a user when applications running on the mobile device (such as a wake-up function) warrant such a function. Light sources for photoacoustic imaging may be included within the button 510.

[0095] In this implementation, the mobile device 500 may be capable of performing a user authentication process. For example, a control system of the mobile device 500 may be capable of comparing attribute information obtained from image data received via an ultrasonic sensor array of the apparatus 200 with stored attribute information obtained from image data that has previously been received from an authorized user. In some examples, the attribute information obtained from the received image data and the stored attribute information may include attribute information corresponding to at least one of sub-epidermal features, muscle tissue features or bone tissue features.

[0096] According to some implementations, the attribute information obtained from the received image data and the stored attribute information may include information regarding fingerprint minutiae. In some such implementations, the user authentication process may involve evaluating information regarding the fingerprint minutiae as well as at least one other type of attribute information, such as attribute information corresponding to subdermal features. According to some such examples, the user authentication process may involve evaluating information regarding the fingerprint minutiae as well as attribute information corresponding to vascular features. For example, attribute information obtained from a received image of blood vessels in the finger may be compared with a stored image of blood vessels in the authorized user's finger 106.

[0097] The apparatus 200 that is included in the mobile device 500 may or may not include an ultrasonic transmitter, depending on the particular implementation. However, in some examples, the user authentication process may involve obtaining ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter, as well as obtaining ultrasonic image data via illumination of the target object with light emitted from the light source system. According to some such examples, the ultrasonic image data obtained via insonification of the target object may include fingerprint image data and the ultrasonic image data obtained via illumination of the target object may include vascular image data.

[0098] Figure 6 is a flow diagram that provides further examples of biometric system operations. The blocks of Figure 6 (and those of other flow diagrams provided herein) may, for example, be performed by the apparatus 200 of Figure 2 or by a similar apparatus. As with other methods disclosed herein, the method outlined in Figure 6 may include more or fewer blocks than indicated.
Moreover, the blocks of method 600, as well as other methods disclosed herein, are not necessarily performed in the order indicated.

[0099] Here, block 605 involves controlling a light source system to emit light. In this example, the light may induce acoustic wave emissions inside a target object in block 605. In some implementations, the control system 206 of the apparatus 200 may control the light source system 204 to emit light in block 605. According to some such implementations, the control system 206 may be capable of controlling the light source system 204 to emit at least one light pulse having a duration that is in the range of about 10 nanoseconds to about 500 nanoseconds or more. For example, the control system 206 may be capable of controlling the light source system 204 to emit at least one light pulse having a duration of approximately 10 nanoseconds, 20 nanoseconds, 30 nanoseconds, 40 nanoseconds, 50 nanoseconds, 60 nanoseconds, 70 nanoseconds, 80 nanoseconds, 90 nanoseconds, 100 nanoseconds, 120 nanoseconds, 140 nanoseconds, 150 nanoseconds, 160 nanoseconds, 180 nanoseconds, 200 nanoseconds, 300 nanoseconds, 400 nanoseconds, 500 nanoseconds, etc. In some such implementations, the control system 206 may be capable of controlling the light source system 204 to emit a plurality of light pulses at a frequency between about 1 MHz and about 100 MHz. In other words, regardless of the wavelength(s) of light being emitted by the light source system 204, the intervals between light pulses may correspond to a frequency between about 1 MHz and about 100 MHz or more. For example, the control system 206 may be capable of controlling the light source system 204 to emit a plurality of light pulses at a frequency of about 1 MHz, about 5 MHz, about 10 MHz, about 15 MHz, about 20 MHz, about 25 MHz, about 30 MHz, about 40 MHz, about 50 MHz, about 60 MHz, about 70 MHz, about 80 MHz, about 90 MHz, about 100 MHz, etc. In some examples, light emitted by the light source system 204 may be transmitted through an ultrasonic sensor array or through one or more substrates of a sensor stack that includes an ultrasonic sensor array.

[0100] According to this example, block 610 involves selecting a first acquisition time delay to receive the acoustic wave emissions primarily from a first depth inside the target object. In some such examples, the control system may be capable of selecting an acquisition time delay to receive acoustic wave emissions at a corresponding distance from the ultrasonic sensor array. The corresponding distance may correspond to a depth within the target object. According to some such examples, the acquisition time delay may be measured from a time that the light source system emits light. In some examples, the acquisition time delay may be in the range of about 10 nanoseconds to over about 2000 nanoseconds.

[0101] According to some examples, a control system (such as the control system 206) may be capable of selecting the first acquisition time delay. In some examples, the control system may be capable of selecting the acquisition time delay based, at least in part, on user input. For example, the control system may be capable of receiving an indication of target depth or a distance from a platen surface of the biometric system via a user interface. The control system may be capable of determining a corresponding acquisition time delay from a data structure stored in memory, by performing a calculation, etc. Accordingly, in some instances the control system's selection of an acquisition time delay may be according to user input and/or according to one or more acquisition time delays stored in memory.
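One such depth-to-time calculation is sketched below. It assumes a nominal speed of sound of roughly 1.5 mm per microsecond in soft tissue and a one-way trip: in photoacoustic imaging, the acoustic wave travels only from the emitting feature to the sensor, since the optical excitation arrives essentially instantaneously. Any platen thickness and acoustic matching layers, ignored here for simplicity, would add a fixed time offset.

```python
# A minimal sketch of a depth-to-delay calculation; the speed of sound is a
# nominal assumed value, and platen travel time is ignored.

SPEED_OF_SOUND_MM_PER_US = 1.5  # approximate value for soft tissue

def acquisition_time_delay_ns(depth_mm: float) -> float:
    """Return the acquisition time delay (RGD) for a target depth."""
    travel_time_us = depth_mm / SPEED_OF_SOUND_MM_PER_US  # one-way trip
    return travel_time_us * 1000.0  # microseconds -> nanoseconds

# Example: a feature about 1.5 mm deep implies an RGD of roughly 1000 ns,
# within the ~10-2000 ns range discussed in this disclosure.
print(acquisition_time_delay_ns(1.5))  # 1000.0
```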
[0102] In this implementation, block 615 involves acquiring first ultrasonic image data from the acoustic wave emissions received by an ultrasonic sensor array during a first acquisition time window that is initiated at an end time of the first acquisition time delay. Some implementations may involve controlling a display to depict a two-dimensional image that corresponds with the first ultrasonic image data. According to some implementations, the first ultrasonic image data may be acquired during the first acquisition time window from a peak detector circuit disposed in each of a plurality of sensor pixels within the ultrasonic sensor array. In some implementations, the peak detector circuitry may capture acoustic wave emissions or reflected ultrasonic wave signals during the acquisition time window. Some examples are described below with reference to Figure 14.

[0103] In some examples, the first ultrasonic image data may include image data corresponding to one or more sub-epidermal features, such as vascular image data. According to some implementations, method 600 also may involve obtaining second ultrasonic image data via insonification of the target object with ultrasonic waves from an ultrasonic transmitter. In some such examples, the second ultrasonic image data may include fingerprint image data. However, in some implementations the first ultrasonic image data and the second ultrasonic image data may both be acquired primarily from the same depth inside the target object. In some examples, both the first ultrasonic image data and the second ultrasonic image data may be acquired from the same plurality of sensor pixels within an ultrasonic sensor array.

[0104] Figure 7 shows examples of multiple acquisition time delays being selected to receive acoustic waves emitted from different depths. In these examples, each of the acquisition time delays (which are labeled range-gate delays or RGDs in Figure 7) is measured from the beginning time t1 of the photo-excitation signal 705 shown in graph 700. The graph 710 depicts emitted acoustic waves (received wave (1) is one example) that may be received by an ultrasonic sensor array at an acquisition time delay RGD1 and sampled during an acquisition time window (also known as a range-gate window or a range-gate width) of RGW1. Such acoustic waves will generally be emitted from a relatively shallower portion of a target object proximate to, or positioned upon, a platen of the biometric system.

[0105] Graph 715 depicts emitted acoustic waves (received wave (2) is one example) that are received by the ultrasonic sensor array at an acquisition time delay RGD2 (with RGD2 > RGD1) and sampled during an acquisition time window of RGW2. Such acoustic waves will generally be emitted from a relatively deeper portion of the target object. Graph 720 depicts emitted acoustic waves (received wave (n) is one example) that are received at an acquisition time delay RGDn (with RGDn > RGD2 > RGD1) and sampled during an acquisition time window of RGWn. Such acoustic waves will generally be emitted from a still deeper portion of the target object. Range-gate delays are typically integer multiples of a clock period.
A clock frequency of 128 MHz, for example, has a clock period of 7.8125 nanoseconds, and RGDs may range from under 10 nanoseconds to over 2000 nanoseconds. Similarly, the range-gate widths may also be integer multiples of the clock period, but are often much shorter than the RGD (e.g., less than about 50 nanoseconds) to capture returning signals while retaining good axial resolution. In some implementations, the acquisition time window (e.g., RGW) may be between less than about 10 nanoseconds and about 200 nanoseconds or more. Note that while various image bias levels (e.g., Tx block, Rx sample and Rx hold, which may be applied to an Rx bias electrode) may be in the single or low double-digit volt range, the return signals may have voltages in the tens or hundreds of millivolts.
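The quantization just described can be sketched as follows, using the 128 MHz example from the text; the assumption that delays and windows are programmed as whole clock-cycle counts follows from range-gate delays being integer multiples of the clock period.

```python
# A sketch of quantizing delays to the sensor clock, using the 128 MHz
# example above (clock period 7.8125 ns).

CLOCK_HZ = 128e6
CLOCK_PERIOD_NS = 1e9 / CLOCK_HZ  # 7.8125 ns

def to_clock_cycles(duration_ns: float) -> int:
    """Round a duration in nanoseconds to a whole number of clock cycles."""
    return round(duration_ns / CLOCK_PERIOD_NS)

def quantized_ns(duration_ns: float) -> float:
    """Return the duration actually realized after quantization."""
    return to_clock_cycles(duration_ns) * CLOCK_PERIOD_NS

# Example: a requested RGD of 1000 ns is exactly 128 cycles; a requested
# 25 ns range-gate width rounds to 3 cycles, i.e. 23.4375 ns.
print(to_clock_cycles(1000.0), quantized_ns(1000.0))  # 128 1000.0
print(to_clock_cycles(25.0), quantized_ns(25.0))      # 3 23.4375
```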
[0106] Figure 8 is a flow diagram that provides additional examples of biometric system operations. The blocks of Figure 8 (and those of other flow diagrams provided herein) may, for example, be performed by the apparatus 200 of Figure 2 or by a similar apparatus. As with other methods disclosed herein, the method outlined in Figure 8 may include more or fewer blocks than indicated. Moreover, the blocks of method 800, as well as other methods disclosed herein, are not necessarily performed in the order indicated.

[0107] Here, block 805 involves controlling a light source system to emit light. In this example, the light may induce acoustic wave emissions inside a target object in block 805. In some implementations, the control system 206 of the apparatus 200 may control the light source system 204 to emit light in block 805. According to some such implementations, the control system 206 may be capable of controlling the light source system 204 to emit at least one light pulse having a duration that is in the range of about 10 nanoseconds to about 500 nanoseconds. In some such implementations, the control system 206 may be capable of controlling the light source system 204 to emit a plurality of light pulses.

[0108] Figure 9 shows examples of multiple acquisition time delays being selected to receive ultrasonic waves emitted from different depths, in response to a plurality of light pulses. In these examples, each of the acquisition time delays (which are labeled RGDs in Figure 9) is measured from the beginning time t1 of the photo-excitation signal 905a as shown in graph 900. Accordingly, the examples of Figure 9 are similar to those of Figure 7. However, in Figure 9, the photo-excitation signal 905a is only the first of multiple photo-excitation signals. In this example, the multiple photo-excitation signals include the photo-excitation signals 905b and 905c, for a total of three photo-excitation signals. In other implementations, a control system may control a light source system to emit more or fewer photo-excitation signals. In some implementations, the control system may be capable of controlling the light source system to emit a plurality of light pulses at a frequency between about 1 MHz and about 100 MHz.

[0109] The graph 910 illustrates ultrasonic waves (received wave packet (1) is one example) that are received by an ultrasonic sensor array at an acquisition time delay RGD1 and sampled during an acquisition time window of RGW1. Such ultrasonic waves will generally be emitted from a relatively shallower portion of a target object proximate to, or positioned upon, a platen of the biometric system. By comparing received wave packet (1) with received wave (1) of Figure 7, it may be seen that the received wave packet (1) has a relatively longer time duration and a higher amplitude buildup than that of received wave (1) of Figure 7. This longer time duration corresponds with the multiple photo-excitation signals in the examples shown in Figure 9, as compared to the single photo-excitation signal in the examples shown in Figure 7.

[0110] Graph 915 illustrates ultrasonic waves (received wave packet (2) is one example) that are received by the ultrasonic sensor array at an acquisition time delay RGD2 (with RGD2 > RGD1) and sampled during an acquisition time window of RGW2. Such ultrasonic waves will generally be emitted from a relatively deeper portion of the target object. Graph 920 illustrates ultrasonic waves (received wave packet (n) is one example) that are received at an acquisition time delay RGDn (with RGDn > RGD2 > RGD1) and sampled during an acquisition time window of RGWn. Such ultrasonic waves will generally be emitted from still deeper portions of the target object.

[0111] Returning to Figure 8, in this example block 810 involves selecting first through Nth acquisition time delays to receive the acoustic wave emissions primarily from first through Nth depths inside the target object. In some such examples, the control system may be capable of selecting the first through Nth acquisition time delays to receive acoustic wave emissions at corresponding first through Nth distances from the ultrasonic sensor array. The corresponding distances may correspond to first through Nth depths within the target object. According to some such examples (e.g., as shown in Figures 7 and 9), the acquisition time delays may be measured from a time that the light source system emits light. In some examples, the first through Nth acquisition time delays may be in the range of about 10 nanoseconds to over about 2000 nanoseconds.

[0112] According to some examples, a control system (such as the control system 206) may be capable of selecting the first through Nth acquisition time delays. In some examples, the control system may be capable of receiving one or more of the first through Nth acquisition time delays (or one or more indications of depths or distances that correspond to acquisition time delays) from a user interface, from a data structure stored in memory, or by calculation of one or more depth-to-time conversions. Accordingly, in some instances the control system's selection of the first through Nth acquisition time delays may be according to user input, according to one or more acquisition time delays stored in memory and/or according to a calculation.

[0113] In this implementation, block 815 involves acquiring first through Nth ultrasonic image data from the acoustic wave emissions received by an ultrasonic sensor array during first through Nth acquisition time windows that are initiated at end times of the first through Nth acquisition time delays. According to some implementations, the first through Nth ultrasonic image data may be acquired during first through Nth acquisition time windows from a peak detector circuit disposed in each of a plurality of sensor pixels within the ultrasonic sensor array. A sketch of this depth-stepped acquisition follows.
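The following is a hedged sketch of blocks 810 and 815: one image is acquired per depth by stepping the range-gate delay. The emit_light and sample_pixels callables are hypothetical stand-ins for the light source system and the per-pixel peak-detector readout, not a disclosed API.

```python
# A hedged sketch of depth-stepped acquisition; hardware callables are
# hypothetical stand-ins supplied by the caller.

def acquire_depth_stack(emit_light, sample_pixels, rgds_ns, rgw_ns):
    """Return a list of 2-D images, one per acquisition time delay."""
    images = []
    for rgd in rgds_ns:                # first through Nth delays
        emit_light()                   # photo-excitation pulse(s)
        images.append(sample_pixels(delay_ns=rgd, window_ns=rgw_ns))
    return images

# Example: ten delays of 100 ns ... 1000 ns, spanning roughly 0.15 mm to
# 1.5 mm at an assumed ~1.5 mm/us speed of sound, with a 50 ns window.
rgds = [100.0 * (i + 1) for i in range(10)]
stack = acquire_depth_stack(
    emit_light=lambda: None,  # stub light source
    sample_pixels=lambda delay_ns, window_ns: [[0.0] * 4 for _ in range(4)],
    rgds_ns=rgds,
    rgw_ns=50.0)
```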
[0114] In this example, block 820 involves processing the first through Nth ultrasonic image data. According to some implementations, block 820 may involve controlling a display to depict a two-dimensional image that corresponds with one of the first through Nth ultrasonic image data. In some implementations, block 820 may involve controlling a display to depict a reconstructed three-dimensional (3-D) image that corresponds with at least a subset of the first through Nth ultrasonic image data. Various examples are described below with reference to Figures 10A-10F.

[0115] Figures 10A-10C are examples of cross-sectional views of a target object positioned on a platen of a biometric system such as those disclosed herein. In this example, the target object is a finger 106, which is positioned on an outer surface of a platen 1005. Figures 10A-10C show examples of tissues and structures of the finger 106, including the epidermis 1010, bone tissue 1015, blood vasculature 1020 and various sub-epidermal tissues. In this example, incident light 102 has been transmitted from a light source system (not shown) through the platen 1005 and into the finger 106. Here, the incident light 102 has caused optical excitation of the epidermis 1010 and blood vasculature 1020 and resultant generation of acoustic waves 110, which can be detected by the ultrasonic sensor array 202.

[0116] Figures 10A-10C indicate ultrasonic image data being acquired at three different range-gate delays (RGD1, RGD2 and RGDn), which are also referred to herein as acquisition time delays, after the beginning of a time interval of photo-excitation. The dashed horizontal lines 1025a, 1025b and 1025n in Figures 10A-10C indicate the depth of each corresponding image. In some examples the photo-excitation may be a single pulse (e.g., as shown in Figure 7), whereas in other examples the photo-excitation may include multiple pulses (e.g., as shown in Figure 9). Figure 10D is a cross-sectional view of the target object illustrated in Figures 10A-10C showing the image planes 1025a, 1025b, ... 1025n at varying depths through which image data has been acquired.

[0117] Figure 10E shows a series of simplified two-dimensional images that correspond with ultrasonic image data acquired by the processes shown in Figures 10A-10C, with reference to the image planes 1025a, 1025b and 1025n as shown in Figure 10D. The two-dimensional images shown in Figure 10E provide examples of two-dimensional images corresponding with ultrasonic image data that a control system could, in some implementations, cause a display device to display.

[0118] Image1 of Figure 10E corresponds with the ultrasonic image data acquired using RGD1, which corresponds with the depth 1025a shown in Figures 10A and 10D. Image1 includes a portion of the epidermis 1010 and blood vasculature 1020 and also indicates structures of the sub-epidermal tissues.

[0119] Image2 corresponds with ultrasonic image data acquired using RGD2, which corresponds with the depth 1025b shown in Figures 10B and 10D. Image2 also includes a portion of the epidermis 1010 and blood vasculature 1020, and indicates some additional structures of the sub-epidermal tissues.

[0120] Imagen corresponds with ultrasonic image data acquired using RGDn, which corresponds with the depth 1025n shown in Figures 10C and 10D.
Imagen includes a portion of the epidermis 1010, blood vasculature 1020, some additional structures of the sub-epidermal tissues and structures corresponding to bone tissue 1015. Imagen also includes structures 1030 and 1032, which may correspond to bone tissue 1015 and/or to connective tissue near the bone tissue 1015, such as cartilage. However, it is not clear from Image1, Image2 or Imagen what the structures of the blood vasculature 1020 and sub-epidermal tissues are or how they relate to one another. [0121] These relationships may be more clearly seen in the three-dimensional image shown in Figure 10F. Figure 10F shows a composite of Image1, Image2 and Imagen, as well as additional images corresponding to depths that are between depth 1025b and depth 1025n. A three-dimensional image may be made from a set of two-dimensional images according to various methods known by those of skill in the art, such as a MATLAB reconstruction routine or other routine that enables reconstruction or estimations of three-dimensional structures from sets of two-dimensional layer data. These routines may use spline-fitting or other curve-fitting routines and statistical techniques with interpolation to provide approximate contours and shapes represented by the two-dimensional ultrasonic image data. As compared to the two-dimensional images shown in Figure 10E, the three-dimensional image shown in Figure 10F more clearly represents structures corresponding to bone tissue 1015 as well as sub-epidermal structures including blood vasculature 1020, revealing vein, artery and capillary structures and other vascular structures along with bone shape, size and features. [0122] Figure 11 shows an example of a mobile device that includes a biometric system capable of performing methods disclosed herein. A mobile device that includes such a biometric system may be capable of various types of mobile health monitoring, such as the imaging of blood vessel patterns, the analysis of blood and tissue components, etc. [0123] In this example, the mobile device 1100 includes an instance of the apparatus 200 that is capable of functioning as an in-display photoacoustic imager (PAI). The apparatus 200 may, for example, be capable of emitting light that induces acoustic wave emissions inside a target object and acquiring ultrasonic image data from acoustic wave emissions received by an ultrasonic sensor array. In some examples, the apparatus 200 may be capable of acquiring ultrasonic image data during one or more acquisition time windows that are initiated at the end time of one or more acquisition time delays. [0124] According to some implementations, the mobile device 1100 may be capable of displaying two-dimensional and/or three-dimensional images on the display 1105 that correspond with ultrasonic image data obtained via the apparatus 200. In other implementations, the mobile device may transmit ultrasonic image data (and/or attributes obtained from ultrasonic image data) to another device for processing and/or display.
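As an illustrative aside, the assembly of depth-sliced two-dimensional image data into a three-dimensional volume described in paragraph [0121] can be sketched as a stack-and-interpolate operation. The following Python sketch uses linear interpolation along the depth axis as a minimal stand-in for the MATLAB-style reconstruction routines mentioned above; array shapes and depth values are assumed for illustration.

```python
# Minimal sketch of assembling depth-sliced 2-D ultrasonic image data into a
# 3-D volume with linear interpolation between acquired depths. This is an
# illustrative stand-in for the reconstruction routines the text mentions;
# the shapes and depths are assumptions.
import numpy as np
from scipy.interpolate import interp1d

def build_volume(slices, depths_mm, out_depths_mm):
    """Stack N two-dimensional images (H x W) acquired at known depths and
    resample to a regular grid of output depths, yielding a D x H x W volume."""
    stack = np.stack(slices, axis=0)              # (N, H, W)
    f = interp1d(depths_mm, stack, axis=0, kind="linear",
                 bounds_error=False, fill_value="extrapolate")
    return f(out_depths_mm)                       # (D, H, W)

# Example: three 64 x 64 slices at 0.5, 1.0 and 2.5 mm, resampled in 0.1 mm steps.
rng = np.random.default_rng(0)
slices = [rng.random((64, 64)) for _ in range(3)]
volume = build_volume(slices, [0.5, 1.0, 2.5], np.arange(0.5, 2.5, 0.1))
print(volume.shape)  # (20, 64, 64)
```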
[0125] In some examples, a control system of the mobile device 1100 (which may include a control system of the apparatus 200) may be capable of selecting one or more wavelengths of the light emitted by the apparatus 200. In some examples, the control system may be capable of selecting one or more wavelengths of light to trigger acoustic wave emissions primarily from a particular type of material in the target object. According to some implementations, the control system may be capable of estimating a blood oxygen level and/or of estimating a blood glucose level. In some implementations, the control system may be capable of selecting one or more wavelengths of light according to user input. For example, the mobile device 1100 may allow a user or a specialized software application to enter values corresponding to one or more wavelengths of the light emitted by the apparatus 200. Alternatively, or additionally, the mobile device 1100 may allow a user to select a desired function (such as estimating a blood oxygen level) and may determine one or more corresponding wavelengths of light to be emitted by the apparatus 200. For example, in some implementations, a wavelength in the mid-infrared region of the electromagnetic spectrum may be selected and a set of ultrasonic image data may be acquired in the vicinity of blood inside a blood vessel within a target object such as a finger or wrist. A second wavelength in another portion of the infrared region (e.g., the near-IR region) or in a visible region such as a red wavelength may be selected and a second set of ultrasonic image data may be acquired in the same vicinity as the first ultrasonic image data. A comparison of the first and second sets of ultrasonic image data, in conjunction with image data from other wavelengths or combinations of wavelengths, may allow an estimation of the blood glucose levels and/or blood oxygen levels within the target object. [0126] In some implementations, a light source system of the mobile device 1100 may include at least one backlight or front light configured for illuminating the display 1105 and a target object. For example, the light source system may include one or more laser diodes, semiconductor lasers or light-emitting diodes. In some examples, the light source system may include at least one infrared, optical, red, green, blue, white or ultraviolet light-emitting diode or at least one infrared, optical, red, green, blue or ultraviolet laser diode. According to some implementations, the control system may be capable of controlling the light source system to emit at least one light pulse having a duration that is in the range of about 10 nanoseconds to about 500 nanoseconds. In some instances, the control system may be capable of controlling the light source system to emit a plurality of light pulses at a frequency between about 1 MHz and about 100 MHz.
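A function-to-wavelength mapping such as the one described in paragraph [0125] might be sketched as a simple lookup, as below. All wavelength values here are invented placeholders chosen only to echo the near-infrared/visible and mid-infrared pairings discussed above; they are not values taken from the disclosure.

```python
# Illustrative sketch of function-driven wavelength selection: the user picks
# a desired measurement and the device maps it to excitation wavelengths.
# Every numeric value below is an assumption for illustration.

FUNCTION_TO_WAVELENGTHS_NM = {
    # near-IR + red pair, as in the oximetry discussion (assumed values)
    "blood_oxygen": (940, 660),
    # mid-IR + near-IR pair, as in the glucose discussion (assumed values)
    "blood_glucose": (9700, 850),
}

def select_wavelengths(desired_function: str):
    """Return the excitation wavelengths (nm) for a user-selected function."""
    try:
        return FUNCTION_TO_WAVELENGTHS_NM[desired_function]
    except KeyError:
        raise ValueError(f"unsupported function: {desired_function!r}")

print(select_wavelengths("blood_oxygen"))   # (940, 660)
```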
[0127] In this example, the mobile device 1100 may include an ultrasonic authenticating button 1110 that includes another instance of the apparatus 200 that is capable of performing a user authentication process. In some such examples, the ultrasonic authenticating button 1110 may include an ultrasonic transmitter. According to some examples, the user authentication process may involve obtaining ultrasonic image data via insonification of a target object with ultrasonic waves from an ultrasonic transmitter and obtaining ultrasonic image data via illumination of the target object with light emitted from the light source system. In some such implementations, the ultrasonic image data obtained via insonification of the target object may include fingerprint image data and the ultrasonic image data obtained via illumination of the target object may include image data corresponding to one or more sub-epidermal features, such as vascular image data. [0128] In this implementation, both the display 1105 and the apparatus 200 are on the side of the mobile device that is facing a target object (a wrist in this example), which may be imaged via the apparatus 200. However, in alternative implementations, the apparatus 200 may be on the opposite side of the mobile device 1100. For example, the display 1105 may be on the front of the mobile device and the apparatus 200 may be on the back of the mobile device. According to some such implementations, the mobile device may be capable of displaying two-dimensional and/or three-dimensional images, analogous to those shown in Figures 10E and 10F, as the corresponding ultrasonic image data are being acquired. [0129] In some implementations, a portion of a target object, such as a wrist or arm, may be scanned as the mobile device 1100 is moved. According to some such implementations, a control system of the mobile device 1100 may be capable of stitching together the scanned images to form a more complete and larger two-dimensional or three-dimensional image. In some examples, the control system may be capable of acquiring first and second ultrasonic image data at primarily a first depth inside a target object. The second ultrasonic image data may be acquired after the target object or the mobile device 1100 is repositioned. In some implementations, the second ultrasonic image data may be acquired after a period of time corresponding to a frame rate, such as a frame rate between about one frame per second and about thirty frames per second or more. According to some such examples, the control system may be capable of stitching together or otherwise assembling the first and second ultrasonic image data to form a composite ultrasonic image. [0130] Figure 12 is a flow diagram that provides an example of a method of stitching ultrasonic image data obtained via a mobile device such as that shown in Figure 11. As with other methods disclosed herein, the method outlined in Figure 12 may include more or fewer blocks than indicated. Moreover, the blocks of method 1200 are not necessarily performed in the order indicated. [0131] Here, block 1205 involves receiving an indication to obtain stitched ultrasonic images via a mobile device. In this example, block 1205 involves receiving an indication to obtain stitched two-dimensional ultrasonic images. In alternative examples, block 1205 may involve receiving an indication to obtain stitched three-dimensional ultrasonic images. For example, a software application running on a mobile device may recognize that a larger view of an area of interest within a target object is desired after receiving an answer to a prompt provided to a user, and provide an indication to stitch or otherwise assemble a collection of two-dimensional or three-dimensional images obtained as the mobile device is moved over and around the area of interest. [0132] In this example, block 1210 involves receiving an indication of a first acquisition time delay.
Block 1205 and/or block 1210 may, for example, involve receiving input from a user interface system, e.g., in response to user interaction with a graphical user interface via a touch screen, in response to user interaction with a button, etc. In some implementations, the acquisition time delay may correspond with a distance from an ultrasonic sensor array of the mobile device and/or to a depth within a target object. Accordingly, the user input may correspond to time, distance, depth or another appropriate metric. In alternative examples wherein block 1205 involves receiving an indication to obtain stitched three-dimensional ultrasonic images, block 1210 may involve receiving an indication of first through Nth acquisition time delays. According to some examples, a control system of the mobile device may receive one or more acquisition time delays from a user interface, from a data structure stored in memory, etc., in block 1210. [0133] In this example, block 1215 involves controlling a light source system of the mobile device to emit light at a current position of the mobile device. In this example, the light induces acoustic wave emissions inside a target object. According to this implementation, block 1220 involves acquiring, at the current position, ultrasonic image data from the acoustic wave emissions. In this implementation, the acoustic wave emissions are received by an ultrasonic sensor array of the mobile device at the current position of the mobile device during a first acquisition time window that is initiated at an end time of the first acquisition time delay. In alternative examples wherein block 1205 involves receiving an indication to obtain stitched three-dimensional ultrasonic images, block 1220 may involve acquiring, at the current position, ultrasonic image data during first through Nth acquisition time windows after corresponding first through Nth acquisition time delays. [0134] In this implementation, block 1225 involves processing the acquired ultrasonic image data. In some examples, block 1225 may involve displaying the acquired ultrasonic image data. According to some implementations, block 1225 may involve identifying distinctive features of the acquired ultrasonic image data. Such distinctive features may be used for aligning the ultrasonic image data acquired in block 1220 with previously-acquired or subsequently-acquired ultrasonic image data from an overlapping area of the target object. Such distinctive features may be used during further processes of image stitching, e.g., as described below. [0135] In this example, block 1230 involves receiving an indication that the mobile device has changed position. For example, block 1230 may involve receiving inertial sensor data from an inertial sensor system of the mobile device, such as sensor data from one or more accelerometers or angular rate sensors (e.g., gyroscopes) within the mobile device. Based on the inertial sensor data, a control system of the mobile device may determine that the mobile device has changed position. In some implementations, image data from a front-facing or rear-facing camera may be used to detect that the mobile device has changed position. In some implementations, a user may be prompted to provide an indication when the mobile device has changed position, for example, by pressing or otherwise touching an image-capture button.
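The acquisition loop of blocks 1215-1230 can be summarized schematically as follows. This Python sketch is a behavioral outline only; the callables passed in (emit_light, acquire_frame and the reported positions) are hypothetical stand-ins for the light source, sensor and inertial-sensor interfaces.

```python
# Behavioral outline of blocks 1215-1230 (hypothetical interfaces, not an
# implementation from the disclosure): emit light, acquire image data at the
# first acquisition time delay, and repeat once per reported device position.
from typing import Callable, List, Tuple

def stitching_scan(emit_light: Callable[[], None],
                   acquire_frame: Callable[[float, float], list],
                   positions: List[Tuple[int, int]],
                   rgd_ns: float, rgw_ns: float) -> list:
    """One frame per device position; iterating over reported positions
    stands in for the position-change indication of block 1230."""
    frames = []
    for pos in positions:                         # block 1230 indication
        emit_light()                              # block 1215
        frame = acquire_frame(rgd_ns, rgw_ns)     # block 1220
        frames.append((pos, frame))               # block 1225 would process here
    return frames

# Stub demo with dummy hardware callables.
frames = stitching_scan(lambda: None,
                        lambda rgd, rgw: [0.0] * 4,
                        positions=[(0, 0), (0, 1), (0, 2)],
                        rgd_ns=650.0, rgw_ns=100.0)
print(len(frames))  # 3
```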
[0136] In block 1235, it is determined whether to continue obtaining ultrasonic image data. In some instances, block 1235 may involve receiving an indication from a user interface system to stop obtaining ultrasonic image data. In some instances, block 1235 may involve receiving an indication as to whether a predetermined time interval for obtaining ultrasonic image data has elapsed. [0137] If it is determined to continue obtaining ultrasonic image data in block 1235, in this example the process reverts to block 1215 and the light source system emits light at the current position of the mobile device. The process then continues to block 1220 and additional ultrasonic image data are acquired, at the current position, during the first acquisition time window that is initiated at the end time of the first acquisition time delay. [0138] The process then continues to block 1225, in which at least the additional ultrasonic image data are processed. In some examples, at least the additional ultrasonic image data may be displayed. According to some implementations, block 1225 may involve identifying distinctive features of the additional ultrasonic image data. In some such implementations, the distinctive features may be used for aligning the additional ultrasonic image data acquired in block 1220 with previously-acquired or subsequently-acquired ultrasonic image data from an overlapping area of the target object. [0139] Since at least two instances of ultrasonic image data will have been acquired after two iterations of blocks 1215 and 1220, block 1225 may involve a registration process for image stitching. In some implementations, the registration process may involve a search for image alignments that minimize the sum of absolute differences between values of overlapping image pixels. In some examples, the registration process may involve a random sample consensus (RANSAC) method. In some examples, block 1225 may involve an image alignment process. In some such implementations, block 1225 may involve a compositing process, during which images are aligned such that they appear as a single composite image. According to some implementations, block 1225 may involve an image blending process. For example, block 1225 may involve motion compensation, seam line adjustment to minimize the visibility of seams between adjacent images, etc. [0140] In this implementation, method 1200 continues until it is determined in block 1235 not to continue obtaining ultrasonic image data, at which point the process ends. However, some implementations may involve additional operations after it is determined in block 1235 not to continue obtaining ultrasonic image data. In some such implementations, stitched ultrasonic image data may be displayed, stored in a memory and/or transmitted to another device. [0141] Figure 13 is a flow diagram that shows blocks of a method of oxidized hemoglobin detection that may be performed with some disclosed biometric systems. As with other methods disclosed herein, the method outlined in Figure 13 may include more or fewer blocks than indicated. Moreover, the blocks of method 1300 are not necessarily performed in the order indicated.
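The sum-of-absolute-differences (SAD) criterion mentioned in paragraph [0139] can be illustrated with a brute-force integer-offset search, as in the sketch below. A practical stitcher would add RANSAC-style outlier rejection, subpixel refinement and blending; this example shows only the SAD alignment step, and the image data are synthetic.

```python
# Minimal sketch of SAD-based registration: search integer image offsets that
# minimize the mean absolute difference over the overlapping pixels of two
# frames. Illustrative only; not the disclosed implementation.
import numpy as np

def best_offset(img_a: np.ndarray, img_b: np.ndarray, max_shift: int = 8):
    """Return the (dy, dx) shift of img_b that best aligns it with img_a."""
    best, best_score = (0, 0), np.inf
    h, w = img_a.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ya, yb = max(0, dy), max(0, -dy)
            xa, xb = max(0, dx), max(0, -dx)
            oh, ow = h - abs(dy), w - abs(dx)
            if oh <= 0 or ow <= 0:
                continue
            a = img_a[ya:ya + oh, xa:xa + ow]
            b = img_b[yb:yb + oh, xb:xb + ow]
            score = np.abs(a - b).mean()   # normalized SAD over the overlap
            if score < best_score:
                best, best_score = (dy, dx), score
    return best

# Synthetic demo: two crops of the same scene, offset by (+2, -3) pixels.
rng = np.random.default_rng(1)
base = rng.random((40, 40))
img_a = base[4:36, 4:36]
img_b = base[6:38, 1:33]
print(best_offset(img_a, img_b))  # (2, -3): the SAD is zero at the true offset
```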
[0142] Here, block 1305 involves receiving an indication that a target object (such as a finger, palm or wrist) is positioned proximate a biometric system that includes an ultrasonic sensor array and a light source system. For example, block 1305 may involve receiving an indication that the target object is positioned on a platen of the biometric system. In some implementations, an application running on a mobile device having a biometric system with photoacoustic imaging capability may cue a user to touch or press a button to indicate when the target object is positioned on the platen. In some implementations, the biometric system may sense ultrasonically or capacitively when the target object is in contact with the platen surface and provide the indication accordingly. [0143] In this implementation, block 1310 involves selecting an acquisition time delay. For example, block 1310 may involve selecting an acquisition time delay according to user input received from a user interface system. The acquisition time delay may correspond with a target of interest, such as blood in a blood vessel in this example. In some implementations, block 1310 also may involve selecting a first wavelength of light and a second wavelength of light and a light intensity associated with each selected wavelength for illuminating the target object. According to some implementations, block 1310 may involve selecting one or more wavelengths of light according to user input regarding a desired type of functionality, such as oxidized hemoglobin detection, estimating a blood glucose level, etc. [0144] According to this example, block 1315 involves illuminating the target object with light of the first wavelength. For example, block 1315 may involve illuminating the target object with near-infrared light, which is strongly absorbed by oxygenated hemoglobin. [0145] Here, block 1320 involves acquiring first ultrasonic image data at the selected acquisition time delay. In this example, the first ultrasonic image data corresponds to acoustic waves that were induced by illuminating the target object with light of the first wavelength, such as near-infrared light. [0146] In this example, block 1325 involves illuminating the target object with light of the second wavelength. For example, instead of illuminating the target object with near-infrared light, block 1325 may involve illuminating the target object with a different wavelength of light, such as light in the visible range. Light in the visible range, such as red or green light, is not strongly absorbed by oxygenated hemoglobin, but instead tends to be transmitted. [0147] According to this implementation, block 1330 involves acquiring second ultrasonic image data at the selected acquisition time delay. In this example, the second ultrasonic image data correspond to acoustic waves that were induced by illuminating the target object with light of the second wavelength, such as red or green light. By comparing the first ultrasonic image data with the second ultrasonic image data, blood oxygen levels may be estimated. For example, with appropriate calibration coefficients, the signal levels from the first ultrasonic image data may be normalized by the signal levels from the second ultrasonic image data in a region of interest, such as within a blood vessel, and the ratio compared to a stored table of values that converts the normalized data into, for example, blood oxygen level as a percentage of oxygen saturation (e.g., SO2), as a percentage of peripheral oxygen saturation (e.g., SpO2) or as a percentage of arterial oxygen saturation (e.g., SaO2).
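Paragraph [0147]'s normalization-and-lookup step might be sketched as follows. The calibration table and saturation values below are made-up placeholders (not clinical or disclosed data); only the structure (normalize the wavelength-1 signal by the wavelength-2 signal within a region of interest, then interpolate a stored table) follows the description above.

```python
# Illustrative two-wavelength ratio estimation per blocks 1315-1330. The
# calibration table values are invented placeholders for demonstration only.
import numpy as np

RATIO_TABLE = np.array([0.4, 0.7, 1.0, 1.3, 1.6])        # assumed ratios
SPO2_TABLE = np.array([100.0, 97.0, 92.0, 85.0, 75.0])    # assumed SpO2 (%)

def estimate_spo2(img_wl1: np.ndarray, img_wl2: np.ndarray,
                  roi, cal_coeff: float = 1.0) -> float:
    """Normalize wavelength-1 signal by wavelength-2 signal in the region of
    interest and interpolate the stored calibration table."""
    ys, xs = roi
    ratio = cal_coeff * img_wl1[ys, xs].mean() / img_wl2[ys, xs].mean()
    return float(np.interp(ratio, RATIO_TABLE, SPO2_TABLE))

rng = np.random.default_rng(2)
nir = 0.8 + 0.05 * rng.random((16, 16))   # synthetic near-IR acquisition
red = 1.0 + 0.05 * rng.random((16, 16))   # synthetic red acquisition
print(estimate_spo2(nir, red, roi=(slice(4, 12), slice(4, 12))))
```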
[0148] Figure 14 representationally depicts aspects of a 4 x 4 pixel array 1435 of sensor pixels 1434 for an ultrasonic sensor system. Each pixel 1434 may be, for example, associated with a local region of piezoelectric sensor material (PSM), a peak detection diode (D1) and a readout transistor (M3); many or all of these elements may be formed on or in a substrate to form the pixel circuit 1436. In practice, the local region of piezoelectric sensor material of each pixel 1434 may transduce received ultrasonic energy into electrical charges. The peak detection diode D1 may register the maximum amount of charge detected by the local region of piezoelectric sensor material PSM. Each row of the pixel array 1435 may then be scanned, e.g., through a row select mechanism, a gate driver, or a shift register, and the readout transistor M3 for each column may be triggered to allow the magnitude of the peak charge for each pixel 1434 to be read by additional circuitry, e.g., a multiplexer and an A/D converter. The pixel circuit 1436 may include one or more TFTs to allow gating, addressing, and resetting of the pixel 1434. [0149] Each pixel circuit 1436 may provide information about a small portion of the object detected by the ultrasonic sensor system. While, for convenience of illustration, the example shown in Figure 14 is of a relatively coarse resolution, ultrasonic sensors having a resolution on the order of 500 pixels per inch or higher may be configured with an appropriately scaled structure. The detection area of the ultrasonic sensor system may be selected depending on the intended object of detection. For example, the detection area may range from about 5 mm x 5 mm for a single finger to about 3 inches x 3 inches for four fingers. Smaller and larger areas, including square, rectangular and non-rectangular geometries, may be used as appropriate for the target object.
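The peak-detect-then-scan readout described in paragraphs [0148] and [0149] can be modeled behaviorally, as in the sketch below: a per-pixel peak hold over the acquisition window followed by a row-by-row scan. This models the data flow only and is not a circuit-level description of the disclosed pixel.

```python
# Behavioral model of the pixel-array readout: each pixel's peak detector
# holds the maximum charge seen during the acquisition window, then rows are
# scanned one at a time and column values are transferred to the readout
# circuitry. Shapes and values are assumptions for illustration.
import numpy as np

def peak_detect(samples: np.ndarray) -> np.ndarray:
    """samples: (T, H, W) transduced charge over time -> (H, W) held peaks,
    emulating the per-pixel peak detection diode D1."""
    return samples.max(axis=0)

def scan_readout(peaks: np.ndarray):
    """Yield one row of peak values per row-select cycle (row-by-row scan)."""
    for row in range(peaks.shape[0]):
        yield peaks[row, :].copy()    # column readout of the selected row

rng = np.random.default_rng(3)
charge = rng.random((100, 4, 4))      # 100 time samples of a 4 x 4 array
for i, row_vals in enumerate(scan_readout(peak_detect(charge))):
    print(f"row {i}: {np.round(row_vals, 2)}")
```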
[0150] Figure 15A shows an example of an exploded view of an ultrasonic sensor system. In this example, the ultrasonic sensor system 1500a includes an ultrasonic transmitter 20 and an ultrasonic receiver 30 under a platen 40. According to some implementations, the ultrasonic receiver 30 may be an example of the ultrasonic sensor array 202 that is shown in Figure 2 and described above. In some implementations, the ultrasonic transmitter 20 may be an example of the optional ultrasonic transmitter 208 that is shown in Figure 2 and described above. The ultrasonic transmitter 20 may include a substantially planar piezoelectric transmitter layer 22 and may be capable of functioning as a plane wave generator. Ultrasonic waves may be generated by applying a voltage to the piezoelectric layer to expand or contract the layer, depending upon the signal applied, thereby generating a plane wave. In this example, the control system 206 may be capable of causing a voltage that may be applied to the planar piezoelectric transmitter layer 22 via a first transmitter electrode 24 and a second transmitter electrode 26. In this fashion, an ultrasonic wave may be made by changing the thickness of the layer via a piezoelectric effect. This ultrasonic wave may travel towards a finger (or other object to be detected), passing through the platen 40. A portion of the wave not absorbed or transmitted by the object to be detected may be reflected so as to pass back through the platen 40 and be received by the ultrasonic receiver 30. The first and second transmitter electrodes 24 and 26 may be metallized electrodes, for example, metal layers that coat opposing sides of the piezoelectric transmitter layer 22. [0151] The ultrasonic receiver 30 may include an array of sensor pixel circuits 32 disposed on a substrate 34, which also may be referred to as a backplane, and a piezoelectric receiver layer 36. In some implementations, each sensor pixel circuit 32 may include one or more TFT elements, electrical interconnect traces and, in some implementations, one or more additional circuit elements such as diodes, capacitors, and the like. Each sensor pixel circuit 32 may be configured to convert an electric charge generated in the piezoelectric receiver layer 36 proximate to the pixel circuit into an electrical signal. Each sensor pixel circuit 32 may include a pixel input electrode 38 that electrically couples the piezoelectric receiver layer 36 to the sensor pixel circuit 32. [0152] In the illustrated implementation, a receiver bias electrode 39 is disposed on a side of the piezoelectric receiver layer 36 proximal to the platen 40. The receiver bias electrode 39 may be a metallized electrode and may be grounded or biased to control which signals may be passed to the array of sensor pixel circuits 32. Ultrasonic energy that is reflected from the exposed (top) surface of the platen 40 may be converted into localized electrical charges by the piezoelectric receiver layer 36. These localized charges may be collected by the pixel input electrodes 38 and passed on to the underlying sensor pixel circuits 32. The charges may be amplified or buffered by the sensor pixel circuits 32 and provided to the control system 206. [0153] The control system 206 may be electrically connected (directly or indirectly) with the first transmitter electrode 24 and the second transmitter electrode 26, as well as with the receiver bias electrode 39 and the sensor pixel circuits 32 on the substrate 34. In some implementations, the control system 206 may operate substantially as described above. For example, the control system 206 may be capable of processing the amplified signals received from the sensor pixel circuits 32. [0154] The control system 206 may be capable of controlling the ultrasonic transmitter 20 and/or the ultrasonic receiver 30 to obtain ultrasonic image data, e.g., by obtaining fingerprint images. Whether or not the ultrasonic sensor system 1500a includes an ultrasonic transmitter 20, the control system 206 may be capable of obtaining attribute information from the ultrasonic image data. In some examples, the control system 206 may be capable of controlling access to one or more devices based, at least in part, on the attribute information. The ultrasonic sensor system 1500a (or an associated device) may include a memory system that includes one or more memory devices. In some implementations, the control system 206 may include at least a portion of the memory system. The control system 206 may be capable of obtaining attribute information from ultrasonic image data and storing the attribute information in the memory system. In some implementations, the control system 206 may be capable of capturing a fingerprint image, obtaining attribute information from the fingerprint image and storing attribute information obtained from the fingerprint image (which may be referred to herein as fingerprint image information) in the memory system.
According to some examples, the control system 206 may be capable of capturing a fingerprint image, obtaining attribute information from the fingerprint image and storing attribute information obtained from the fingerprint image even while maintaining the ultrasonic transmitter 20 in an "off" state. [0155] In some implementations, the control system 206 may be capable of operating the ultrasonic sensor system 1500a in an ultrasonic imaging mode or a force-sensing mode. In some implementations, the control system may be capable of maintaining the ultrasonic transmitter 20 in an "off" state when operating the ultrasonic sensor system in a force-sensing mode. The ultrasonic receiver 30 may be capable of functioning as a force sensor when the ultrasonic sensor system 1500a is operating in the force-sensing mode. In some implementations, the control system 206 may be capable of controlling other devices, such as a display system, a communication system, etc. In some implementations, the control system 206 may be capable of operating the ultrasonic sensor system 1500a in a capacitive imaging mode. [0156] The platen 40 may be any appropriate material that can be acoustically coupled to the receiver, with examples including plastic, ceramic, sapphire, metal and glass. In some implementations, the platen 40 may be a cover plate, e.g., a cover glass or a lens glass for a display. Particularly when the ultrasonic transmitter 20 is in use, fingerprint detection and imaging can be performed through relatively thick platens if desired, e.g., 3 mm and above. However, for implementations in which the ultrasonic receiver 30 is capable of imaging fingerprints in a force detection mode or a capacitance detection mode, a thinner and relatively more compliant platen 40 may be desirable. According to some such implementations, the platen 40 may include one or more polymers, such as one or more types of parylene, and may be substantially thinner. In some such implementations, the platen 40 may be tens of microns thick or even less than 10 microns thick. [0157] Examples of piezoelectric materials that may be used to form the piezoelectric receiver layer 36 include piezoelectric polymers having appropriate acoustic properties, for example, an acoustic impedance between about 2.5 MRayls and 5 MRayls. Specific examples of piezoelectric materials that may be employed include ferroelectric polymers such as polyvinylidene fluoride (PVDF) and polyvinylidene fluoride-trifluoroethylene (PVDF-TrFE) copolymers. Examples of PVDF copolymers include 60:40 (molar percent) PVDF-TrFE, 70:30 PVDF-TrFE, 80:20 PVDF-TrFE, and 90:10 PVDF-TrFE. Other examples of piezoelectric materials that may be employed include polyvinylidene chloride (PVDC) homopolymers and copolymers, polytetrafluoroethylene (PTFE) homopolymers and copolymers, and diisopropylammonium bromide (DIPAB). [0158] The thickness of each of the piezoelectric transmitter layer 22 and the piezoelectric receiver layer 36 may be selected so as to be suitable for generating and receiving ultrasonic waves. In one example, a PVDF planar piezoelectric transmitter layer 22 is approximately 28 µm thick and a PVDF-TrFE receiver layer 36 is approximately 12 µm thick. Example frequencies of the ultrasonic waves may be in the range of 5 MHz to 30 MHz, with wavelengths on the order of a millimeter or less.
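As a quick consistency check of the figures in paragraph [0158]: with an assumed acoustic velocity of about 1,500 m/s (typical of soft tissue and many polymers), the stated 5 MHz to 30 MHz band corresponds to wavelengths of roughly 0.3 mm down to 0.05 mm, in line with "on the order of a millimeter or less".

```python
# Wavelength = sound speed / frequency. The sound speed here is an assumed
# representative value, not one taken from the disclosure.
SPEED_M_PER_S = 1500.0  # assumed acoustic velocity

for f_mhz in (5, 10, 20, 30):
    wavelength_mm = SPEED_M_PER_S / (f_mhz * 1e6) * 1e3
    print(f"{f_mhz} MHz -> {wavelength_mm:.3f} mm")
```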
[0159] Figure 15B shows an exploded view of an alternative example of an ultrasonic sensor system. In this example, the piezoelectric receiver layer 36 has been formed into discrete elements 37. In the implementation shown in Figure 15B, each of the discrete elements 37 corresponds with a single pixel input electrode 38 and a single sensor pixel circuit 32. However, in alternative implementations of the ultrasonic sensor system 1500b, there is not necessarily a one-to-one correspondence between each of the discrete elements 37, a single pixel input electrode 38 and a single sensor pixel circuit 32. For example, in some implementations there may be multiple pixel input electrodes 38 and sensor pixel circuits 32 for a single discrete element 37. [0160] Figures 15A and 15B show example arrangements of ultrasonic transmitters and receivers in an ultrasonic sensor system, with other arrangements possible. For example, in some implementations, the ultrasonic transmitter 20 may be above the ultrasonic receiver 30 and therefore closer to the object(s) 25 to be detected. In some implementations, the ultrasonic transmitter may be included with the ultrasonic sensor array (e.g., a single-layer transmitter and receiver). In some implementations, the ultrasonic sensor system may include an acoustic delay layer. For example, an acoustic delay layer may be incorporated into the ultrasonic sensor system between the ultrasonic transmitter 20 and the ultrasonic receiver 30. An acoustic delay layer may be employed to adjust the ultrasonic pulse timing, and at the same time electrically insulate the ultrasonic receiver 30 from the ultrasonic transmitter 20. The acoustic delay layer may have a substantially uniform thickness, with the material used for the delay layer and/or the thickness of the delay layer selected to provide a desired delay in the time for reflected ultrasonic energy to reach the ultrasonic receiver 30. In this manner, an energy pulse that carries information about the object, by virtue of having been reflected by the object, may be made to arrive at the ultrasonic receiver 30 during a time range when it is unlikely that energy reflected from other parts of the ultrasonic sensor system is arriving at the ultrasonic receiver 30. In some implementations, the substrate 34 and/or the platen 40 may serve as an acoustic delay layer. [0161] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c. [0162] The various illustrative logics, logical blocks, modules, circuits and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.
[0163] The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function. [0164] In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also may be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage media for execution by, or to control the operation of, data processing apparatus. [0165] If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium, such as a non-transitory medium. The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that may be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, non-transitory media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection may be properly termed a computer-readable medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine readable medium and computer-readable medium, which may be incorporated into a computer program product.
[0166] Various modifications to the implementations described in this disclosure may be readily apparent to those having ordinary skill in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the claims, the principles and the novel features disclosed herein. The word "exemplary" is used exclusively herein, if at all, to mean "serving as an example, instance, or illustration." Any implementation described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other implementations. [0167] Certain features that are described in this specification in the context of separate implementations also may be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also may be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. [0168] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results. [0169] It will be understood that unless features in any of the particular described implementations are expressly identified as incompatible with one another or the surrounding context implies that they are mutually exclusive and not readily combinable in a complementary and/or supportive sense, the totality of this disclosure contemplates and envisions that specific features of those complementary implementations may be selectively combined to provide one or more comprehensive, but slightly different, technical solutions. It will therefore be further appreciated that the above description has been given by way of example only and that modifications in detail may be made within the scope of this disclosure.
Methods for forming conductive vias or through-wafer interconnects in semiconductor substrates and resulting through-wafer interconnect structures are disclosed. In one embodiment of the present invention, a method of forming a through-wafer interconnect structure includes the acts of forming an aperture in a first surface of a substrate, depositing a first insulative or dielectric layer on an inner surface of the aperture, depositing an electrically conductive layer over the first dielectric layer, depositing a second insulative or dielectric layer on the inner surface of the aperture over the electrically conductive material, and exposing a portion of the electrically conductive layer through the second, opposing surface of the substrate. Semiconductor devices including through-wafer interconnects produced with the methods of the instant invention are also described.
CLAIMS What is claimed is: 1. A method of forming a through-wafer interconnect, the method comprising: forming a blind aperture in a first surface of a substrate; depositing a first dielectric layer on an inner surface of the aperture; depositing an electrically conductive layer over the first dielectric layer; depositing a second dielectric layer over the electrically conductive layer; and exposing a portion of the electrically conductive layer through a second, opposing surface of the substrate. 2. The method according to claim 1, further comprising exposing the first dielectric layer through a second surface of the substrate and placing a third dielectric layer over the opposing, second surface of the substrate and the exposed first dielectric layer prior to exposing the conductive layer through the second, opposing surface of the substrate. 3. The method according to claim 2, further comprising exposing a portion of the first dielectric layer through the second, opposing surface of the substrate prior to exposing a portion of the electrically conductive layer. 4. The method according to claim 2, wherein exposing a portion of the first dielectric layer through the second surface of the substrate comprises removing a portion of the substrate. 5. The method according to claim 2, wherein exposing the electrically conductive layer through the second, opposing surface of the substrate further comprises removing a portion of the third dielectric layer and a portion of the first dielectric layer. 6. The method according to claim 1, wherein forming the aperture in the first surface of the substrate comprises forming the aperture through a bond pad on the first surface of the substrate. 7. The method according to claim 1, wherein depositing the second dielectric layer over the electrically conductive layer further comprises depositing the second dielectric layer over the first surface of the substrate and the inner surface of the aperture including the electrically conductive layer and wherein the method further includes removing the second dielectric layer at least from the first surface of the substrate. 8. The method according to claim 7, wherein removing the second dielectric layer at least from the first surface of the substrate comprises spacer etching. 9. The method according to claim 1, further comprising disposing a conductive material over the portion of the electrically conductive layer exposed through the second, opposing surface of the substrate. 10. The method according to claim 1, further comprising filling the aperture with a filler material. 11. The method according to claim 1, wherein depositing the electrically conductive layer comprises depositing at least one layer of a metal over the first dielectric layer.
12. A method of forming a through-wafer interconnect in a substrate, the method comprising: forming a blind aperture in a first surface of the substrate; depositing a first dielectric layer on an inner surface of the aperture; depositing an electrically conductive layer over the first dielectric layer; depositing a second dielectric layer on the first surface of the substrate and over the electrically conductive layer; removing the second dielectric layer from the first surface of the substrate, such that the second dielectric layer remains over at least a portion of the electrically conductive layer; exposing a portion of the first dielectric layer through a second surface of the substrate; placing a third dielectric layer over the opposing, second surface of the substrate and the exposed portion of the first dielectric layer; removing a portion of the third dielectric layer to expose a portion of the first dielectric layer through a remaining portion of the third dielectric layer; and removing the exposed portion of the first dielectric layer and exposing a portion of the electrically conductive layer through the second, opposing surface of the substrate and the remaining portion of the third dielectric layer. 13. A semiconductor device, comprising: a substrate having a first surface and a second, opposing surface; a through-wafer interconnect structure extending from the first surface to the second, opposing surface, the through-wafer interconnect comprising: an electrically conductive material extending from the first surface of the substrate to the second, opposing surface of the substrate, wherein a first portion of the electrically conductive material is exposed through the first surface of the substrate and a second portion of the electrically conductive material is exposed through the second, opposing surface of the substrate; a first dielectric material disposed between the electrically conductive material and the substrate and extending from the second, opposing surface of the substrate to the first portion of the conductive material; and a second dielectric material disposed over a portion of the electrically conductive material and exhibiting a surface that defines a blind aperture extending from the first surface toward the second, opposing surface. 14. The semiconductor device of claim 13, further comprising a dielectric layer covering at least a portion of the second, opposing surface of the substrate. 15. The semiconductor device of claim 14, wherein the dielectric layer covering at least a portion of the second, opposing surface comprises at least one of Parylene(TM) polymer, pyralin polymer, PBO, BCB, dielectric epoxy, low silane oxide, silicon dioxide, and aluminum oxide. 16. The semiconductor device of claim 13, further comprising a second electrically conductive material disposed on the second portion of the conductive material. 17. The semiconductor device of claim 16, wherein the second electrically conductive material comprises a material selected from the group consisting of nickel, titanium nitride, titanium, polysilicon, palladium, tin, tantalum, tungsten, cobalt, copper, silver, aluminum, iridium, gold, molybdenum, platinum, nickel-phosphorus, palladium-phosphorus, cobalt-phosphorus, and combinations of any thereof. 18. The semiconductor device of claim 13, further comprising a filler material disposed in the blind aperture defined by the surface of the second dielectric material.
19. The semiconductor device of claim 18, wherein the filler material is selected from a group consisting of nickel, titanium nitride, titanium, silicon nitride, polysilicon, palladium, tin, lead, tantalum, tungsten, cobalt, copper, silver, aluminum, iridium, gold, molybdenum, platinum, nickel-phosphorus, palladium-phosphorus, cobalt-phosphorus, and combinations of any thereof. 20. The semiconductor device of claim 13, wherein the electrically conductive material comprises a material selected from the group consisting of nickel, titanium nitride, titanium, silicon nitride, polysilicon, palladium, tin, tantalum, tungsten, cobalt, copper, silver, aluminum, iridium, gold, molybdenum, platinum, nickel-phosphorus, palladium-phosphorus, cobalt-phosphorus, conductive polymer and combinations of any thereof. 21. The semiconductor device of claim 13, wherein the first dielectric material comprises a material selected from the group consisting of a low silane oxide, Parylene(TM) polymer, PBO, BCB, silicon dioxide, aluminum oxide, tetraethyl orthosilicate, spin-on glass, thermal oxide, aluminum rich oxide, silicon nitride, silicon oxynitride, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, and combinations of any thereof. 22. The semiconductor device of claim 13, wherein the second dielectric material comprises a material selected from the group consisting of a low silane oxide, Parylene(TM) polymer, PBO, BCB, silicon dioxide, aluminum oxide, tetraethyl orthosilicate, spin-on glass, thermal oxide, aluminum rich oxide, silicon nitride, silicon oxynitride, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, and combinations of any thereof. 23. The semiconductor device of claim 13, wherein the substrate comprises a material selected from the group consisting of silicon, gallium arsenide, indium phosphide, polysilicon, silicon-on-insulator, silicon-on-ceramic, silicon-on-glass, silicon-on-sapphire, a polymer, and combinations of any thereof. 24. The semiconductor device of claim 13, wherein the through-wafer interconnect has a through-substrate length of about 150 [mu]m or greater. 25. The semiconductor device of claim 13, wherein the through-wafer interconnect has a cross-sectional width of about 15 [mu]m or greater. AMENDED CLAIMS received by the International Bureau on 16 January 2007 (16.01.2007) 1. A method of forming a through-wafer interconnect, the method comprising: forming a blind aperture in a first surface of a substrate proximate a bond pad; forming a collar on a top surface of the bond pad, the collar extending to an inner surface of the aperture; depositing a first dielectric layer on the inner surface of the aperture adjacent the collar; forming an interconnecting pad adjacent the aperture by depositing an electrically conductive layer over the first dielectric layer within the aperture and over a portion of the collar, the electrically conductive layer terminating on a plane parallel to the first surface of the substrate; depositing a second dielectric layer over the electrically conductive layer; and exposing a portion of the electrically conductive layer through a second, opposing surface of the substrate. 2. The method according to claim 1, further comprising exposing the first dielectric layer through a second surface of the substrate and placing a third dielectric layer over the opposing, second surface of the substrate and the exposed first dielectric layer prior to exposing the conductive layer through the second, opposing surface of the substrate.
3. The method according to claim 2, further comprising exposing a portion of the first dielectric layer through the second, opposing surface of the substrate prior to exposing a portion of the electrically conductive layer. 4. The method according to claim 2, wherein exposing a portion of the first dielectric layer through the second surface of the substrate comprises removing a portion of the substrate. 5. The method according to claim 2, wherein exposing the electrically conductive layer through the second, opposing surface of the substrate further comprises removing a portion of the third dielectric layer and a portion of the first dielectric layer. 6. The method according to claim 1, wherein forming the aperture in the first surface of the substrate comprises forming the aperture through the bond pad on the first surface of the substrate. 7. The method according to claim 1, wherein depositing the second dielectric layer over the electrically conductive layer further comprises depositing the second dielectric layer over the first surface of the substrate and the inner surface of the aperture including the electrically conductive layer and wherein the method further includes removing the second dielectric layer at least from the first surface of the substrate. 8. The method according to claim 7, wherein removing the second dielectric layer at least from the first surface of the substrate comprises spacer etching. 9. The method according to claim 1, further comprising disposing a conductive material over the portion of the electrically conductive layer exposed through the second, opposing surface of the substrate. 10. The method according to claim 1, further comprising filling the aperture with a filler material. 11. The method according to claim 1, wherein depositing the electrically conductive layer comprises depositing at least one layer of a metal over the first dielectric layer. 12. A method of forming a through-wafer interconnect in a substrate, the method comprising: forming a blind aperture in a first surface of the substrate proximate a bond pad; forming a collar on a top surface of the bond pad, the collar extending to an inner surface of the aperture; depositing a first dielectric layer on the inner surface of the aperture; forming an interconnecting pad by depositing an electrically conductive layer over the first dielectric layer, the interconnecting pad in electrical contact with the bond pad; depositing a second dielectric layer on the first surface of the substrate and over the electrically conductive layer; removing the second dielectric layer from the first surface of the substrate, such that the second dielectric layer remains over at least a portion of the electrically conductive layer; exposing a portion of the first dielectric layer through a second surface of the substrate; placing a third dielectric layer over the opposing, second surface of the substrate and the exposed portion of the first dielectric layer; removing a portion of the third dielectric layer to expose a portion of the first dielectric layer through a remaining portion of the third dielectric layer; and removing the exposed portion of the first dielectric layer and exposing a portion of the electrically conductive layer through the second, opposing surface of the substrate and the remaining portion of the third dielectric layer.
13. A semiconductor device, comprising: a substrate having a first surface and a second, opposing surface; a bond pad on the first surface of the substrate; a through-wafer interconnect structure proximate the bond pad and extending from the first surface to the second, opposing surface, the through-wafer interconnect comprising: a collar extending from a top surface of the bond pad toward an inner surface of the through-wafer interconnect structure; an electrically conductive material initiating on a top surface of the collar and extending from the first surface of the substrate to the second, opposing surface of the substrate, wherein a first portion of the electrically conductive material is exposed through the first surface of the substrate and a second portion of the electrically conductive material is exposed through the second, opposing surface of the substrate; a first dielectric material disposed between the electrically conductive material and the substrate and extending from the second, opposing surface of the substrate to the first portion of the conductive material; and a second dielectric material disposed over a portion of the electrically conductive material and exhibiting a surface that defines a blind aperture extending from the first surface toward the second, opposing surface. 14. The semiconductor device of claim 13, further comprising a dielectric layer covering at least a portion of the second, opposing surface of the substrate. 15. The semiconductor device of claim 14, wherein the dielectric layer covering at least a portion of the second, opposing surface comprises at least one of Parylene(TM) polymer, pyralin polymer, PBO, BCB, dielectric epoxy, low silane oxide, silicon dioxide, and aluminum oxide. 16. The semiconductor device of claim 13, further comprising a second electrically conductive material disposed on the second portion of the conductive material. 17. The semiconductor device of claim 16, wherein the second electrically conductive material comprises a material selected from the group consisting of nickel, titanium nitride, titanium, polysilicon, palladium, tin, tantalum, tungsten, cobalt, copper, silver, aluminum, iridium, gold, molybdenum, platinum, nickel-phosphorus, palladium-phosphorus, cobalt-phosphorus, and combinations of any thereof. 18. The semiconductor device of claim 13, further comprising a filler material disposed in the blind aperture defined by the surface of the second dielectric material. 19. The semiconductor device of claim 18, wherein the filler material is selected from a group consisting of nickel, titanium nitride, titanium, silicon nitride, polysilicon, palladium, tin, lead, tantalum, tungsten, cobalt, copper, silver, aluminum, iridium, gold, molybdenum, platinum, nickel-phosphorus, palladium-phosphorus, cobalt-phosphorus, and combinations of any thereof. 20. The semiconductor device of claim 13, wherein the electrically conductive material comprises a material selected from the group consisting of nickel, titanium nitride, titanium, silicon nitride, polysilicon, palladium, tin, tantalum, tungsten, cobalt, copper, silver, aluminum, iridium, gold, molybdenum, platinum, nickel-phosphorus, palladium-phosphorus, cobalt-phosphorus, conductive polymer and combinations of any thereof.
The semiconductor device of claim 13, wherein the first dielectric material comprises a material selected from the group consisting of a low silane oxide, Parylene(TM) polymer, PBO, BCB, silicon dioxide, aluminum oxide, tetraethyl orthosilicate, spin-on glass, thermal oxide, aluminum rich oxide, silicon nitride, silicon oxynitride, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, and combinations of any thereof. 22. The semiconductor device of claim 13, wherein the second dielectric material comprises a material selected from the group consisting of a low silane oxide, Parylene(TM) polymer, PBO, BCB, silicon dioxide, aluminum oxide, tetraethyl orthosilicate, spin-on glass, thermal oxide, aluminum rich oxide, silicon nitride, silicon oxynitride, phosphosilicate glass, borosilicate glass, borophosphosilicate glass, and combinations of any thereof. 23. The semiconductor device of claim 13, wherein the substrate comprises a material selected from the group consisting of silicon, gallium arsenide, indium phosphide, polysilicon, silicon-on-insulator, silicon-on-ceramic, silicon-on-glass, silicon-on-sapphire, a polymer, and combinations of any thereof. 24. The semiconductor device of claim 13, wherein the through-wafer interconnect has a through-substrate length of about 150 µm or greater. 25. The semiconductor device of claim 13, wherein the through-wafer interconnect has a cross-sectional width of about 15 µm or greater. 26. The semiconductor device of claim 18, wherein the filler is in electrical connection with the conductive layer. 27. The method of claim 1, further comprising removing a portion of the second dielectric layer within the aperture to expose the electrically conductive layer. 28. The method of claim 12, further comprising removing a portion of the second dielectric layer within the aperture to expose the electrically conductive layer.
METHODS OF FORMING THROUGH-WAFER INTERCONNECTS AND STRUCTURES RESULTING THEREFROM TECHNICAL FIELD The present invention relates generally to semiconductor manufacturing techniques and methods of forming electrical contacts in semiconductor substrates. More particularly, the present invention relates to methods of forming through-wafer interconnects in semiconductor substrates and structures resulting therefrom. BACKGROUND Semiconductor substrates often have vias extending therethrough, wherein the vias are filled with conductive materials to form interconnects (commonly known as a through-wafer interconnect, or "TWI") used, for example, to connect circuitry on one surface of the semiconductor device to circuitry on another surface thereof, or to accommodate connection with external circuitry. As used herein, a "via" refers to a hole or aperture having conductive material or a conductive member therein and which extends substantially through a substrate (e.g., from one surface substantially to another opposing surface). The via may be used to accommodate electrical connection of a semiconductor device, an electrical component, or circuitry located on a side of the substrate other than where bond pads have been formed. Vias are conventionally formed in a variety of substrates for a variety of uses. For example, interposers for single die packages, interconnects for multi-die packages, and contact probe cards for temporarily connecting semiconductor dice to a test apparatus often employ vias in their structures. In a more specific example, a test apparatus may be configured for the temporary and simultaneous connection of bond pads of a semiconductor die (e.g., on a full or partial wafer test apparatus). A substrate, employed as a test interposer, may include vias passing therethrough providing a pattern of conductive interconnect structures on one side of the interposer substrate to match the bond pad patterns of the semiconductor dice, as well as a plurality of interconnect structures on an opposing side of the interposer substrate for connection with the test apparatus. Thus, the vias of the interposer substrate provide electrical interconnection between the semiconductor dice (or other device) and the test apparatus. Where a via is to be formed through a semiconductive material such as silicon, one known method for constructing the via includes forming a first hole (sometimes referred to as a "precursor hole") by a so-called "trepan" process, wherein a very small bit of a router or drill is rotated about a longitudinal axis while being moved radially about the axis to create the precursor hole. The precursor hole is larger in diameter than the intended diameter of the completed via. Following precursor hole formation, an insulation (or dielectric) layer is formed in the hole by either forming a thin silicon oxide layer on the hole's surface by exposure to an oxidizing atmosphere or by oxidizing the hole and then coating it with an insulative polymeric material. When a polymeric insulative material coating is desired, a suitable polymer, such as Parylene(TM) polymer, may be vapor deposited over the substrate and into each precursor hole on one side thereof while applying a negative pressure (i.e., a vacuum) to an opposing end of the hole. 
In some cases, because adhesion of a given polymer material to the silicon may be relatively poor, the surface of the hole may be oxidized to improve adhesion of the polymer material. The insulative polymeric material is drawn into and fills each precursor hole and the polymer is cured. A via hole is drilled (such as by percussion drill or laser) or otherwise formed in the hardened insulative polymeric material so as to exhibit a diameter smaller than that of the precursor hole. The via hole is then filled with a conductive material, which conventionally includes a metal, metal alloy, or metal-containing material, to provide a conductive path between the opposing surfaces of the substrate. The conductive material of the via is insulated from the substrate itself by the layer or layers of insulative polymeric material. While such a method provides adequate structures for enabling electrical interconnection from one surface of a substrate to another surface of the substrate, it is noted that it is difficult to achieve dense spacing of vias and difficult to form vias exhibiting high aspect ratios (i.e., height to width, or cross-sectional dimension ratios) using such a method. In another prior art method of forming a via, a silicon wafer is provided with a thin layer of silicon dioxide on both major, opposing surfaces. A pattern is formed on the wafer by use of mask layers which prevent etching in non-via areas. An etchant is applied to both major surfaces to form holes or "feedthroughs" which meet in the middle of the wafer. A dielectric layer is then formed over the wafer surfaces including the feedthrough side walls. A metal layer is formed over the dielectric layer and conductive material is placed in the feedthroughs to complete the conductive vias. It is noted that, in order to isolate each via, the metal layer must be configured to cover the feedthrough surfaces only, or be subsequently removed from the outer surfaces of the via and wafer. Again, it is difficult to obtain high aspect ratio vias using such conventional methods and, therefore, provide a high level of density of such vias for a given application. Other prior art methods for forming vias are generally illustrated in U.S. Patent 5,166,097 to Tanielian, U.S. Patent 5,063,177 to Geller et al., and U.S. Patent 6,400,172 to Akram et al. It is a continuing desire to improve the manufacturing techniques and processes used in semiconductor fabrication. It would be advantageous to provide a more efficient method for forming through-wafer interconnects that enables a higher density of vias, enables the fabrication of high aspect ratio TWI structures and improves the simplicity of the fabrication process while maintaining or improving the reliability of the TWI structures. DISCLOSURE OF THE INVENTION The present invention discloses methods for forming conductive vias, herein also known as through-wafer interconnects (TWIs), in substrates and the resulting semiconductor devices, electrical components and assemblies including TWI structures. In one embodiment, a method of forming a through-wafer interconnect comprises forming an aperture in a first surface of a substrate, depositing a first dielectric layer on an inner surface of the aperture, depositing an electrically conductive layer over the first dielectric layer, depositing a second dielectric layer on the inner surface of the aperture, and exposing a portion of the electrically conductive layer through the second, opposing surface of the substrate. 
In accordance with another aspect of the present invention, another method is provided of forming through-wafer interconnect structures. The method includes forming an aperture in a first surface of the substrate, depositing a first dielectric layer on an inner surface of the aperture, depositing a conductive layer over the first dielectric layer, depositing a second dielectric layer over the first surface and at least a portion of the electrically conductive layer, and removing the second dielectric layer from the first surface of the substrate such that the second dielectric layer remains over at least a portion of the electrically conductive layer. A portion of the first dielectric layer is exposed through a second surface of the substrate and a third dielectric layer is disposed over the opposing, second surface of the substrate and the exposed portion of the first dielectric layer. A portion of the third dielectric layer is removed to expose a portion of the first dielectric layer through a remaining portion of the third dielectric layer. The exposed portion of the first dielectric layer is removed thereby exposing a portion of the electrically conductive layer through the second, opposing surface of the substrate and the remaining portion of the third dielectric layer. In accordance with another aspect of the present invention, a semiconductor device is provided. The semiconductor device comprises a substrate having a first surface and a second, opposing surface, and a through-wafer interconnect extending into the first surface of the substrate. The through-wafer interconnect includes an electrically conductive material extending from the first surface of the substrate to the second, opposing surface of the substrate, wherein a first portion of the electrically conductive material is exposed through the first surface of the substrate and a second portion of the electrically conductive material is exposed through the second, opposing surface of the substrate. A first dielectric material is disposed between the electrically conductive material and the substrate and extends from the second, opposing surface of the substrate to the first portion of the conductive material. A second dielectric material is disposed over a portion of the electrically conductive material and exhibits a surface that defines a blind aperture extending from the first surface toward the second, opposing surface. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS In the drawings, which depict nonlimiting embodiments of various features of the present invention, and in which various elements are not necessarily to scale: FIGS. 1-8 illustrate cross-sectional views of semiconductor devices at different stages of fabrication, including the formation of through wafer interconnect structures, in accordance with certain aspects of the present invention; and FIG. 9 is a schematic showing a computing system including a semiconductor device configured in accordance with the present invention. MODE(S) FOR CARRYING OUT THE INVENTION In the present invention, semiconductor wafers or portions thereof, substrates and components in which a conductive via or through-wafer interconnect (TWI) is to be formed are identified herein as "substrates" regardless of the purpose of the TWI or material of construction of the substrate or TWI. 
Thus, for example, the term "substrate" may be used in reference to semiconductor wafers, semiconductor wafer portions, other bulk semiconductor substrates, semiconductor devices, interposers, probe test cards, and the like. The invention is described as generally applied to the construction of a semiconductor substrate. Methods of making the TWIs in semiconductor devices are described as well as the resulting structures, components and assemblies so made. The methods of forming the TWIs and the resulting structures benefit from the use of lower temperature processes than conventional methods, since some of the methods disclosed herein employ polymers applied at ambient temperatures. Further, some of the methods of forming the TWIs of the present invention do not require venting for a hot solder process and no flux clean is required because no solder fill process is required. Additionally, the methods described herein enable the fabrication of high aspect ratio TWIs which may or may not be filled with an electrically conductive material. Referring now to FIGS. 1-8, methods and structures in accordance with an embodiment of the present invention are disclosed. FIG. 1 illustrates a cross-section of a semiconductor device 10 having a first surface 12 and an opposing, second surface 14 in accordance with an example of one embodiment of the invention. The semiconductor device 10 comprises a semiconductor substrate 16 (e.g., a silicon substrate) and optionally may include a dielectric layer (not shown), a passivation layer 17 or conductive elements including bond pads 18, which may be coupled with internal circuitry (not shown) as will be appreciated by those of ordinary skill in the art. The substrate 16 may comprise, without limitation, a bulk semiconductor substrate (e.g., a full or partial wafer of a semiconductor material, such as silicon, gallium arsenide, indium phosphide, polysilicon, a silicon-on-insulator (SOI) type substrate, such as silicon-on-ceramic (SOC), silicon-on-glass (SOG), silicon-on-sapphire (SOS), or a polymeric material suitable for semiconductor fabrication, etc.) that may include a plurality of semiconductor dice or other semiconductor devices formed therein. If the substrate 16 is a wafer, the substrate 16 may also be a full thickness wafer as received from a vendor or a wafer that has been thinned (e.g., thereby defining the second surface 14) after fabrication of the semiconductor device 10. While not specifically illustrated, the semiconductor device 10 may further include, or be further processed to include, various conductive elements, active areas or regions, transistors, capacitors, redistribution lines, or other structures used to produce integrated circuitry. The TWIs of the present invention may be formed at the semiconductor die level or at the wafer (or other bulk substrate) level, depending on the particular needs of the manufacturing process. Thus, while FIGS. 1-8 illustrate the fabrication of a single TWI in association with a single bond pad 18 (shown as two cross-sectional portions in the drawings), it should be understood that the semiconductor device 10 may be constructed to include multiple TWIs and that such TWIs may be associated with internal circuitry (not shown) or may be formed in "dead space" of the substrate 16. Further, as illustrated in FIG. 1, and depending on the type of process used to place the bond pad 18 on the semiconductor device 10, the bond pad 18 may be partially covered with a passivation layer 17. 
As will be appreciated by those of ordinary skill in the art, the passivation layer 17 may include an appropriate layer of insulative or dielectric material disposed on a surface of the substrate to prevent oxidation of the semiconductive material. As shown in FIG. 1, an aperture 20, formed as a blind hole in the presently disclosed embodiment, is formed in the first surface 12 of the semiconductor device 10. In one embodiment, the aperture 20 is patterned and etched through the bond pad 18 and into the substrate 16. The aperture 20 may be formed by appropriately masking and patterning a photoresist or other material (e.g., oxide hard mask), and wet or dry etching to form the aperture 20 to a predetermined depth suitable for the formation of the TWI, such as, for example, deep silicon etched vias. One suitable "wet" metal etch employs a mixture of nitric acid and hydrofluoric (HF) acid in deionized (DI) water. "Dry" etching may also be termed reactive ion etching (RIE). Either a wet or dry etchant may be used to form the aperture 20 as well as to etch through the bond pad 18 (and other materials above the substrate 16, if present). Further, if the substrate 16 is made from silicon, a native silicon dioxide layer may require removal, and an HF etchant may be used for this purpose prior to etching of the underlying silicon of the substrate 16. In other embodiments, the aperture 20 may be formed, for example, by laser drilling, laser ablation or mechanical drilling. After formation, the aperture 20 may be subjected to a cleaning process to remove any unwanted reactants or impurities formed during the aperture formation process. After the aperture 20 is formed, a metallized or other conductive layer 22 may be formed on a portion of the bond pad 18. The conductive layer 22 may provide increased material adhesion between the bond pad 18 and a subsequent conductive material such as a metal liner or a material plating. For example, if the bond pad were formed of a material such as aluminum, and if a subsequent conductive layer of material comprised nickel, a conductive layer 22 may be disposed on the bond pad 18 to ensure the adherence of the nickel plating. Still referring to FIG. 1, an insulation layer 24 is applied to the inner surface of the aperture 20. The insulation layer 24 may comprise a dielectric material such as, for example, a pulsed deposition layer (PDL) of low silane oxide (LSO), a Parylene(TM) polymer such as that which is available from the Specialty Coating Systems division of Cookson Electronics, silicon dioxide (SiO2), aluminum oxide (Al2O3), an organic polymeric material suitable for passivation purposes such as polybenzoxazole (PBO) or benzocyclobutene (BCB), or combinations of any thereof. Other dielectric materials that may be used as the insulation layer 24 include tetraethyl orthosilicate (TEOS), spin-on glass, thermal oxide, a pulse deposition layer comprising aluminum rich oxide, silicon nitride, silicon oxynitride, a glass (e.g., borophosphosilicate glass (BPSG), phosphosilicate glass, borosilicate glass), or any other suitable dielectric material known in the art. Methods of depositing the insulation layer 24 are known by those of ordinary skill in the art and may vary depending on the type of material used for the insulation layer 24. A conductive layer 26 is deposited over the insulation layer 24 and may be partially disposed over the first surface 12 of the semiconductor device 10 in a manner circumscribing the aperture 20. 
The conductive layer 26 comprises at least one layer of a conductive material such as, for example, nickel (Ni). In one embodiment, the conductive layer 26 may include another layer such as a plating-attractive coating (PAC) or some type of seed layer that is placed over the insulation layer 24 to enhance the deposition of the conductive layer 26. For instance, titanium nitride (TiN) may be placed over the insulation layer 24 using chemical vapor deposition (CVD) techniques to act as the PAC for the subsequent deposition of the seed layer with a plating process such as, for example, electroless or electrolytic plating to form the conductive layer 26. Other conductive materials that may be used to form the conductive layer 26 include, without limitation, titanium (Ti), polysilicon, palladium (Pd), tin (Sn), tantalum (Ta), tungsten (W), cobalt (Co), copper (Cu), silver (Ag), aluminum (Al), iridium, gold (Au), molybdenum (Mo), platinum (Pt), nickel-phosphorus (NiP), palladium-phosphorus (Pd-P), cobalt-phosphorus (Co-P), a Co-W-P alloy, other alloys of any of the foregoing metals, a conductive polymer or conductive material entrained in a polymer (i.e., conductive or conductor-filled epoxy) and mixtures of any thereof. Other deposition processes that may be used to deposit the various layers of the conductive layer 26 include metallo-organic chemical vapor deposition (MOCVD), physical vapor deposition (PVD), plasma-enhanced chemical vapor deposition (PECVD), vacuum evaporation and sputtering. It will be appreciated by those of ordinary skill in the art that the type and thickness of material of the various layers or materials used for the conductive layer 26 and the deposition processes used to deposit the layers of the conductive layer 26 will vary depending on, for example, the electrical requirements and the type of desired material used to form the TWI and the intended use of the TWI. Referring now to FIG. 2, a second layer of insulation 28 is placed over the first surface 12 of the semiconductor device 10 and an interior surface of the aperture 20. In one embodiment, the second layer of insulation 28 may include Parylene(TM) polymer, but in other embodiments, the second layer of insulation 28 may include another dielectric material such as those discussed herein with reference to the insulation layer 24 of FIG. 1. Thus, the second layer of insulation 28 provides a non-solderable layer that conformally coats the interior surface of the aperture 20 and, as a result, flux clean, hot solder processes, and venting are not required as with prior art techniques of forming TWI structures. The portion of the second layer of insulation 28 overlying the conductive layer 26 and the first surface 12 of the semiconductor device 10 is removed with a process such as, for example, spacer etching with a reactive ion (dry) etch. This results in the semiconductor device 10 structure shown in FIG. 3. In other embodiments, the portion of the second layer of insulation 28 may be removed using other processes including, but not limited to, chemical mechanical planarization (CMP), mechanical stripping, suitable masking and patterning of a photoresist along with wet or dry etching, or other known process. 
In one embodiment, when the second layer of insulation 28 comprises Parylene(TM) polymer, the second layer of insulation 28 may be masked, patterned and etched to remove the desired portions of the second layer of insulation 28 and expose the upper portions of the conductive layer 26, referred to herein as interconnecting pads 29, as illustrated in FIG. 3. Referring now to FIG. 4, the semiconductor device is depicted as having been inverted about a horizontal line relative to that of FIGS. 1-3 for convenience in describing subsequent process acts and resulting features. The semiconductor device 10 is thinned by removing a portion of the substrate 16 from the second surface 14 of the semiconductor device 10, resulting in a newly defined second surface 14'. The thinning of the semiconductor device 10 exposes the insulation layer 24 originally formed along the surface of the aperture 20 such that it extends partially through the second surface 14' of the semiconductor device 10. The substrate 16 may be thinned using any suitable process which may include, without limitation, an abrasive technique such as CMP or conventional back grinding, the use of a chemical to selectively etch the substrate 16, or suitable masking, patterning and etching of the second surface 14 (FIGS. 1-3) such as, for example, a patterned photoresist followed by a wet or dry etch to remove the substrate 16. Referring now to FIG. 5, a third insulation layer 30 is deposited on the second surface 14' of the semiconductor device 10 and over the exposed portion of the insulation layer 24. In one embodiment, the third insulation layer 30 comprises a polymer such as a Parylene(TM) polymer, a pyralin polymer (also known as PI-2611 polymer, available from DuPont), polybenzoxazole (PBO), benzocyclobutene (BCB), an insulative epoxy, a PDL of LSO, silicon dioxide (SiO2), aluminum oxide (Al2O3), or any one of the materials used to form the insulation layer 24 described herein with reference to FIG. 1. As shown in FIG. 6, a portion of the third insulation layer 30 is removed and a small portion of the insulation layer 24 is again exposed. The process used to remove the portion of the third insulation layer 30 overlying the insulation layer 24 is selected to suit the type of material used as the third insulation layer 30. For instance, in an embodiment where Parylene(TM) polymer or pyralin polymer is used as the third insulation layer 30, the process for removing the third insulation layer 30 may include masking and patterning a photoresist over the third insulation layer 30 and dry etching through the exposed portion of the third insulation layer 30, then stripping the photoresist. In a further embodiment, if an insulative epoxy is used, a resist may be used to frame the epoxy pattern, the epoxy applied, and then the resist removed, leaving second surfaces 14' covered and insulation layer 24 exposed. In another embodiment, when PBO is used as the third insulation layer 30, the PBO may be selectively exposed, photodeveloped and baked to leave the protruding insulation layer 24 exposed. In yet another embodiment, a stereolithography process (such as from Japan Science Technology Agency (JST)) may be used to selectively provide the third insulation layer 30 over the second surface 14' and leave the protruding insulation layer 24 exposed. In a further embodiment, a polymer may be dispersed in a pattern over the third insulation layer 30 using PolyJet(TM) technology from Objet Geometries to leave the insulation layer 24 exposed. 
In another embodiment, when LSO or PDL is used as the third insulation layer 30, CMP may be used to remove the portion of the third insulation layer 30 and expose the protruding insulation layer 24. Of course, other techniques, or various combinations of such techniques, may also be used to selectively remove portions of the third insulation layer 30 as will be appreciated by those of ordinary skill in the art. Referring now to FIG. 7, the protruding portion of insulation layer 24 is removed, thereby exposing a portion of the underlying conductive layer 26, which may be referred to as an interconnecting pad 31. The protruding portion of the insulation layer 24 may be removed using any suitable process depending on the type of material used for the insulation layer 24. For instance, the protruding portion of the insulation layer 24 may be mechanically removed, such as by mechanical abrasion or grinding or by CMP, etched away using an etchant selective for the insulation layer 24, or removed using a suitable photolithography process. With the portion of the conductive layer 26 being exposed, a TWI structure is formed wherein the interconnecting pads 29 and 31 are in electrical communication with one another and, further, are in electrical connection with the bond pad 18 of the semiconductor device 10. As previously discussed, the bond pad 18 may be in electrical communication with electrical circuitry formed in or on the substrate 16. In other embodiments, the resulting TWI structure may not be connected to any circuitry associated with the substrate, but may simply provide electrical interconnection of various external electrical components located on opposing sides of the substrate 16. It is noted that, in some embodiments of the present invention, removal of the third insulation layer 30 and removal of the otherwise protruding portion of insulation layer 24 may be accomplished during the same act. However, depending on the underlying material from which the conductive layer 26 is formed, it may be desirable to remove a portion of the third insulation layer 30 and expose the interconnecting pad 31 of the conductive layer in separate acts. For example, if the conductive layer 26 is formed of nickel, and a CMP process is used to remove the portion of the third insulation layer 30, such a process may not result in a uniform surface on the nickel interconnecting pad 31. Thus, a separate act, as described with respect to FIGS. 6 and 7, may be desired to expose the interconnecting pad 31. Referring now to FIG. 8, in one embodiment the exposed portion (or interconnecting pad 31) of conductive layer 26 may include one or more layers of a conductive material such as a metal cap 32. The metal cap 32 may comprise, for example, nickel (Ni), gold (Au), a combination thereof, or any other conductive material compatible with the conductive layer 26 including those previously described herein with reference to the conductive layer 26. For instance, the metal cap 32 may be deposited with a plating process or other suitable process depending on the type of conductive material used for the metal cap 32. In yet another embodiment, the aperture 20 may be filled with a conductive filler material. For example, the aperture 20 may be filled with tin (Sn), silver (Ag), copper (Cu), one of the materials that may be used for the conductive layer 26 previously described herein, any combination thereof, or other material used to fill vias or used to form a solid conductive via known in the art. 
Other filler materials include a metal powder, a metal alloy powder, a solder (for example, Pb/Sn or Ag/Sn), a flowable conductive photopolymer, a thermoplastic conductive resin, or a resin-covered particulate metal material. Additionally, various processes may be used to fill the aperture 20 with conductive material including, for example, wave solder techniques, vacuum solder reflow techniques, or use of laser sphere techniques available from Pac Tech GmbH of Nauen, Germany to deposit solder balls in the aperture 20. In an embodiment where the aperture 20 is filled with a conductive filler material, a portion 34 of the insulation layer 24 may be removed using a suitable process such that the filler material is in electrical connection with the conductive layer 26. Additionally, the filler material may be configured to be in contact with, for example, the portions of the conductive layer 26 that have been defined herein as the interconnecting pads 29. It is further noted that, when a conductive filler is used, the substrate may be thinned to the point of allowing the filler to be exposed through the second surface 14' of the substrate, although such is not necessary due to the presence of the conductive layer 26 providing an interconnecting pad 31 on the second surface 14'. Table 1 lists data obtained for various through-wafer interconnects (TWIs) produced in accordance with the presently disclosed invention. In one embodiment, the insulation layer 24 of the various TWIs comprises a PDL and the conductive layer 26 includes a PAC of Ta or W, a seed layer of Cu, and a liner of Ni of the indicated cross-sectional thicknesses. [Table 1 omitted.] The methods described herein with regard to FIGS. 1-8 may be used to form TWIs in various semiconductor devices. The TWIs may have conventional sizes, such as diameters of about 15 µm or greater and lengths of 150 µm or more, as well as smaller TWIs applicable to enhanced miniaturization of future semiconductor devices. Of course, smaller diameter TWIs may be formed in thinner substrates and desired aspect ratios of the TWIs may be achieved by selection of an appropriate etch chemistry for the substrate material. Other considerations, such as physical strength of the substrate as it becomes thinner and the depth to which integrated circuitry extends into the substrate material, may be significant factors in the depth and width of a TWI which may be formed according to the invention, rather than the process of the present invention itself. The semiconductor devices may be further configured with a redistribution layer comprising traces and, optionally, associated discrete external conductive elements formed thereon, such as solder bumps, conductive epoxy or a conductor-filled epoxy, which may be disposed or formed on one of the surfaces of the semiconductor device and electrically interconnected with the TWIs by traces, or disposed directly on the conductive layer 26 or metal cap 32 of a TWI, by techniques known to those of ordinary skill in the art. The TWIs produced in accordance with the present invention may also be used for subsequent connection to integrated circuitry, if desired. For instance, a higher level packaging system may include semiconductor devices having TWIs produced with the method of the instant invention. For example, a first semiconductor device and a second semiconductor device on a PC board may be placed in a stacked arrangement utilizing the TWI structure of the present invention. 
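As a quick arithmetic check on the geometry just described, the following is a minimal C sketch computing the aspect ratio implied by the quoted dimensions. The 150 µm length and 15 µm width come from the preceding text; the code itself is only an illustrative calculation, not part of the disclosed process.

#include <stdio.h>

/* Aspect ratio = through-substrate length / cross-sectional width.
 * The 150 um length and 15 um width are the figures quoted above;
 * they are examples, not limits of the disclosed process. */
int main(void)
{
    double length_um = 150.0;
    double width_um = 15.0;
    printf("aspect ratio = %.0f:1\n", length_um / width_um); /* prints 10:1 */
    return 0;
}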
It is further noted that the above-described semiconductor device 10, or other components incorporating one or more TWI structures of the present invention, may be utilized in a computer environment. For example, referring to FIG. 9, a semiconductor device may be incorporated into a computing system 100, which may include, for example, a memory device 102 (which may include any of a variety of random access memory devices, flash memory or other types of memory devices) and a processor device 104, such as a central processing unit or other logic device, operably coupled with the memory device(s) 102. Either the memory device 102 or the processor device 104 may be configured with a TWI structure in accordance with the present invention. The processor device 104 may also be coupled with one or more appropriate input devices 106 (e.g., mouse, keyboard, hard drive, microphone, etc.) and one or more output devices 108 (e.g., monitor, printer, speaker, etc.). Although the foregoing description contains many specifics, these are not to be construed as limiting the scope of the present invention, but merely as providing certain example embodiments. Similarly, other embodiments of the invention may be devised which do not depart from the spirit or scope of the present invention. The scope of the invention is, therefore, indicated and limited only by the appended claims and their legal equivalents, rather than by the foregoing description. All additions, deletions, and modifications to the invention, as disclosed herein, which fall within the meaning and scope of the claims are encompassed by the present invention.
Embodiments of the invention relate to a virtual machine extension (VMX) architecture. More particularly, embodiments of the invention relate to improving the performance of execution of VMREAD and VMWRITE instructions.
1. A device comprising: a virtual machine control structure (VMCS) identification (ID) decoder to decode VMCS ID data into an address of a VMCS field and an offset value, to help identify one of a plurality of micro-operations (uops) associated with at least one virtual machine architecture instruction to be executed. 2. The apparatus of claim 1, further comprising a register file to store said VMCS field and to store a base address of a first micro-operation (uop) associated with said at least one virtual machine instruction. 3. The apparatus of claim 2, further comprising an adder to add said offset value to said base address to generate an address of said first uop. 4. The apparatus of claim 3, wherein said offset value is dependent on a size of said VMCS field and depends on whether said field is read-only, write-only, or inaccessible. 5. The apparatus of claim 1, further comprising a sequencer to store said first uop and other uops associated with said virtual machine architecture instructions. 6. The apparatus of claim 5, wherein said virtual machine architecture instructions are VMWRITE instructions. 7. The apparatus of claim 5, wherein said virtual machine architecture instructions are VMREAD instructions. 8. A method comprising: fetching a first instruction from a memory; executing a first micro-operation (uop) to store a base address of one of a plurality of uops associated with the first instruction; executing a second uop to decode a virtual machine control structure (VMCS) identification (ID) value into an address of a VMCS field and an offset value of one of the plurality of uops; and issuing a third uop from an address indicated by the sum of the base address and the offset value. 9. The method of claim 8, wherein said base address is dependent on whether said first instruction reads data from said VMCS field into a register or memory location. 10. The method of claim 8, wherein said base address is dependent on whether said first instruction writes data from a register or memory location to said VMCS field. 11. The method of claim 8, wherein said offset value is dependent on a size of said VMCS field. 12. The method of claim 11, wherein said offset value is further dependent on whether said VMCS field is read-only, write-only, or can be neither read nor written. 13. The method of claim 8, wherein said first instruction is a virtual machine architecture instruction. 14. The method of claim 13, wherein said first instruction is an instruction to perform a read operation. 15. The method of claim 13, wherein said first instruction is an instruction to perform a write operation. 16. A system comprising: a storage unit including a virtual machine manager (VMM), the manager including a virtual machine read (VMREAD) instruction; and a processor including a virtual machine control structure (VMCS) identification (ID) decoder that decodes a micro-operation (uop) associated with the VMREAD instruction. 17. The system of claim 16, wherein said VMM further comprises a VMWRITE instruction, a uop of which is to be decoded by said VMCS ID decoder. 18. The system of claim 17, wherein said VMCS ID decoder generates a VMCS field address to be accessed by said VMWRITE instruction or said VMREAD instruction. 19. The system of claim 18, wherein said VMCS field is associated with a corresponding offset address of an access uop that is to read from said field or write to said field. 20. The system of claim 19, wherein said VMCS ID decoder generates a base address of said access uop. 21. The system of claim 20, wherein the address of the access uop is equal to the 
sum of the offset address and the base address. 22. The system of claim 21, wherein said base address and said VMCS field address are generated by a VMCS ID decoder that decodes a VMCS ID uop. 23. The system of claim 22, wherein said VMCS ID decoder is logic comprising complementary metal oxide semiconductor (CMOS) transistors. 24. A device comprising: a decoding device to decode VMCS ID data into an address of a VMCS field and an offset value, to help identify one of a plurality of uops associated with at least one virtual machine architecture instruction to be executed. 25. The apparatus of claim 24, further comprising storage means for storing the VMCS field and storing a base address of a first micro-operation (uop) associated with the at least one virtual machine instruction. 26. The apparatus of claim 25, further comprising adder means for adding said offset value to said base address to generate an address of said first uop. 27. The apparatus of claim 26, wherein the offset value is dependent on a size of the VMCS field and depends on whether the field is read-only, write-only, or inaccessible. 28. The apparatus of claim 25, further comprising sequencing means for storing said first uop and other uops associated with said virtual machine architecture instructions. 29. The apparatus of claim 28, wherein said virtual machine architecture instructions are VMWRITE instructions. 30. The apparatus of claim 28, wherein said virtual machine architecture instructions are VMREAD instructions.
Virtual machine control structure decoder TECHNICAL FIELD Embodiments of the invention relate to virtual machine extension (VMX) architectures. More specifically, embodiments of the present invention relate to methods and apparatus for decoding VMX instructions using a virtual machine control structure (VMCS) identification decoder. BACKGROUND The Virtual Machine Extension (VMX) architecture allows multiple software programs and operating systems to use the same microprocessor logic ("hardware") by allocating processor resources to various software applications and operating systems at different times. The VMX architecture typically uses a virtual machine monitor (VMM) program that interfaces one or more software programs, such as virtual machines (VMs), to a single microprocessor or set of processing elements. The guest software running on each VM can include a guest operating system and various guest software applications. In addition, applications and operating systems running on the VMM can be collectively referred to as VMs or guests. Typically, each VMCS entry is identified by a unique identifier rather than by an architecturally defined memory address. In at least one prior art example, the VMCS identification (ID) is a unique 32-bit identifier. Two commands supported in the VMX architecture are VMREAD and VMWRITE, which read and write data to VMCS entries, respectively. These instructions, when executed, can use the VMCS ID to locate the appropriate VMCS entry to be read or written. However, the VMREAD and VMWRITE instructions are generally required to read and write data of varying sizes, and accommodating these sizes when VMREAD and VMWRITE accesses are generated can cost processing time and resources. Further processing time may be lost if VMREAD or VMWRITE attempts to access data that is inaccessible. VMCS data may be inaccessible to the VMREAD and VMWRITE instructions for a number of reasons; for example, a VMCS entry may contain write-only or read-only data fields, or the data may simply not be accessible by these instructions. Processing losses incurred by VMREAD or VMWRITE in the VMX architecture can compromise the performance of the computer system, and prior art VMCS ID decoding systems have not effectively solved this problem. SUMMARY OF THE INVENTION In order to solve the above problems, the present invention provides a virtual machine extension (VMX) architecture. More specifically, embodiments of the present invention relate to improving the performance of execution of VMREAD and VMWRITE instructions. 
According to an aspect of the present invention, an apparatus is provided comprising: a virtual machine control structure (VMCS) identification (ID) decoder that decodes VMCS ID data into an address of a VMCS field and an offset value, to help identify one of a plurality of micro-operations (uops) associated with at least one virtual machine architecture instruction to be executed. According to another aspect of the present invention, a method is provided comprising: fetching a first instruction from a memory; executing a first micro-operation (uop) to store a base address of one of a plurality of uops associated with the first instruction; executing a second uop to decode a virtual machine control structure (VMCS) identification (ID) value into an address of a VMCS field and an offset value of one of the plurality of uops; and issuing a third uop from an address indicated by the sum of the base address and the offset value. According to still another aspect of the present invention, a system is provided, comprising: a storage unit including a virtual machine manager (VMM), the manager including a virtual machine read (VMREAD) instruction; and a processor including a virtual machine control structure (VMCS) identification (ID) decoder that decodes the micro-operations (uops) associated with the VMREAD instruction. According to still another aspect of the present invention, an apparatus is provided comprising: decoding means for decoding VMCS ID data into an address of a VMCS field and an offset value, to help identify one of a plurality of micro-operations (uops) associated with at least one virtual machine architecture instruction to be executed. BRIEF DESCRIPTION OF THE DRAWINGS Embodiments of the invention are illustrated by way of example and not limitation in the drawings. FIG. 1 illustrates a virtual machine environment in which an embodiment of the present invention may be used. FIG. 2 illustrates a computer system in which at least one embodiment of the present invention may be implemented. FIG. 3 illustrates a point-to-point (PtP) computer system in which one embodiment of the present invention may be implemented. FIG. 4 is a flowchart illustrating the operation of a VMREAD or VMWRITE instruction in accordance with one embodiment of the present invention. FIG. 5 illustrates logic that decodes a virtual machine control structure (VMCS) identification (ID) field in accordance with one embodiment of the present invention. DETAILED DESCRIPTION Embodiments of the invention relate to virtual machine extension (VMX) architectures. More specifically, embodiments of the present invention relate to improving the performance of execution of VMREAD and VMWRITE instructions. To improve processing performance in a VMX architecture, embodiments of the present invention increase the speed of decoding virtual machine control structure (VMCS) identification (ID) fields. In one embodiment of the invention, logic that may be located in a processing element (e.g., a microprocessor) is used to increase the speed with which a VMREAD or VMWRITE instruction can access a field within the VMCS, by utilizing information provided in the VMCS ID corresponding to that field. Figure 1 illustrates the structure ("virtual machine environment") for interfacing guest software to a microprocessor. Specifically, FIG. 1 illustrates a virtual machine manager (VMM) 101 that interfaces two virtual machines (VMs) 105 ("guest software") to the microprocessor 110. Software running within each VM can include guest operating systems as well as various software applications. 
In order to interface each VM with processor resources (e.g., registers, memory, and input/output ("I/O") resources), status and control information can be modified and otherwise tracked through a set of fields within the virtual machine control structure (VMCS) 115, which fields may reside in the memory 120. More specifically, control structures such as the VMCS are typically used to pass control of and access to processor resources between the VMM and the VM guests. At least one embodiment 125 of the present invention may reside in a processor, although other embodiments may reside in other elements of the virtual machine environment. FIG. 2 illustrates a computer system in which at least one embodiment of the present invention may be utilized. The processor 205 accesses data from the primary (L1) cache memory 210 and the main memory 215. In other embodiments of the invention, the cache memory may be a secondary (L2) cache or other memory in a computer system memory hierarchy. Illustrated within the processor of FIG. 2 is an embodiment 206 of the present invention. However, other embodiments of the invention may be implemented in other devices within the system, such as separate bus agents, or distributed throughout the system in hardware, software, or some combination thereof. Main memory may be implemented with various memory sources, such as dynamic random access memory (DRAM), a hard disk drive (HDD) 220, or a memory source located remotely from the computer system via network interface 230, and may include various storage devices and technologies. The cache memory can be located within the processor or located proximate to the processor, such as on the local bus 207 of the processor. In addition, the cache memory may contain relatively fast memory cells, such as 6-transistor (6T) cells, or other memory cells having substantially equal or faster access speeds. The computer system of Figure 2 can be a point-to-point (PtP) network of bus agents (e.g., microprocessors) that communicate over the PtP network via bus signals dedicated to each agent. At least one embodiment 206 of the present invention is located within, or at least associated with, each bus agent, thereby facilitating storage operations between the bus agents in an expedient manner. Figure 3 illustrates a computer system arranged in a point-to-point (PtP) configuration. In particular, Figure 3 illustrates a system in which processors, memory, and input/output devices are interconnected by a plurality of point-to-point interfaces. The system of Figure 3 may also include several processors, but only two processors 370, 380 are shown for clarity. Processors 370, 380 can each include a local memory controller hub (MCH) 372, 382 coupled to memories 32, 34. Processors 370, 380 can exchange data via point-to-point interface 350 using point-to-point interface circuits 378, 388. Processors 370, 380 can each exchange data with chipset 390 via respective point-to-point interfaces 352, 354 using point-to-point interface circuits 376, 394, 386, 398. Chipset 390 can also exchange data with high performance graphics circuitry 338 via high performance graphics interface 392. At least one embodiment of the present invention can be located within the memory controller hub or CSI interface 372, 382 of the processors. However, other embodiments of the invention may exist in other circuits, logic units or devices within the system of FIG. 3. 
Moreover, other embodiments of the invention may be distributed within several of the circuits, logic units or devices illustrated in FIG. 3. In one embodiment of the invention, each field within the VMCS can be identified by a portion of a 32-bit identifier. In addition, some fields of the VMCS are inaccessible for various architectural reasons, some are read-only, and others are write-only. Furthermore, some of the fields contain 16 bits of information, while other fields contain 32 or 64 bits of information. In order to accommodate different VMCS field attributes, such as size and accessibility, embodiments of the present invention are general enough to provide improved VMCS ID decoding performance for VMREAD or VMWRITE instructions that access data sizes such as 16-bit, 32-bit, and 64-bit fields. Moreover, embodiments of the present invention can improve the performance of VMCS ID decoding for fields that are inaccessible to instructions such as VMREAD and VMWRITE, thereby allowing the system to handle these situations more efficiently. In one embodiment of the invention, VMCS ID decoding is improved by utilizing information within the VMCS ID that identifies the location of the VMCS field addressed by the VMREAD or VMWRITE instruction. Moreover, in one embodiment of the invention, the VMCS ID is decoded into an offset used to locate the micro-operation (uop) or uops within the VMREAD or VMWRITE instruction flow that are responsible for performing the read/write or, for example, for signaling an invalid access. FIG. 4 is a flow chart illustrating operations associated with one embodiment of the present invention. After the VMREAD or VMWRITE instruction is fetched at operation 401 and the instruction is decoded into separate uops, a uop is executed at operation 405 that stores the location ("X") of the uop to be executed corresponding to the correct source/destination of the VMREAD/VMWRITE instruction. For example, if the destination of the VMREAD instruction is a register, then X is specified to correspond to the location of the uop that reads the VMCS data into the register. However, if the destination of the VMREAD instruction is a memory location, X is designated to correspond to the location of the uop that reads the VMCS data into memory. Similarly, if the source data for the VMWRITE instruction comes from a register, then X is specified to correspond to the location of the uop that writes the VMCS data from the register. However, if the source data for the VMWRITE instruction is from memory, then X is specified to correspond to the location of the uop that writes the VMCS data from memory. At operation 410, a uop is executed such that the VMCS ID for the particular VMCS field is decoded. In one embodiment of the invention, the decoding of the VMCS ID results in determining the address of the VMCS field ("Y") and the offset value ("Z") that can be added to X to determine the location of the uop(s) corresponding to the read or write operation of the VMREAD or VMWRITE instruction. After the sum of X and Z has been calculated, a uop starting at the position corresponding to this sum in the uop sequencer can be executed at operation 415. In addition, at least one of the uops that performs the access to the target VMCS field can use the Y value to obtain the address of the accessed field. In order to implement at least one embodiment of the present invention, such as the embodiment illustrated by the flowchart of FIG. 4, logic may be utilized for at least a portion of the described embodiments. 
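To make the X/Y/Z bookkeeping of FIG. 4 concrete, the following is a minimal C sketch modeling the decode flow under stated assumptions: the field IDs, the table contents, and the Z categories are invented for illustration (the description says only that five Z values exist and that an undefined Y marks an inaccessible field), and an actual embodiment would realize this as hardware logic rather than software.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* The five possible Z values; the categories here are illustrative
 * guesses, not the actual encoding. */
enum { Z_RD16, Z_RD32, Z_RD64, Z_NOACCESS, Z_INVALID };

struct vmcs_entry { uint32_t id; uint32_t addr_y; int z; };

/* Toy table standing in for the register-file contents; the IDs and
 * addresses are made up for the example. */
static const struct vmcs_entry table[] = {
    { 0x0800u, 0x0100u, Z_RD16 },
    { 0x4002u, 0x0140u, Z_RD32 },
    { 0x2802u, 0x0180u, Z_RD64 },
};

/* Decode a VMCS ID into the field address Y and the offset Z; an
 * unknown ID leaves Y undefined and selects the invalid-ID uop. */
static int decode_vmcs_id(uint32_t id, uint32_t *addr_y)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (table[i].id == id) {
            *addr_y = table[i].addr_y;
            return table[i].z;
        }
    }
    return Z_INVALID; /* Y undefined: field inaccessible */
}

int main(void)
{
    uint32_t base_x = 0x2000u; /* X: stored by the first uop according to
                                  the VMREAD/VMWRITE source/destination */
    uint32_t y = 0;
    int z = decode_vmcs_id(0x4002u, &y);
    /* The uop at X + Z performs the access (or signals an invalid ID),
     * and uses Y to address the target VMCS field. */
    printf("next uop at 0x%x, field address Y = 0x%x\n",
           (unsigned)(base_x + (uint32_t)z), (unsigned)y);
    return 0;
}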
In other embodiments, some or all of the operations illustrated in the flowchart of FIG. 4 may be performed by instructions (software) stored on a machine readable medium that, when executed by a processor, cause the processor to perform a method including the operations shown in FIG. 4. In other embodiments, software and logic may be used together to perform the operations illustrated in the flowchart of FIG. 4. Figure 5 illustrates processing logic, including a VMCS ID decoder, in which the operations illustrated in the flow chart of Figure 4 may be performed, in accordance with one embodiment of the present invention. In particular, the decoder of FIG. 5 illustrates a sequencer 501 for storing and organizing the uops of instructions (e.g., VMREAD and VMWRITE) that are retrieved, or "fetched," by the instruction fetch interface 502. For example, the uop(s) mentioned at operation 405 of FIG. 4 are issued from the sequencer to the execution unit 505, which includes a VMCS ID decoder 510 for generating the offset Z (referred to at operation 410 of FIG. 4). Z may be added by adder 515 to X (referred to at operation 405 of FIG. 4) to generate a pointer 517 to the next uop in the sequencer to be sent to the execution unit. The VMCS ID is decoded into the address Y of the target field of the VMREAD/VMWRITE instruction in the register file 520. In at least one embodiment, the uop by which the target field is accessed uses the address Y to identify the target field. Moreover, in one embodiment, both Z and X are stored at locations corresponding to Y in the register file, and are then summed to generate the starting point within the sequencer of the uops that access the desired VMCS field. After X and Z are added in the adder to indicate the correct next uop to be executed in the sequencer, the correct read/write uop is executed, according to whether the instruction is VMREAD or VMWRITE, and the desired data is read or written. All uops then retire through retirement logic 525. In one embodiment of the invention, there are only five possible Z values (as shown in Figure 4), and if the VMCS ID does not correspond to one of these valid fields, then Y is undefined. If Y is undefined, meaning that the field is inaccessible, then Z will represent the offset of the uop that, when executed, will signal the invalid VMCS ID. Conversely, if Y contains a defined value indicating that the field is accessible, then Z will represent the offset of the uop that will perform the VMCS access (read or write operation). When Z is added to X, the sum will represent a pointer to the correct uop that will perform the desired operation. Any or all portions of the embodiments of the invention illustrated herein may be implemented in a variety of ways including, but not limited to, logic using complementary metal oxide semiconductor (CMOS) circuit devices (hardware), instructions stored in a storage medium (software) that, when executed by a machine such as a microprocessor, cause the microprocessor to perform the operations described herein, or a combination of hardware and software. A "microprocessor" or "processor" as used herein is intended to mean any machine or device that performs operations as a result of receiving one or more input signals or instructions, including CMOS devices. Although the invention has been described with reference to the illustrated embodiments, the description is not intended to be construed in a limiting manner. 
Various modifications of the illustrative embodiments, as well as other embodiments, will be apparent to persons skilled in the art to which the invention pertains.
Structures and methods of including processor capabilities in an existing PLD architecture with minimal disruption to the existing general interconnect structure. In a PLD including a column of block RAM (BRAM) blocks, the BRAM blocks are modified to create specialized logic blocks including a RAM, a processor, and a dedicated interface coupled between the RAM, the processor, and the general interconnect structure of the PLD. The additional area is obtained by increasing the width of the column of BRAM blocks. Because the interconnect structure remains virtually unchanged, the interconnections between the specialized logic blocks and the adjacent tiles are already in place, and the modifications do not affect the PLD routing software. In some embodiments, the processor can be optionally disabled, becoming transparent to the user. Other embodiments provide methods of modifying a PLD to include the structures and provide the capabilities described above.
1. A programmable logic device (PLD), comprising:a plurality of programmable logic blocks arranged in an array of rows and columns; a general interconnect structure programmably interconnecting the programmable logic blocks; and a column of specialized logic blocks disposed between two columns of programmable logic blocks within the array, each specialized logic block comprising: a first random access memory (RAM); a processor; and a dedicated interface coupled between the first RAM and the processor and further programmably coupled to the general interconnect structure. 2. The PLD of claim 1, wherein for each specialized logic block:the dedicated interface comprises an enable terminal providing an enable signal; when the enable signal is at a first logic level, the dedicated interface couples the RAM to the general interconnect structure while bypassing the processor; and when the enable signal is at a second logic level, the dedicated interface couples the RAM to the processor, and further couples the processor to the general interconnect structure. 3. The PLD of claim 2, wherein each specialized logic block further comprises a configuration memory cell of the PLD coupled to the enable terminal of the dedicated interface.4. The PLD of claim 1, wherein for each specialized logic block:the first RAM comprises stored data for the processor; the specialized logic block further comprises a second RAM storing instructions for the processor; and the dedicated interface comprises a data interface coupled to the first RAM and an instruction interface coupled to the second RAM. 5. The PLD of claim 1, wherein:each of the programmable logic blocks is included in a tile having a first height; and each specialized logic block has a second height, the second height being an integral multiple of the first height. 6. The PLD of claim 1, wherein for each specialized logic block the dedicated interface and the processor operate at the same clock frequency.7. The PLD of claim 1, wherein for each specialized logic block the processor is a 16-bit processor.8. The PLD of claim 1, wherein for each specialized logic block the processor is a 32-bit processor.9. The PLD of claim 1, wherein the PLD is a Field Programmable Gate Array (FPGA).10. A system, comprising:a system bus; at least one peripheral device coupled to the system bus; and a programmable logic device (PLD), comprising: a plurality of programmable logic blocks arranged in an array of rows and columns; a general interconnect structure programmably interconnecting the programmable logic blocks; and a column of specialized logic blocks disposed between two columns of programmable logic blocks within the array, each specialized logic block comprising: a first random access memory (RAM); a processor; and a dedicated interface coupled between the first RAM and the processor and further programmably coupled to the general interconnect structure. 11. The system of claim 10, wherein the system bus is a PCI bus.12. The system of claim 10, wherein for each specialized logic block the processor is a 16-bit processor.13. The system of claim 12, wherein the system bus has a width of greater than 16 bits.14. The system of claim 10, wherein for each specialized logic block the processor is a 32-bit processor.15. 
The system of claim 10, wherein for each specialized logic block:the dedicated interface comprises an enable terminal providing an enable signal; when the enable signal is at a first logic level, the dedicated interface couples the RAM to the general interconnect structure while bypassing the processor; and when the enable signal is at a second logic level, the dedicated interface couples the RAM to the processor, and further couples the processor to the general interconnect structure. 16. The system of claim 15, wherein each specialized logic block further comprises a configuration memory cell of the PLD coupled to the enable terminal of the dedicated interface.17. The system of claim 10, wherein for each specialized logic block:the first RAM comprises stored data for the processor; the specialized logic block further comprises a second RAM storing instructions for the processor; and the dedicated interface comprises a data interface coupled to the first RAM and an instruction interface coupled to the second RAM. 18. The system of claim 10, wherein:each of the programmable logic blocks is included in a tile having a first height; and each specialized logic block has a second height, the second height being an integral multiple of the first height. 19. The system of claim 10, wherein for each specialized logic block the dedicated interface and the processor operate at the same clock frequency.20. The system of claim 10, wherein the PLD is a Field Programmable Gate Array (FPGA).
FIELD OF THE INVENTION
The invention relates to programmable logic devices (PLDs) including specialized logic blocks. More particularly, the invention relates to structures and methods for including processor capabilities in RAM blocks in an existing PLD architecture with minimal disruption to the existing general interconnect structure.

BACKGROUND OF THE INVENTION
Programmable logic devices (PLDs) are a well-known type of digital integrated circuit that can be programmed to perform specified logic functions. One type of PLD, the field programmable gate array (FPGA), typically includes an array of configurable logic blocks (CLBs) and programmable input/output blocks (IOBs). The CLBs and IOBs are interconnected by a programmable general interconnect structure.

The interconnect structure and logic blocks are typically programmed by loading a stream of configuration data (bitstream) into internal configuration memory cells that define how the logic blocks and interconnect are configured. The configuration data can be read from memory (e.g., an external PROM) or written into the FPGA by an external device. The collective states of the individual memory cells then determine the function of the FPGA.

Some FPGAs include blocks of dedicated logic in the CLB array. "Dedicated logic" is hard-coded logic designed to perform a specific function, although the dedicated logic can be programmable to modify the function. For example, the Xilinx Virtex(R)-II FPGA includes blocks of Random Access Memory (BRAM), as shown in FIG. 1. The Xilinx Virtex-II FPGA is described in detail in pages 33-75 of the "Virtex-II Platform FPGA Handbook", published December, 2000, available from Xilinx, Inc., 2100 Logic Drive, San Jose, Calif. 95124, which pages are incorporated herein by reference.

As shown in FIG. 1, in the Virtex-II FPGA the array of logic blocks and programmable general interconnect is designed as an array of "tiles". The tile approach both facilitates the physical implementation of the programmable interconnect structure and makes feasible the routing software that implements a user design in the FPGA. One tile can include, for example, a CLB 104, which includes a block of logic (configurable logic element or CLE 101a). Each CLB includes an interconnect area 105 in addition to the CLE 101, and in fact interconnect area 105 typically consumes a much larger percentage of the available area than CLE 101.

As is well known in the art, interconnect area 105 includes a hierarchy of interconnect lines and several switch matrices for programmably coupling the interconnect lines to each other and to input and output ports of the CLE. Thus, interconnect area 105 constitutes a portion of the programmable general interconnect structure of the FPGA.

In the Virtex-II FPGA, each BRAM block 106 consumes more than one tile in the CLB array, as shown in FIG. 1. The RAM logic 103 is surrounded by programmable interconnect 107, e.g., similar to interconnect 105 in CLB 104. As in CLB 104, the interconnect area of the BRAM block consumes a significant amount of the surface area available for the block.

The tiles devoted to implementing specialized functions are often arranged in columns, as shown in FIG. 1, to simplify the routing of user designs. Another advantage of placing the specialized blocks in separate columns is that the specialized tiles can be of a different width from the CLB tiles. 
When a columnar arrangement is used, the height of each specialized block is the same as, or a multiple of, the height of one CLB tile. In the Virtex-II FPGA, more than one column of CLBs typically separates each column of BRAM blocks, as shown on page 60 of the Virtex-II Platform FPGA Handbook, referenced above. In FIG. 1, only one column of CLBs is shown between each column of BRAM blocks, to clarify the figure.

More advanced FPGAs can include more complicated logic blocks in the CLB array. For example, the Xilinx Virtex-II Pro(TM) FPGA includes embedded processor blocks in addition to the blocks available in the Virtex-II FPGA. The Xilinx Virtex-II Pro FPGA is described in detail in pages 19-71 of the "Virtex-II Pro Platform FPGA Handbook", published Oct. 14, 2002 and available from Xilinx, Inc., which pages are incorporated herein by reference.

FIG. 2 shows how the processor blocks are embedded in the Virtex-II Pro CLB array. In essence, the BRAM blocks illustrated in FIG. 1 are spread apart vertically to provide room for additional tile rows that include the processor blocks. (In the Virtex-II Pro CLB arrays, a processor block typically covers many more tiles than are shown in FIG. 2, both vertically and horizontally. The number of tiles has been reduced in the figure, for clarity.) Each processor block includes a processor (uP 211), two on-chip memory control blocks (OCMs 212a, 212b), and programmable interconnect 213. The processor and the OCMs are tightly coupled together, i.e., they are interconnected by a dedicated interface rather than being coupled together using the programmable interconnect structure of the FPGA. Additionally, the OCMs provide dedicated interfaces between the processor 211 and the adjacent BRAM blocks 103a-103d. The OCMs serve two main purposes. Firstly, and most obviously, the OCMs function to adapt the defined interface required by the processor 211 to the needs of the BRAM blocks. For example, the OCMs perform address decoding functions. Additionally, however, the interface between the processor 211 and RAM logic 103 might not be able to function at the same maximum frequency as the processor itself. By operating the OCM blocks at a slower clock frequency than the processor, the processor is freed from having to accommodate this external frequency limitation.

However, there are many applications where it is desirable to operate an electronic system at the highest possible clock frequency. Many of these systems can also benefit from the advantages of reprogrammability. Therefore, it is desirable to provide programmable logic devices (PLDs) incorporating processor functionality wherein the memory access speed of the embedded processors is not limited by timing delays built into memory control blocks.

Further, there are many applications that can benefit from the availability of processor functionality in a PLD, but do not require the computing power provided, for example, by the powerful processors included in the Virtex-II Pro FPGA. Many PLD users would benefit from the addition of processor capability, but prefer a lower cost to the larger die size (and the consequent increase in price) that including processor capability typically entails. Further, some PLD users do not need and would not use the processor capability. It is desirable to provide a PLD that can meet the needs of each of these users. 
Therefore, it is desirable to provide processor capability in a PLD while minimizing the increase in die size caused by the modification. It is further desirable to minimize the disruption to the fabric of the PLD. When the processor is not used, it is desirable to have the capability of making the presence of the processor transparent to the user. Further, it is desirable to minimize the effect on the PLD routing software of modifying the PLD to include processor capability.

SUMMARY OF THE INVENTION
The invention provides structures and methods of including processor capabilities in an existing PLD architecture with minimal disruption to the existing general interconnect structure. In a PLD including a column of block RAM (BRAM) blocks, the BRAM blocks are modified to create specialized logic blocks including a RAM, a processor, and a dedicated interface coupled between the RAM, the processor, and the general interconnect structure of the PLD. The interconnect structure uses the majority of the die area within the BRAM block, and the interconnect structure is retained virtually unchanged from the BRAM block. Thus, the addition of the processor and dedicated interface causes the area of the block to increase only slightly. This additional area is obtained by increasing the width of the column of BRAM blocks, e.g., by a small fraction of one block width.

Because the interconnect structure remains virtually unchanged, the interconnections between the specialized logic blocks and the adjacent tiles are already in place, and the modifications do not affect the PLD routing software.

In some embodiments, the processor can be optionally disabled (e.g., by setting a bit in a configuration memory cell), in which case the processor becomes transparent to the user. In other embodiments, the enable signal is a user-controlled signal, e.g., coupled to the general interconnect structure.

In some embodiments, the specialized logic blocks replace two or more BRAM blocks. In one such embodiment, a specialized logic block includes one RAM used for processor data and another RAM used for processor instructions. Thus, this specialized logic block replaces two BRAM blocks that are vertically adjacent in the column of BRAM blocks.

Because the processor is laid out in close proximity to the RAM, the interface between the two circuits can be relatively fast. Therefore, in some embodiments the dedicated interface and the processor operate at the same clock frequency.

Other embodiments of the invention provide methods of modifying a PLD including columns of BRAM blocks and columns of programmable logic blocks programmably interconnected by a general interconnect structure, to include the structures and provide the capabilities described above.

According to one embodiment, a programmable logic device (PLD) includes a plurality of programmable logic blocks arranged in an array of rows and columns, a general interconnect structure programmably interconnecting the programmable logic blocks, and a column of specialized logic blocks disposed between two columns of programmable logic blocks within the array. 
Each specialized logic block includes a first random access memory (RAM), a processor, and a dedicated interface coupled between the first RAM and the processor and further programmably coupled to the general interconnect structure.

According to another embodiment, a system includes a system bus, at least one peripheral device coupled to the system bus, and a PLD substantially as described above.

Another aspect of the invention provides a method of modifying a PLD including columns of BRAM blocks and columns of programmable logic blocks programmably interconnected by a general interconnect structure. Each BRAM block includes a RAM and a plurality of terminals coupling the RAM to the general interconnect structure. The method includes, for each BRAM block in a first column of BRAM blocks, widening the BRAM block to create a specialized logic block wider than but having the same height as the BRAM block. The terminals of the specialized logic block have corresponding locations to the terminals of the BRAM block. The specialized logic block is then modified to include a processor and a dedicated interface coupled to the RAM from the BRAM block and to the processor, and further coupled to the general interconnect structure via the terminals of the specialized logic block.

According to another embodiment, another method is provided of modifying a PLD including columns of BRAM blocks and columns of programmable logic blocks programmably interconnected by a general interconnect structure. Each BRAM block includes a RAM and a plurality of terminals coupling the RAM to the general interconnect structure. The method includes selecting a first group of adjacent BRAM blocks within a column of BRAM blocks to create a specialized logic block, and widening the specialized logic block to be wider than but to have the same height as the first group of adjacent BRAM blocks. The terminals of the specialized logic block have corresponding locations to the terminals of the first group of BRAM blocks. The specialized logic block is then modified to include a processor and a dedicated interface coupled to the RAMs from each of the first group of BRAM blocks and to the processor, and further coupled to the general interconnect structure via the terminals of the specialized logic block.

According to another embodiment, a method is provided of designing a new PLD based on an existing PLD architecture. The existing PLD architecture includes columns of BRAM blocks included in an array comprising rows and columns of programmable logic blocks programmably interconnected by a general interconnect structure. The method includes removing a column of the BRAM blocks, widening a space between two columns of programmable logic blocks adjacent to the removed column of BRAM blocks to create an enlarged area, creating a specialized logic block, and inserting in the enlarged area a column of the specialized logic blocks. Each specialized logic block includes a random access memory (RAM), a processor, a plurality of terminals, and a dedicated interface coupled between the RAM and the processor and further coupled to the general interconnect structure of the new PLD via the plurality of terminals. Each specialized logic block has a height equal to a height of a first group of removed BRAM blocks. 
For each specialized logic block, the plurality of terminals has corresponding locations to corresponding terminals in the first group of removed BRAM blocks.

BRIEF DESCRIPTION OF THE DRAWINGS
The present invention is illustrated by way of example, and not by way of limitation, in the following figures.
FIG. 1 is a block diagram of an FPGA similar to the Virtex-II FPGA and incorporating specialized BRAM blocks.
FIG. 2 is a block diagram of an FPGA similar to the Virtex-II Pro FPGA and incorporating specialized processor blocks and BRAM blocks.
FIG. 3 is a block diagram of a first FPGA incorporating specialized blocks that offer both processor and RAM capabilities, according to one embodiment of the invention.
FIG. 4 is a block diagram of a second FPGA incorporating specialized blocks that offer both processor and RAM capabilities, according to another embodiment of the invention.
FIG. 5 is a block diagram of a specialized logic block for a PLD that offers both processor and RAM capabilities, according to one embodiment of the invention.
FIG. 6 is a schematic diagram of a specialized logic block offering both processor and RAM capabilities, according to one embodiment of the invention.
FIG. 7 is a block diagram of a system including a PLD incorporating specialized logic blocks according to one embodiment of the invention, in which the specialized logic blocks can be used for parallel processing.
FIG. 8 shows the steps of a first method of modifying a PLD, according to one embodiment of the invention.
FIG. 9 shows the steps of a second method of modifying a PLD, according to another embodiment of the invention.
FIG. 10 shows the steps of a method of designing a new PLD based on an existing PLD architecture, according to another embodiment of the invention.

DETAILED DESCRIPTION OF THE DRAWINGS
The present invention is applicable to a variety of programmable logic devices (PLDs). The present invention has been found to be particularly applicable and beneficial for field programmable gate arrays (FPGAs). While the present invention is not so limited, an appreciation of the present invention is presented by way of specific examples, in this instance by illustrating FPGAs comprising tiled arrays of logic blocks.

As described above, FIG. 1 illustrates a known FPGA including BRAM blocks, while FIG. 2 illustrates a known FPGA that includes processor blocks in addition to the BRAM blocks.

FIG. 3 shows a different approach to incorporating processor and RAM functions into an FPGA architecture, according to one aspect of the invention. The FPGA of FIG. 3 includes specialized logic blocks 321 offering both processor and RAM capabilities. The specialized logic blocks are substituted for BRAM blocks present in an existing PLD architecture. The terminals coupling each BRAM block to the surrounding logic blocks are retained, as is the general interconnect structure in the BRAM block. Thus, the routing software for the existing PLD architecture can still be used for the new PLD with little or no modification. Further, the processor and RAM portions of the specialized blocks are tightly coupled, lying closely together and being interconnected by dedicated interface logic. Thus, in some embodiments of the invention the dedicated interface allows memory accesses by the processor to take place at the same clock frequency as the internal processor functions.

Note that the embodiment of FIG. 3 is superficially similar to the FPGA shown in FIG. 1. However, each BRAM block in FIG. 
1 has been replaced by a specialized logic block offering both processor and RAM capabilities. FIG. 4 illustrates another embodiment of the invention, where the RAMs are paired together, with a single processor provided for each pair of RAMs. Thus, in the embodiment of FIG. 4 two BRAM blocks have been replaced by a specialized logic block including two RAM blocks, one processor, and a dedicated interface. In other embodiments, larger numbers of RAMs are grouped together with a single tightly-coupled processor.

In addition to higher operating frequencies, another advantage of the approach illustrated in FIGS. 3 and 4 is area efficiency. For example, processor/RAM blocks 421 in FIG. 4 each include two of the RAM blocks 103 from FIG. 1. Each of blocks 421 is the same height as the corresponding two BRAM blocks in FIG. 1, and only slightly wider. Note that the addition of extra rows of tiles to accommodate the processor, as pictured in FIG. 2, is unnecessary.

In one embodiment, it was found that the addition of a 16-bit processor and dedicated interface to two BRAM blocks required an increase of less than ten percent in the width of the block, with no change in height. Because the BRAM blocks are only a portion of the total chip area, the overall increase in the size of the PLD was only about one percent. This area efficiency is primarily a result of the fact that the largest portion of the area in each block is consumed by programmable routing. Adding a processor to a set of one or two (or more) BRAM blocks does not add to this programmable routing. In fact, the programmable routing already provided in the BRAM block is preferably retained with little or no alteration, thereby minimizing the effect of the substitution on the PLD routing software.

FIG. 5 is a block diagram of a first specialized logic block according to one embodiment of the invention. The specialized logic block of FIG. 5 includes a processor block 531, which can be any desired processor. For example, in one embodiment, processor 531 is a hard-coded 16-bit version of the MicroBlaze(TM) processor from Xilinx, Inc. In another embodiment, processor 531 is a hard-coded 32-bit version of the MicroBlaze processor. In yet another embodiment, processor 531 is an implementation of the IBM(R) PowerPC(R) 405D5 processor. ("IBM" and "PowerPC" are registered trademarks of International Business Machines Corporation.) In other embodiments, other processors and/or other bit widths are used.

The specialized logic block of FIG. 5 also includes two RAMs 532, 533. These two RAMs can be, for example, the same as RAMs 103a and 103b, or RAMs 103c and 103d, in FIG. 1. In the pictured embodiment, RAM 532 is used to store data for the processor and RAM 533 is used to store instructions for the processor. In the pictured embodiment, the dedicated interface is implemented as a pair of interfaces, a data interface 534 coupled between processor 531 and data RAM 532, and an instruction interface 535 coupled between processor 531 and instruction RAM 533. Each interface 534, 535 is also coupled to the general interconnect structure of the FPGA.

In the embodiment of FIG. 5, RAMs 532, 533 are both dual-port RAMs. For example, they can be the same as the RAMs used in implementing block RAMs in the Virtex-II and Virtex-II Pro FPGAs. When a dual-port RAM is used, one port is coupled directly to the corresponding interface 534, 535 ("coupled directly", i.e., without passing through the general interconnect structure of the FPGA). 
The other port of each RAM is coupled to the general interconnect structure and provides user-controlled access to the data and instructions stored in RAMs 532, 533.

A desirable feature of some embodiments is the ability to bypass the processor and communicate directly with the RAM. In effect, the processor is disabled and removed from the circuit. FIG. 6 shows a schematic diagram for one embodiment of the specialized logic block of FIG. 5 having this capability. An enable signal EuP ("enable processor") has either a high or a low value. As illustrated, a low value on the EuP signal couples each RAM block to the general interconnect structure of the FPGA, bypassing the processor. Thus, when the EuP signal is low, the processor is effectively disabled. A high value on the EuP signal couples each RAM block to the processor, and further couples the processor to the general interconnect structure of the FPGA. Thus, when the EuP signal is high, the processor is effectively enabled. A similar embodiment (not shown) uses an active-low enable signal.

In some embodiments, the EuP signal is stored in a configuration memory cell of the FPGA. In other embodiments, the EuP signal is a dynamic signal supplied from elsewhere on the FPGA.

In the embodiment of FIG. 6, processor 531 is a 16-bit version of the MicroBlaze processor from Xilinx, Inc. The input and output signals of processor 531 are shown in Table 1.

TABLE 1
I_EAdr (0:15)   Extended Instruction Bus, Address
I_EData (0:15)  Extended Instruction Bus, Read Data
I_Adr (0:9)     Local Instruction Bus, Address
I_Data (0:15)   Local Instruction Bus, Read Data
GPIO_1 (0:15)   General Purpose Outputs
GPIO_2 (0:15)   General Purpose Inputs
D_EAdr (0:15)   Extended Data Bus, Address
D_EW (0:15)     Extended Data Bus, Write Data
D_ER (0:15)     Extended Data Bus, Read Data
D_Adr (0:9)     Local Data Bus, Address
D_W (0:15)      Local Data Bus, Write Data
D_R (0:15)      Local Data Bus, Read Data
GPIO_3 (0:15)   General Purpose Inputs

Data interface 534 has a 36-bit data input terminal (i.e., a set of 36 terminals) DDI(0:35) from the general interconnect structure of the FPGA. Twenty DDI(16:35) of the 36 signals DDI(0:35) are passed directly to data RAM 532 at all times. The other 16 bits DDI(0:15) of input data are provided to RAM 532 via multiplexer M2 either by the general interconnect structure or by processor 531, depending on the value of the processor enable signal EuP. Data interface 534 also has a 36-bit data output terminal DDO(0:35) to the general interconnect structure of the FPGA. The 36 data output signals DDO(0:35) are provided to the data output terminals via multiplexer M1 by either data RAM 532 or processor 531, depending on the value of the processor enable signal EuP. 16 bits DDO(0:15) of the data output from data RAM 532 are always provided to processor 531. Data interface 534 also has a 14-bit address input terminal DA(0:13) from the general interconnect structure of the FPGA, which provides addressing information for data RAM 532. Four DA(10:13) of the 14 signals DA(0:13) are passed directly to data RAM 532 at all times. 
The other ten bits DA(0:9) of address are provided to data RAM 532 via multiplexer M3 either by the general interconnect structure or by processor 531, depending on the value of the processor enable signal EuP.

Instruction interface 535 has a 36-bit data input terminal IDI(0:35) from the general interconnect structure of the FPGA. The 36 signals IDI(0:35) are passed directly to processor 531 and to instruction RAM 533 at all times. Instruction interface 535 also has a 36-bit data output terminal IDO(0:35) to the general interconnect structure of the FPGA. The 36 data output signals IDO(0:35) are provided to the data output terminal via multiplexer M4 by either instruction RAM 533 or processor 531, depending on the value of the processor enable signal EuP. 16 bits IDO(0:15) of the data output from instruction RAM 533 are always provided to processor 531. Instruction interface 535 also has a 14-bit address input terminal IA(0:13) from the general interconnect structure of the FPGA, which provides addressing information for instruction RAM 533. Four IA(10:13) of the 14 signals IA(0:13) are passed directly to instruction RAM 533 at all times. The other ten bits IA(0:9) of address are provided to instruction RAM 533 via multiplexer M5 either by the general interconnect structure or by processor 531, depending on the value of the processor enable signal EuP.

In another embodiment, processor 531 is a 32-bit version of the MicroBlaze processor from Xilinx, Inc. In one such embodiment, two RAM blocks are included in the specialized block, as shown in FIG. 6. In one such embodiment, the RAM blocks are twice the width of the RAM blocks used with the 16-bit processor. Another embodiment includes four RAM blocks of the same size as those used with the 16-bit processor. The 32-bit MicroBlaze processor and the input and output signals of the processor are described in detail in the "MicroBlaze Processor Reference Guide Embedded Development Kit EDK (v. 3.1.2 EA)", published Nov. 13, 2002 and available from Xilinx, Inc., which is hereby incorporated herein by reference.

FIG. 7 shows a system that includes a PLD including specialized logic blocks offering both processor and RAM capabilities. In this type of system, the specialized logic blocks can be used for parallel computing.

The system of FIG. 7 includes a system bus 783. System bus 783 can be any type of bus, for example, a PCI bus or a 60X bus such as that associated with the IBM PowerPC processor. Attached to the system bus can optionally be one or more devices communicating with the system bus, such as a card slot 784, an additional processor 785, system memory 786, or other peripherals 787. System bus 783 is also coupled to the IOBs 702 of PLD 700. Coupled to the IOBs 702, e.g., via the general interconnect structure 701, are two or more specialized logic blocks 721a-721n having both processor and RAM capabilities. The PLD also includes other logic blocks, some of which can have specialized functions.

Some advanced FPGAs, such as the Virtex-II Pro FPGAs, include several hundred BRAM blocks. If each of these BRAM blocks (or each pair or group of BRAM blocks) is replaced by a specialized block including both RAM and processor functions, highly parallel processing becomes available to the FPGA user. 
Because parallel processing can be much faster than using a single processor, an FPGA equipped in this fashion can perform some functions much faster than presently available FPGAs.

To implement parallel processing using a known FPGA, the design methodology currently requires that the parallel processors be coded into the circuit description, e.g., into the HDL (Hardware Description Language) code describing the circuit. A significant advantage of some embodiments of the invention is that if all BRAM blocks are replaced by specialized blocks including optional processors, compiler code can be developed that automatically takes advantage of these specialized blocks. For example, a design description can be written in "C" code, and a "C" compiler can be provided that automatically implements the code using the parallel processing capabilities of the FPGA.

FIG. 8 illustrates the steps of one method of modifying a PLD according to one embodiment of the invention, e.g., to generate one of the PLDs shown and described above. The PLD includes columns of BRAM blocks and columns of programmable logic blocks programmably interconnected by a general interconnect structure. Each BRAM block includes a RAM and a plurality of terminals coupling the RAM to the general interconnect structure. Steps 801-803 are performed for each BRAM block in a column of BRAM blocks within the PLD. The method of FIG. 8 can be used, for example, to modify the PLD of FIG. 1 to generate a PLD such as that shown in FIG. 3.

In step 801, one of the BRAM blocks is widened to create a specialized logic block. The specialized logic block is wider than the BRAM block, but has the same height. The locations of the terminals leading to and from the block are retained in corresponding locations. For example, terminals along the left and right edges of the block are not moved vertically. Terminals along the top and bottom edges of the block can be spread apart, if desired, to accommodate the broader width of the specialized logic block. In some embodiments, the specialized logic block is less than ten percent wider than the BRAM block.

In step 802, the specialized logic block is modified to add a processor to the block. The processor can be, for example, one of the processors described above.

In step 803, the specialized logic block is modified to add a dedicated interface. The dedicated interface is coupled to the RAM (which is present as a result of being in the BRAM block) and to the processor, and further coupled to the general interconnect structure of the PLD through the terminals of the specialized logic block.

In some embodiments, the dedicated interface includes an enable terminal providing an enable signal, e.g., as in the specialized logic block shown in FIG. 5. When the enable signal is at a first logic level, the dedicated interface couples the RAM to the general interconnect structure while bypassing the processor. When the enable signal is at a second logic level, the dedicated interface couples the RAM to the processor and the processor to the general interconnect structure.

In step 804, if there remains an unmodified BRAM block in the column, another of the BRAM blocks is selected (step 805) and modified following steps 801-803.

FIG. 9 illustrates the steps of another method of modifying a PLD according to one embodiment of the invention, e.g., to generate one of the PLDs shown and described above. 
The PLD includes columns of BRAM blocks and columns of programmable logic blocks programmably interconnected by a general interconnect structure. Each BRAM block includes a RAM and a plurality of terminals coupling the RAM to the general interconnect structure. Steps 901-904 are performed for each group of BRAM blocks in a column of BRAM blocks within the PLD. The method of FIG. 9 can be used, for example, to modify the PLD of FIG. 1 to generate a PLD such as that shown in FIG. 4.

In step 901, a group of adjacent BRAM blocks is selected from a column of BRAM blocks within the PLD. For example, the first N BRAM blocks at one end of the column can be selected, where N is a positive integer. In one embodiment, N is two. The group of adjacent BRAM blocks forms a new specialized logic block.

In step 902, the specialized logic block is widened, while maintaining the height of the block. The locations of the terminals leading to and from the block are retained in corresponding locations. For example, terminals along the left and right edges of the block are not moved vertically, while terminals along the top and bottom edges of the block can be spread apart, if desired, to accommodate the broader width of the specialized logic block.

In step 903, the specialized logic block is modified to add a processor to the block. The processor can be, for example, one of the processors described above.

In step 904, the specialized logic block is modified to add a dedicated interface. The dedicated interface is coupled to the RAMs (which are present as a result of being in the BRAM blocks) and to the processor, and further coupled to the general interconnect structure of the PLD through the terminals of the specialized logic block. In some embodiments, the dedicated interface includes an enable terminal, e.g., as in the specialized logic block shown in FIG. 5.

In step 905, if there remain unmodified groups of BRAM blocks in the column, steps 901-904 are repeated for each group.

FIG. 10 illustrates the steps of a method of designing a new PLD based on an existing PLD architecture according to one embodiment of the invention, e.g., to generate one of the PLDs shown and described above. The existing PLD architecture includes columns of BRAM blocks included in an array comprising rows and columns of programmable logic blocks programmably interconnected by a general interconnect structure. The method of FIG. 10 can be used, for example, to modify the PLD of FIG. 1 to generate a PLD such as those shown in FIGS. 3 and 4.

In step 1001, a column of BRAM blocks is removed from the existing PLD architecture. In step 1002, the space between two columns of programmable logic blocks adjacent to the removed column is widened to create an enlarged area.

In step 1003, which can occur in any order relative to steps 1001 and 1002, a specialized logic block is created. The specialized logic block includes a first RAM, a processor, a plurality of terminals, and a dedicated interface. The dedicated interface is coupled to the first RAM and to the processor, and is further coupled to the general interconnect structure via the terminals of the specialized logic block.

The height of the specialized logic block is the same as that of a first group of removed BRAM blocks. The terminals of the specialized logic block also have corresponding locations to corresponding terminals in the first group of removed BRAM blocks. For example, terminals along the left and right edges of the block are not moved vertically. 
However, terminals along the top and bottom edges of the block can be spread apart, if desired, to accommodate the broader width of the specialized logic block.

In step 1004, a column of the specialized logic blocks is inserted in the enlarged area resulting from step 1002. Because the terminal locations of the new column of specialized logic blocks correspond to those of the removed column of BRAM blocks, changes to other portions of the PLD are minimized. Because the architecture of the general interconnect structure was not changed, changes to the PLD routing software are also minimized or rendered unnecessary.

Those having skill in the relevant arts of the invention will now perceive various modifications and additions that can be made as a result of the disclosure herein. For example, systems, PLDs, FPGAs, BRAM blocks, specialized logic blocks, programmable logic blocks, RAMs, processors, dedicated interfaces, multiplexers, CLEs, CLBs, IOBs, and other components other than those described herein can be used to implement the invention. Active-high signals can be replaced with active-low signals by making straightforward alterations to the circuitry, such as are well known in the art of circuit design.

Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance, the method of interconnection establishes some desired electrical communication between two or more circuit nodes. Such communication can often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art.

Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents.
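As an illustration of the processor-bypass behavior described above for FIG. 6, the following C sketch models how multiplexers M2 and M3 of data interface 534 might select between the general interconnect and processor 531 under the EuP signal. It is a behavioral sketch only; the struct, field names, and function name are assumptions for illustration, not circuitry or RTL from the patent.

/* Hedged behavioral model of the EuP multiplexing described for FIG. 6.
 * Names are illustrative assumptions, not from the patent. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t ddi;      /* DDI(0:35): data in from the general interconnect */
    uint16_t da_lo10;  /* DA(0:9):   address bits muxed by M3              */
    uint16_t proc_dw;  /* D_W(0:15): write data from processor 531         */
    uint16_t proc_da;  /* D_Adr(0:9): data address from processor 531      */
} data_if_inputs_t;

/* Select the 16 low data bits and 10 low address bits that reach data
 * RAM 532, as chosen by multiplexers M2 and M3 under EuP. */
void data_interface_534(const data_if_inputs_t *in, bool eup,
                        uint16_t *ram_din_lo16, uint16_t *ram_addr_lo10) {
    /* M2: the low 16 data bits come from the processor when EuP is high,
     * otherwise straight from the general interconnect (bypass mode). */
    *ram_din_lo16 = eup ? in->proc_dw : (uint16_t)(in->ddi & 0xFFFF);

    /* M3: the low 10 address bits likewise come from the processor or
     * from the general interconnect, depending on EuP. */
    *ram_addr_lo10 = eup ? (uint16_t)(in->proc_da & 0x3FF)
                         : (uint16_t)(in->da_lo10 & 0x3FF);

    /* DDI(16:35) and DA(10:13) pass to RAM 532 unmultiplexed at all
     * times, so they do not appear in this model. */
}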
Methods of forming a portion of an integrated circuit include forming a patterned mask having an opening exposing a surface of a semiconductor material, forming a first doped region at a first level of the semiconductor material through the opening, and isotropically removing a portion of the patterned mask to increase a width of the opening. The methods further include forming a second doped region at a second level of the semiconductor material through the opening after isotropically removing the portion of the patterned mask, wherein the second level is closer to the surface of the semiconductor material than the first level.
1. A method of forming a portion of an integrated circuit, comprising: forming a patterned mask having an opening exposing a surface of a semiconductor material; forming a first doped region at a first level of the semiconductor material through the opening; isotropically removing a portion of the patterned mask to increase a width of the opening; and after isotropically removing the portion of the patterned mask, forming a second doped region at a second level of the semiconductor material through the opening; wherein the second level is closer to the surface of the semiconductor material than the first level. 2. The method of claim 1, wherein forming the patterned mask comprises forming a patterned photoresist material. 3. The method of claim 1, wherein forming the first doped region comprises implanting a first dopant species into the semiconductor material, and wherein forming the second doped region comprises implanting a second dopant species into the semiconductor material. 4. The method of claim 3, wherein implanting the second dopant species into the semiconductor material comprises implanting a dopant species that is the same as the first dopant species. 5. The method of claim 3, wherein implanting the first dopant species into the semiconductor material and implanting the second dopant species into the semiconductor material comprise implanting the first dopant species into the semiconductor material using a first implant energy level, and implanting the second dopant species into the semiconductor material using a second implant energy level that is less than the first implant energy level. 6. The method of claim 5, wherein isotropically removing the portion of the patterned mask to increase the width of the opening comprises isotropically etching the patterned mask. 7. The method of claim 1, wherein forming the first doped region at the first level of the semiconductor material comprises forming the first doped region having a first conductivity type in the semiconductor material having a second conductivity type different than the first conductivity type. 8. The method of claim 7, wherein forming the first doped region having the first conductivity type in the semiconductor material having the second conductivity type different than the first conductivity type comprises forming the first doped region having the first conductivity type in the semiconductor material having the second conductivity type opposite the first conductivity type. 9. The method of claim 8, wherein forming the first doped region having the first conductivity type in the semiconductor material having the second conductivity type opposite the first conductivity type comprises forming the first doped region having n-type conductivity in the semiconductor material having p-type conductivity. 10. The method of claim 1, further comprising: after forming the second doped region at the second level of the semiconductor material through the opening, isotropically removing an additional portion of the patterned mask to increase the width of the opening; and after isotropically removing the additional portion of the patterned mask, forming an additional doped region at an additional level of the semiconductor material through the opening. 11. A method of forming a portion of an integrated circuit, comprising: forming a patterned mask having an opening exposing a surface of a semiconductor material; implanting a first dopant species into the semiconductor material through the opening using a first implant energy level; isotropically etching the patterned mask to increase a width of the opening; and after isotropically etching the patterned mask, implanting a second dopant species into the semiconductor material through the opening using a second implant energy level that is less than the first implant energy level. 12. The method of claim 11, further comprising: after implanting the second dopant species into the semiconductor material, implanting a third dopant species into the semiconductor material through the opening using a third implant energy level that is less than the second implant energy level. 13. The method of claim 12, wherein the patterned mask is not isotropically etched between implanting the second dopant species and implanting the third dopant species. 14. The method of claim 11, further comprising: after implanting the second dopant species into the semiconductor material, isotropically etching the patterned mask to increase the width of the opening; and after isotropically etching the patterned mask, implanting an additional dopant species into the semiconductor material through the opening using an additional implant energy level that is less than the previous implant energy level. 15. The method of claim 14, wherein implanting the additional dopant species into the semiconductor material through the opening using the additional implant energy level less than the previous implant energy level comprises implanting a third dopant species into the semiconductor material through the opening using a third implant energy level that is less than the second implant energy level. 16. The method of claim 11, wherein implanting the first dopant species comprises implanting a dopant species having a conductivity type different from a conductivity type of the semiconductor material. 17. The method of claim 11, wherein implanting the first dopant species and implanting the second dopant species comprises implanting the first dopant species and the second dopant species having a same conductivity type, and wherein the first dopant species and the second dopant species comprise a same dopant species or different dopant species. 18. The method of claim 11, wherein implanting the first dopant species comprises implanting ions selected from the group consisting of arsenic, antimony, phosphorus, and boron. 19. A method of forming a portion of a memory, comprising: forming a patterned mask having an opening exposing a surface of a semiconductor material; forming a first doped region at a first level of the semiconductor material through the opening, wherein the first level extends from a first depth from the surface of the semiconductor material to a second depth from the surface of the semiconductor material, and wherein the second depth is closer to the surface of the semiconductor material than the first depth; isotropically removing a portion of the patterned mask to increase a width of the opening; after isotropically removing the portion of the patterned mask, forming a second doped region at a second level of the semiconductor material through the opening, wherein the second level extends from the second depth from the surface of the semiconductor material to a third depth from the surface of the semiconductor material, and wherein the third depth is closer to the surface of the semiconductor material than the second depth; isotropically removing a second portion of the patterned mask to increase the width of the opening; after isotropically removing the second portion of the patterned mask, forming a third doped region at a third level of the semiconductor material through the opening, wherein the third level extends from the third depth from the surface of the semiconductor material to a fourth depth from the surface of the semiconductor material, and wherein the fourth depth is closer to the surface of the semiconductor material than the third depth; after forming the third doped region, removing the patterned mask; and forming a transistor on the third doped region, wherein the transistor is coupled between an access line of a block of memory cells of a memory cell array of the memory and a global access line of the memory. 20. The method of claim 19, wherein forming the second doped region comprises forming the second doped region having a width greater than a width of the first doped region.
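The claimed sequence can be illustrated with a short C sketch: each pass implants through the current opening, then isotropically etches the mask so that the next, lower-energy implant forms a shallower and wider level. The numeric values and names below are arbitrary assumptions for illustration, not process parameters from the disclosure.

/* Illustrative walk-through of the multi-level implant sequence:
 * implant, isotropically etch the mask (widening the opening on both
 * sidewalls), then implant again at a lower energy (shallower level),
 * yielding a well that steps wider toward the surface. */
#include <stdio.h>

int main(void) {
    double opening_width_nm = 100.0;  /* initial mask opening (assumed) */
    double energy_kev       = 900.0;  /* first, deepest implant (assumed) */
    const double etch_per_side_nm = 40.0;   /* isotropic etch per pass  */
    const double energy_step_kev  = 300.0;  /* each level is shallower  */

    for (int level = 1; level <= 3; level++) {
        printf("level %d: implant at %.0f keV through %.0f nm opening\n",
               level, energy_kev, opening_width_nm);
        /* isotropically remove a portion of the mask before the next,
         * shallower implant: both sidewalls recede */
        opening_width_nm += 2.0 * etch_per_side_nm;
        energy_kev       -= energy_step_kev;
    }
    return 0;
}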
Method of Forming an Integrated Circuit Well Structure

RELATED APPLICATION
The present application claims the benefit of a U.S. Provisional Patent Application, which is incorporated herein by reference.

TECHNICAL FIELD
The present invention relates generally to integrated circuits, and in particular, in one or more embodiments, the present invention relates to methods of forming integrated circuit well structures and memories containing such well structures.

BACKGROUND
Integrated circuit devices span a wide range of electronic devices. One particular type includes memory devices, often referred to simply as memory. Memory devices are typically provided as internal semiconductor integrated circuit devices in computers or other electronic devices. There are many different types of memory, including random access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), and flash memory.

Flash memory has developed into a popular source of non-volatile memory for a wide range of electronic applications. Flash memory typically uses a one-transistor memory cell that allows for high memory density, high reliability, and low power consumption. Changes in threshold voltage (Vt) of the memory cells, through programming (which is often referred to as writing) of charge storage structures (e.g., floating gates or charge traps) or other physical phenomena (e.g., phase change or polarization), determine the data state (e.g., data value) of each memory cell. Common uses for flash memory and other non-volatile memory include personal computers, personal digital assistants (PDAs), digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, mobile telephones, and removable memory modules, and the uses for non-volatile memory continue to expand.

NAND flash memory is a common type of flash memory device, so called for the logical form in which its basic memory cell configuration is arranged. Typically, the array of memory cells of a NAND flash memory is arranged such that the control gates of the memory cells of each row of the array are connected together to form an access line, such as a word line. Columns of the array include strings of memory cells (commonly referred to as NAND strings) connected together in series between a pair of select gates, e.g., a source select transistor and a drain select transistor. Each source select transistor can be connected to a source, while each drain select transistor can be connected to a data line, such as a column bit line. Variations using more than one select gate between a string of memory cells and the source, and/or between a string of memory cells and the data line, are known.

In order for memory manufacturers to remain competitive, memory designers are continually striving to increase the density of memory devices. Increasing the density of memory devices typically involves reducing the spacing between circuit elements. 
However, the reduced spacing of circuit elements can hinder effective isolation of adjacent circuit elements.

SUMMARY
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a simplified block diagram of a memory in communication with a processor as part of an electronic system, in accordance with an embodiment.
FIGS. 2A through 2B are schematic diagrams of portions of a memory cell array as could be used in a memory of the type described with reference to FIG. 1.
FIG. 3 depicts a related integrated circuit structure.
FIGS. 4A through 4H depict integrated circuit structures during various stages of fabrication, in accordance with an embodiment.
FIG. 5A is a schematic diagram of a portion of a memory cell array as could be used in a memory device of the type described with reference to FIG. 1.
FIG. 5B is a cross-sectional view of the block select transistor of FIG. 5A formed on a portion of the integrated circuit structure of FIGS. 4F through 4H.
FIG. 6 is a flow chart of a method of forming a portion of an integrated circuit device, in accordance with an embodiment.
FIG. 7 is a flow chart of a method of forming a portion of an integrated circuit device, in accordance with an embodiment.
FIG. 8 is a flow chart of a method of forming a portion of an integrated circuit device, in accordance with an embodiment.

DETAILED DESCRIPTION
In the following detailed description, reference is made to the accompanying drawings that form a part hereof. In the drawings, like reference numerals describe substantially similar components throughout the several views. Other embodiments may be utilized, and structural, logical, and electrical changes may be made, without departing from the scope of the invention. Therefore, the following detailed description should not be taken in a limiting sense.

The term "semiconductor" as used herein may refer to, for example, a layer of material, a wafer, or a substrate, and includes any base semiconductor structure. "Semiconductor" should be understood to include silicon-on-sapphire (SOS) technology, silicon-on-insulator (SOI) technology, thin film transistor (TFT) technology, doped and undoped semiconductors, and epitaxial layers of silicon supported by a base semiconductor structure, as well as other semiconductor structures well known to those skilled in the art. Moreover, when referring to a semiconductor in the following description, previous process steps may have been utilized to form regions/junctions in the base semiconductor structure, and the term semiconductor may include underlying layers containing such regions/junctions.

The term "conductive" as used herein, as well as its various related forms (e.g., conduct, conductively, conducting, conduction, conductivity, etc.), refers to electrically conductive unless otherwise apparent from the context. Similarly, the term "connecting" as used herein, as well as its various related forms (e.g., connect, connected, connection, etc.), refers to electrically connecting unless otherwise apparent from the context.

FIG. 1 is a simplified block diagram of a first apparatus (e.g., an integrated circuit device), in the form of a memory (e.g., memory device) 100, in communication with a second apparatus, in the form of a processor 130, as part of a third apparatus, in the form of an electronic system, in accordance with an embodiment. Some examples of electronic systems include personal computers, personal digital assistants (PDAs), digital cameras, digital media players, digital recorders, games, appliances, vehicles, wireless devices, cellular telephones, and the like. 
Processor 130 (e.g., a controller external to memory device 100) can be a memory controller or other external host device.

Memory device 100 includes a memory cell array 104 that is logically arranged in rows and columns. The memory cells of a logical row are typically connected to the same access line (commonly referred to as a word line), while the memory cells of a logical column are typically selectively connected to the same data line (commonly referred to as a bit line). A single access line can be associated with more than one logical row of memory cells, and a single data line can be associated with more than one logical column. At least a portion of the memory cells of memory cell array 104 (not shown in FIG. 1) can be programmed to one of at least two data states.

Row decoding circuitry 108 and column decoding circuitry 110 are provided to decode address signals. Address signals are received and decoded to access memory cell array 104. Memory device 100 also includes input/output (I/O) control circuitry 112 to manage the input of commands, addresses, and data to memory device 100, as well as the output of data and status information from memory device 100. An address register 114 is in communication with I/O control circuitry 112 and with row decoding circuitry 108 and column decoding circuitry 110 to latch the address signals prior to decoding. A command register 124 is in communication with I/O control circuitry 112 and control logic 116 to latch incoming commands.

A controller (e.g., control logic 116 internal to memory device 100) controls access to memory cell array 104 in response to the commands and generates status information for the external processor 130, i.e., control logic 116 is configured to perform access operations (e.g., read operations, program operations, and/or erase operations) and other operations in accordance with the embodiments described herein. Control logic 116 is in communication with row decoding circuitry 108 and column decoding circuitry 110 to control row decoding circuitry 108 and column decoding circuitry 110 in response to the addresses.

Control logic 116 may also be in communication with a cache register 118. Cache register 118 latches incoming or outgoing data as directed by control logic 116 to temporarily store data while memory cell array 104 is busy writing or reading, respectively, other data. During a programming operation (e.g., a write operation), data is transferred from cache register 118 to data register 120 for transfer to memory cell array 104; then new data from I/O control circuitry 112 is latched in cache register 118. During a read operation, data is transferred from cache register 118 to I/O control circuitry 112 for output to the external processor 130; then new data is passed from data register 120 to cache register 118. A status register 122 is in communication with I/O control circuitry 112 and control logic 116 to latch status information for output to processor 130.

Memory device 100 receives control signals from processor 130 at control logic 116 via a control link 132. The control signals may include a chip enable CE#, a command latch enable CLE, an address latch enable ALE, a write enable WE#, a read enable RE#, and a write protect WP#. Depending on the nature of memory device 100, additional or alternative control signals (not shown) may be further received via control link 132. 
The memory device 100 receives command signals (which represent commands), address signals (which represent addresses), and data signals (which represent data) from the processor 130 over a multiplexed input/output (I/O) bus 134, and outputs data to the processor 130 over the I/O bus 134.

For example, commands can be received at the I/O control circuitry 112 over input/output (I/O) pins [7:0] of the I/O bus 134 and can then be written into the command register 124. Addresses can be received at the I/O control circuitry 112 over input/output (I/O) pins [7:0] of the I/O bus 134 and can then be written into the address register 114. Data can be received at the I/O control circuitry 112 over input/output (I/O) pins [7:0] for an 8-bit device, or over input/output (I/O) pins [15:0] for a 16-bit device, and can be written into the cache register 118. The data can subsequently be written into the data register 120 for programming the memory cell array 104. For another embodiment, the cache register 118 can be omitted and the data can be written directly into the data register 120. Data can also be output over the input/output (I/O) pins [7:0] for an 8-bit device, or over the input/output (I/O) pins [15:0] for a 16-bit device. The I/O bus 134 can further include complementary data strobes DQS and DQSN that provide a synchronous reference for data input and output. Although reference may be made to an I/O pin, it can include any conductive node that provides an electrical connection between an external device (e.g., the processor 130) and the memory device 100, such as a conventional conductive pad or conductive bump.

Those skilled in the art will appreciate that additional circuitry and signals can be provided, and that the memory device 100 of FIG. 1 has been simplified. It should be recognized that the functionality of the various block components described with reference to FIG. 1 need not be segregated to distinct components or component portions of an integrated circuit device. For example, a single component or component portion of an integrated circuit device could be adapted to perform the functionality of more than one block component of FIG. 1. Alternatively, one or more components or component portions of an integrated circuit device could be combined to perform the functionality of a single block component of FIG. 1.

Additionally, while specific I/O pins are described in accordance with common conventions for receipt and output of the various signals, it should be noted that other combinations or numbers of I/O pins can be used in the various embodiments.

FIG. 2A is a schematic diagram of a portion of a memory cell array 200A as could be used in a memory of the type described with reference to FIG. 1 (e.g., as a portion of memory cell array 104). Memory array 200A includes access lines, such as word lines 2020 through 202N, and data lines, such as bit lines 2040 through 204M. The word lines 202 can be connected to global access lines (e.g., global word lines), not shown in FIG. 2A, in a many-to-one relationship. For some embodiments, memory array 200A can be formed over a semiconductor, which can be conductively doped to have, for example, p-type conductivity (e.g., to form a p-well) or n-type conductivity (e.g., to form an n-well).

Memory array 200A can be arranged in rows (each corresponding to a word line 202) and columns (each corresponding to a bit line 204). Each column can include a string of memory cells (e.g., non-volatile memory cells) connected in series, such as one of NAND strings 2060 through 206M. 
Each NAND string 206 can be connected (e.g., selectively connected) to a common source (SRC) 216 and can include memory cells 2080 through 208N. The memory cells 208 can represent non-volatile memory cells for storage of data. The memory cells 208 of each NAND string 206 can be connected in series between a select gate 210 (e.g., a field-effect transistor), such as one of the select gates 2100 through 210M (e.g., which can be source select transistors, commonly referred to as select gate source), and a select gate 212 (e.g., a field-effect transistor), such as one of the select gates 2120 through 212M (e.g., which can be drain select transistors, commonly referred to as select gate drain). Select gates 2100 through 210M can be commonly connected to a select line 214, such as a source select line (SGS), and select gates 2120 through 212M can be commonly connected to a select line 215, such as a drain select line (SGD). Although depicted as traditional field-effect transistors, the select gates 210 and 212 can utilize a structure similar to (e.g., the same as) the memory cells 208. The select gates 210 and 212 can represent a plurality of select gates connected in series, with each select gate in series configured to receive the same or independent control signals.

The source of each select gate 210 can be connected to the common source 216. The drain of each select gate 210 can be connected to a memory cell 2080 of the corresponding NAND string 206. For example, the drain of select gate 2100 can be connected to memory cell 2080 of the corresponding NAND string 2060. Accordingly, each select gate 210 can be configured to selectively connect the corresponding NAND string 206 to the common source 216. The control gate of each select gate 210 can be connected to the select line 214.

The drain of each select gate 212 can be connected to the bit line 204 for the corresponding NAND string 206. For example, the drain of select gate 2120 can be connected to the bit line 2040 for the corresponding NAND string 2060. The source of each select gate 212 can be connected to a memory cell 208N of the corresponding NAND string 206. For example, the source of select gate 2120 can be connected to memory cell 208N of the corresponding NAND string 2060. Accordingly, each select gate 212 can be configured to selectively connect the corresponding NAND string 206 to the corresponding bit line 204. The control gate of each select gate 212 can be connected to the select line 215.

The memory array of FIG. 2A can be a three-dimensional memory array, e.g., where the NAND strings 206 can extend substantially perpendicular to a plane containing the common source 216 and to a plane containing the bit lines 204 (which can be substantially parallel to the plane containing the common source 216).

A typical construction of a memory cell 208 includes a data storage structure 234 (e.g., a floating gate, charge trap, etc.) that can determine the data state of the memory cell (e.g., through changes in threshold voltage), and a control gate 236, as shown in FIG. 2A. The data storage structure 234 can include both conductive and dielectric structures, while the control gate 236 is generally formed of one or more conductive materials. In some cases, a memory cell 208 can further have a defined source/drain (e.g., source) 230 and a defined source/drain (e.g., drain) 232. 
A memory cell 208 has its control gate 236 connected to (and, in some cases, forming) a word line 202.

A column of the memory cells 208 can be a NAND string 206, or a plurality of NAND strings 206 selectively connected to a given bit line 204. A row of the memory cells 208 can be memory cells 208 commonly connected to a given word line 202. A row of memory cells 208 can, but need not, include all memory cells 208 commonly connected to a given word line 202. Rows of memory cells 208 can often be divided into one or more groups of physical pages of memory cells 208, and a physical page of memory cells 208 often includes every other memory cell 208 commonly connected to a given word line 202. For example, the memory cells 208 commonly connected to word line 202N and selectively connected to even bit lines 204 (e.g., bit lines 2040, 2042, 2044, etc.) can be one physical page of memory cells 208 (e.g., even memory cells), while the memory cells 208 commonly connected to word line 202N and selectively connected to odd bit lines 204 (e.g., bit lines 2041, 2043, 2045, etc.) can be another physical page of memory cells 208 (e.g., odd memory cells). Although bit lines 2043 through 2045 are not explicitly depicted in FIG. 2A, it is apparent from the figure that the bit lines 204 of the memory cell array 200A can be numbered consecutively from bit line 2040 to bit line 204M. Other groupings of the memory cells 208 commonly connected to a given word line 202 can also define a physical page of memory cells 208. For certain memory devices, all memory cells commonly connected to a given word line can be deemed a physical page of memory cells. The portion of a physical page of memory cells (which, in some embodiments, could still be an entire row) that is read during a single read operation or programmed during a single program operation (e.g., an upper or lower page of memory cells) can be deemed a logical page of memory cells. A block of memory cells can include those memory cells that are configured to be erased together, such as all memory cells connected to word lines 2020 through 202N (e.g., all NAND strings 206 sharing common word lines 202). Unless expressly distinguished, a reference to a page of memory cells herein refers to the memory cells of a logical page of memory cells. (A compact sketch of this logical organization follows the description of FIG. 2B below.)

FIG. 2B is another schematic diagram of a portion of a memory cell array 200B as could be used in a memory of the type described with reference to FIG. 1 (e.g., as a portion of memory cell array 104). Like-numbered elements in FIG. 2B correspond to the description as provided with respect to FIG. 2A. FIG. 2B provides additional detail of one example of a three-dimensional NAND memory array structure. The three-dimensional NAND memory array 200B can incorporate vertical structures that can include semiconductor pillars, where a portion of a pillar can act as a channel region of the memory cells of a NAND string 206. The NAND strings 206 can each be selectively connected to a bit line 2040 through 204M by a select transistor 212 (e.g., which can be a drain select transistor, commonly referred to as select gate drain) and to a common source 216 by a select transistor 210 (e.g., which can be a source select transistor, commonly referred to as select gate source). Multiple NAND strings 206 can be selectively connected to the same bit line 204. 
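As a purely illustrative aside, the even/odd page grouping and the block grouping described above can be expressed compactly in code. The sketch below is not part of the described device; the type names and the assumption that a logical page coincides with a physical page are invented for illustration only.

#include <cstdint>

// Illustrative only: a cell is addressed by the word line (row) and the
// bit line (column) it is connected to, as in FIG. 2A.
struct CellAddress {
    uint32_t word_line; // index 0 through N of the word lines 202
    uint32_t bit_line;  // index 0 through M of the bit lines 204
};

// Even/odd physical paging: cells on a given word line connected to even
// bit lines form one physical page; those on odd bit lines form another.
bool in_even_physical_page(const CellAddress& c) {
    return c.bit_line % 2 == 0;
}

// For this sketch, a logical page (the portion read or programmed in a
// single operation) is identified by its word line and its even/odd
// parity; that is, we assume logical page == physical page here.
struct PageId {
    uint32_t word_line;
    bool even;
};

PageId page_of(const CellAddress& c) {
    return PageId{c.word_line, c.bit_line % 2 == 0};
}

A block, by contrast, is the erase unit: all cells of the NAND strings sharing the common word lines, so block membership would be carried as a separate block index alongside the word-line/bit-line address.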
A subset of the NAND strings 206 can be connected to their respective bit lines 204 by biasing the select lines 2150 through 215K to selectively activate particular select transistors 212, each located between a NAND string 206 and a bit line 204. The select transistors 210 can be activated by biasing the select line 214. Each word line 202 can be connected to multiple rows of memory cells of the memory array 200B. Rows of memory cells that are commonly connected to each other by a particular word line 202 can collectively be referred to as a tier.

Various circuit components can be formed over well structures of varying conductivity types and levels. FIG. 3 depicts a related integrated circuit structure demonstrating two adjacent well structures and their limitations. In general, a well structure can be formed of one or more regions of semiconductor material (e.g., doped regions) 346 in a semiconductor material 340. One of the semiconductor material regions 346 (e.g., 3460) is typically formed over, and typically in contact with, a semiconductor material region 338. The semiconductor material region 338 can have a conductivity type, such as n-type conductivity. The semiconductor material regions 346 (e.g., each of the semiconductor material regions 3460 through 3462) can have the same conductivity type as the semiconductor material region 338, e.g., n-type conductivity. The semiconductor material 340 can have a conductivity type different than (e.g., opposite) the conductivity type of the semiconductor material region 338, e.g., p-type conductivity. In combination, the semiconductor material regions 3460 through 3462 and the semiconductor material region 338 form a contiguous structure commonly referred to as a basin. The enclosed portion of the semiconductor material 340 (e.g., within the basin) between two stacks of semiconductor material regions 3460 through 3462 in contact with the semiconductor material region 338 can represent a well having a conductivity type different from that of the well structures of the semiconductor material regions 346.

Each of the semiconductor material regions 346 can be formed by implanting a dopant species into the semiconductor material 340. As is well understood in the art, such implantation generally involves accelerating ions toward the surface of the semiconductor material 340. To produce n-type conductivity, the dopant species can include ions of arsenic (As), antimony (Sb), phosphorus (P), or another n-type impurity. To produce p-type conductivity, the dopant species can include ions of boron (B) or another p-type impurity.

Each of the semiconductor material regions 346 can be formed at a different implant energy level. Higher implant energy levels generally lead to deeper doped regions for a given dopant species. For example, the semiconductor material region 3460 can be formed with a first implant energy level, the semiconductor material region 3461 can be formed with a second implant energy level less than the first implant energy level, and the semiconductor material region 3462 can be formed with a third implant energy level less than the second implant energy level. 
The semiconductor material region 338 can be formed in a similar manner, e.g., by implanting a dopant species into the semiconductor material 340 at an implant energy level higher than that of any of the semiconductor material regions 3460 through 3462.

While higher-energy implantation generally forms a doped region of a given dopant species at a deeper level (e.g., farther from the surface of the semiconductor material 340), it can also result in increased migration or dispersion of the dopant. Laterally, the region 3460 can be wider than the region 3461, and the region 3461 can be wider than the region 3462. As the spacing between adjacent well structures narrows, the isolation characteristics can be reduced, which can result in punch-through or breakdown between adjacent well structures. Various embodiments can mitigate this widening of the doped regions of multi-level well structures. Various embodiments can seek to form well structures having vertical or tapered profiles.

FIGS. 4A through 4H depict integrated circuit structures during various stages of fabrication, in accordance with an embodiment. FIG. 4A depicts a semiconductor material 440 over a semiconductor material region 438. The semiconductor materials 438 and 440 can each comprise silicon (e.g., monocrystalline silicon) or another semiconductor material. The semiconductor material 440 can have a conductivity type different than (e.g., opposite) the conductivity type of the semiconductor material 438. For example, the semiconductor material 438 can have a first conductivity type (e.g., n-type conductivity) while the semiconductor material 440 can have a second conductivity type (e.g., p-type conductivity). The semiconductor material region 438 can be formed by implanting a dopant species (e.g., one or more dopant species) into the semiconductor material 440. Alternatively, the semiconductor material 440 could be formed over, and subsequent to the formation of, the semiconductor material 438, e.g., by epitaxial growth, chemical vapor deposition, physical vapor deposition, or the like.

A patterned mask 4420 can be formed over the semiconductor material 440. The patterned mask 4420 can have an opening 4540 exposing a portion of the semiconductor material 440 and having a width 4480. The patterned mask 4420 can further have a thickness 4500. As an example, the thickness 4500 can be 3 to 4 μm, for example, 3.3 μm. The patterned mask 4420 can represent a patterned photoresist material, or any other material configured to impede (e.g., block) implantation of a dopant species.

Photolithographic processes are commonly used to define desired patterns in the fabrication of integrated circuits. In a photolithographic process, a photoresist layer can be formed on the surface of the in-process device. The photoresist layer can contain a photo-polymer whose ease of removal changes upon exposure to light or other electromagnetic radiation. To define the pattern, the photoresist layer is selectively exposed to radiation and then developed to expose portions of the underlying layer. In a positive resist system, the portions of the photoresist layer exposed to the radiation are photosolubilized, and the photolithographic mask is designed to block the radiation from those portions of the photoresist layer that are to remain after development. In a negative resist system, the portions of the photoresist layer exposed to the radiation are photopolymerized, and the photolithographic mask is designed to block the radiation from those portions of the photoresist layer that are to be removed by development.
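Before FIGS. 4B through 4F walk through these stages individually, the overall implant-then-etch loop can be summarized with a short sketch. This is a back-of-the-envelope illustration, not a process recipe: the mask thicknesses are the example values given above for thicknesses 4500 through 4502, the implant energies are the example values given later with reference to FIG. 7, and both the initial opening width and the assumption that the opening widens by roughly twice the removed mask thickness (both sidewalls receding under isotropic removal) are invented for illustration.

#include <cstdio>

// Sketch of the loop of FIGS. 4B-4F: implant through the current opening,
// then isotropically thin the mask (which also widens the opening), then
// implant the next, shallower region at a lower energy.
int main() {
    const double mask_um[] = {3.3, 2.0, 0.8};          // thicknesses 4500-4502
    const double energy_kev[] = {960.0, 320.0, 150.0}; // FIG. 7 example values
    double opening_um = 1.0;                           // assumed initial width

    for (int level = 0; level < 3; ++level) {
        std::printf("level %d: implant at %.0f KeV through a %.2f um opening "
                    "(mask %.2f um thick)\n",
                    level, energy_kev[level], opening_um, mask_um[level]);
        if (level < 2) {
            // Isotropic removal thins the mask and recedes both sidewalls of
            // the opening, so the opening widens by ~2x the thickness removed.
            const double removed_um = mask_um[level] - mask_um[level + 1];
            opening_um += 2.0 * removed_um;
        }
    }
    return 0;
}

Run as written, the sketch shows each successive, shallower implant being performed through a progressively wider opening, which is the combination the description uses to shape the composite profiles of FIGS. 4G and 4H.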
In FIG. 4B, a dopant species can be accelerated (e.g., implanted) into the semiconductor material 440 through the opening 4540. For example, an ion beam 4440 can be directed at the surface of the semiconductor material 440 to form semiconductor material region 4460, which can be in contact with the semiconductor material region 438. The semiconductor material region 4460 can have the first conductivity type. While the semiconductor material region 4460 is depicted with a rectangular profile, those skilled in the art will recognize that the shape of the profile can generally be more amorphous in nature. The semiconductor material region 4460 can be formed at a first level within the semiconductor material 440, nominally extending from a first depth 4520 from the surface of the semiconductor material 440 (e.g., in contact with the semiconductor material region 438) to a second depth 4521 from the surface of the semiconductor material 440 (e.g., to at least the second depth 4521).

In FIG. 4C, the patterned mask 4420 of FIGS. 4A-4B is subjected to an isotropic removal process (e.g., isotropic wet etch, isotropic dry plasma etch, dry-strip plasma clean, etc.) to form patterned mask 4421. The patterned mask 4421 can have an opening 4541 exposing a portion of the semiconductor material 440 and having a width 4481. The patterned mask 4421 can further have a thickness 4501. An isotropic removal process generally removes material (e.g., uniformly) in all directions of contact, e.g., reducing the thickness of the exposed surfaces of the mask while widening the opening. As such, the width 4481 can be larger (e.g., wider) than the width 4480, while the thickness 4501 can be smaller (e.g., thinner) than the thickness 4500. As an example, the thickness 4501 can be 1.5 to 2.5 μm, for example, 2 μm.

In FIG. 4D, a dopant species can be accelerated (e.g., implanted) into the semiconductor material 440 through the opening 4541. For example, an ion beam 4441 can be directed at the surface of the semiconductor material 440 to form semiconductor material region 4461, which can be in contact with the semiconductor material region 4460. The semiconductor material region 4461 can have the first conductivity type. While the semiconductor material region 4461 is depicted with a rectangular profile, those skilled in the art will recognize that the shape of the profile can generally be more amorphous in nature. The semiconductor material region 4461 can be formed at a second level within the semiconductor material 440, nominally extending from the second depth 4521 from the surface of the semiconductor material 440 (e.g., from at least the second depth 4521) to a third depth 4522 from the surface of the semiconductor material 440 (e.g., to at least the third depth 4522).

The dopant species used to form the semiconductor material region 4461 can be the same as, or different than, the dopant species used to form the semiconductor material region 4460, while having the same conductivity type. For example, the dopant species used to form the semiconductor material region 4460 and the dopant species used to form the semiconductor material region 4461 could both be phosphorus to form regions of n-type conductivity. Alternatively, the dopant species used to form the semiconductor material region 4460 could be phosphorus to form a region of n-type conductivity, while the dopant species used to form the semiconductor material region 4461 could be arsenic to also form a region of n-type conductivity.

In FIG. 4E, the patterned mask 4421 of FIGS. 
4C through 4D is subjected to an isotropic removal process (e.g., isotropic wet etch, isotropic dry plasma etch, dry-strip plasma clean, etc.) to form patterned mask 4422. The patterned mask 4422 can have an opening 4542 exposing a portion of the semiconductor material 440 and having a width 4482. The patterned mask 4422 can further have a thickness 4502. The width 4482 can be larger (e.g., wider) than the width 4481, while the thickness 4502 can be smaller (e.g., thinner) than the thickness 4501. As an example, the thickness 4502 can be 0.5 to 1.0 μm, for example, 0.8 μm.

In FIG. 4F, a dopant species can be accelerated (e.g., implanted) into the semiconductor material 440 through the opening 4542. For example, an ion beam 4442 can be directed at the surface of the semiconductor material 440 to form semiconductor material region 4462, which can be in contact with the semiconductor material region 4461. The semiconductor material region 4462 can have the first conductivity type. While the semiconductor material region 4462 is depicted with a rectangular profile, those skilled in the art will recognize that the shape of the profile can generally be more amorphous in nature. The semiconductor material region 4462 can be formed at a third level within the semiconductor material 440, nominally extending from the third depth 4522 from the surface of the semiconductor material 440 (e.g., from at least the third depth 4522) to a fourth depth 4523 from the surface of the semiconductor material 440 (which can coincide with the surface of the semiconductor material 440). The dopant species used to form the semiconductor material region 4462 can be the same as, or different than, the dopant species used to form the semiconductor material region 4461, while having the same conductivity type.

Although FIGS. 4A through 4F depict a single stacked well structure of semiconductor material regions 446, such well structures would typically be used to form a basin, e.g., enclosing a portion (e.g., a well) of the semiconductor material 440 having the second conductivity type within a basin of material having the first conductivity type. FIGS. 4G and 4H each depict a well 456 in a portion of the semiconductor material 440 that is isolated from adjacent portions of the semiconductor material 440 by the semiconductor material regions 438 and 446 having the first conductivity type. A substantially vertical profile of the semiconductor material regions 446, such as depicted in FIG. 4G, where the resulting widths of the semiconductor material regions at each level are similar (e.g., the same), can be created using the isotropic removal of the patterned mask between forming regions of the semiconductor material 446 at adjacent levels. Additionally, since the risk of punch-through can be more severe at lower levels of the semiconductor material regions 446, a tapered profile of the semiconductor material regions 446 can be created, such as depicted in FIG. 4H; for example, reducing the resulting width of the semiconductor material regions 446 at lower levels increases the spacing between adjacent semiconductor material regions 446 at those lower levels without affecting the spacing between adjacent semiconductor material regions near (e.g., at) the surface of the semiconductor material 440. Although each semiconductor material region 446 formed at one level (e.g., for one corresponding range of depths from the surface of the semiconductor material 440) is depicted in FIG. 
4H as being wider than each semiconductor material region 446 formed at a lower level (e.g., for a different, deeper corresponding range of depths from the surface of the semiconductor material 440), other options could be used. For example, the semiconductor material regions 4461 and 4462 could have similar widths, such as shown and described with reference to FIG. 4G, while the semiconductor material region 4460 could have a width less than the width of the semiconductor material region 4461, such as shown and described with reference to FIG. 4H.

By characterizing the implantation of desired dopant species at different levels of the semiconductor material 440, whether experimentally, empirically, or through simulation, the desired width of the opening 454 can be determined for each desired level to produce a desired profile. Similarly, by characterizing the isotropic removal of the patterned mask 442, whether experimentally, empirically, or through simulation, a desired initial thickness of the patterned mask 442 can be determined that will permit each of the subsequent desired widths of the opening 454 while maintaining a thickness sufficient to impede implantation of the dopant species where the dopant species is not desired. Although three levels of semiconductor material regions 446 are shown and described with reference to FIGS. 4A through 4H, fewer or more levels of semiconductor material regions 446 could be used in accordance with embodiments.

Different types of circuitry can be formed in the well 456 than in the semiconductor material region 446 (e.g., semiconductor material region 4462). For example, a p-type field-effect transistor (pFET) could be formed in the adjacent semiconductor material region 4462 (e.g., as part of circuitry to select different blocks of memory cells of a memory cell array for access), while an n-type field-effect transistor (nFET) could be formed in the well 456. FIGS. 5A and 5B provide an example of the use of the semiconductor material regions 446 in a memory.

As noted with reference to FIG. 2A, local access lines (e.g., word lines 202) can be connected to global access lines in a many-to-one relationship. FIG. 5A is a schematic diagram of a portion of a memory cell array as could be used in a memory device of the type described with reference to FIG. 1, depicting this many-to-one relationship between the local access lines (e.g., word lines 202) and the global access lines (e.g., global word lines 502).

As depicted in FIG. 5A, a plurality of blocks of memory cells 562 can have their local access lines (e.g., word lines 202) commonly selectively connected to a plurality of global access lines (e.g., global word lines 502). A block of memory cells 562 can include a plurality of NAND strings 206 commonly coupled to a particular set of word lines 202. For example, the NAND strings 2060 through 206M of FIG. 2A, or some portion thereof, could represent a block of memory cells 562. Although FIG. 5A depicts only blocks of memory cells 5620 and 5621 (Block 0 and Block 1), additional blocks of memory cells 562 could have their word lines 202 commonly connected to the global word lines 502 in a like manner. Similarly, although FIG. 5A depicts only four word lines 202, blocks of memory cells 562 could include fewer or more word lines 202. When the structure of FIG. 5A is applied to the array structure of FIGS. 
2A through 2B, it will be apparent that there will be N + 1 global word lines 502, i.e., GWLs 5020 through 502N.

To facilitate memory access operations to particular blocks of memory cells 562 commonly coupled to a given set of global word lines 502, each block of memory cells 562 can have a corresponding set of block select transistors 558 in a one-to-one relationship with its word lines 202. The block select transistors 558 of the set for a given block of memory cells 562 can have their control gates commonly connected to a corresponding block select line 560. For example, for the block of memory cells 5620, word line 20200 can be selectively connected to global word line 5020 by block select transistor 55800, word line 20201 can be selectively connected to global word line 5021 by block select transistor 55801, word line 20202 can be selectively connected to global word line 5022 by block select transistor 55802, and word line 20203 can be selectively connected to global word line 5023 by block select transistor 55803, with the block select transistors 55800 through 55803 responsive to control signals (e.g., a common control signal) received on the block select line 5600.

The block select transistors can be high-voltage devices. Such switching devices may require increased isolation. FIG. 5B is a cross-sectional view of a block select transistor 558 having a control gate 566 and source/drain regions 564, with the control gate 566 connected to a block select line 560. The block select transistor 558 can be formed, for example, in a semiconductor material region 446 (e.g., the semiconductor material region 4462 of FIG. 4F) following removal of the patterned mask 442. For a high-voltage pFET, the semiconductor material region 446 can have an n-type conductivity level selected to provide a high breakdown voltage, e.g., greater than about 30V.

FIG. 6 is a flow chart of a method of forming a portion of an integrated circuit device, in accordance with an embodiment. At 671, a patterned mask can be formed having an opening exposing a semiconductor material (e.g., a portion of a surface of the semiconductor material). For example, the patterned mask can be formed on (e.g., over) the surface of the semiconductor material. At 673, a first doped region can be formed at a first level of the semiconductor material through the opening. At 675, a portion of the patterned mask can be isotropically removed to increase a width of the opening. And at 677, a second doped region can then be formed at a second level of the semiconductor material through the opening.

For some embodiments, additional doped regions can be formed at additional levels of the semiconductor material. Accordingly, the process can proceed to 679, where an additional (e.g., second) portion of the patterned mask can be isotropically removed to increase (e.g., further increase) the width of the opening. Subsequently, at 681, an additional (e.g., third) doped region can be formed at an additional (e.g., third) level of the semiconductor material through the opening. This process can be repeated for one or more additional doped regions of semiconductor material.

FIG. 7 is a flow chart of a method of forming a portion of an integrated circuit device, in accordance with an embodiment. At 781, a patterned mask can be formed having an opening exposing a semiconductor material (e.g., a portion of a surface of the semiconductor material). For example, the patterned mask can be formed on (e.g., over) the surface of the semiconductor material. 
At 783, a first dopant species can be implanted into the semiconductor material through the opening using a first implant energy level. For example, the first dopant species could be phosphorus, and the first implant energy level could be approximately 1,000 KeV, for example, 960 KeV.

At 785, the patterned mask can be isotropically etched to increase a width of the opening. And at 787, a second dopant species can be implanted into the semiconductor material through the opening using a second implant energy level less than the first implant energy level. The second dopant species can be the same as, or different than, the first dopant species. The second dopant species can provide the same conductivity type as the first dopant species. For example, the second dopant species could be phosphorus, and the second implant energy level could be approximately 300 to 400 KeV, for example, 320 KeV.

For some embodiments, additional dopant species can be implanted at different implant energy levels. Accordingly, the process can proceed to 789, where the patterned mask can again be isotropically etched to increase (e.g., further increase) the width of the opening. At 791, an additional dopant species can be implanted into the semiconductor material through the opening using an additional (e.g., third) implant energy level less than the previous (e.g., second) implant energy level. The additional dopant species can be the same as, or different than, the previous (e.g., second) dopant species. The additional dopant species can provide the same conductivity type as the previous dopant species. For example, the additional dopant species could be phosphorus, and the additional implant energy level could be approximately 100 to 200 KeV, for example, 150 KeV. This process can be repeated for one or more additional doped regions of semiconductor material. The implant energy level for implanting a dopant species at the surface of the semiconductor material can be selected in response to desired electrical properties of circuitry to be formed in that region of the semiconductor material.

For some embodiments, dopant species can be implanted at more than one implant energy level through an opening of a particular width, e.g., to increase the depth range of a resulting region of doped semiconductor material. FIG. 8 is a flow chart of a method of forming a portion of an integrated circuit device (as an extension of the method of FIG. 7), in accordance with such an embodiment. For example, proceeding from 787 of FIG. 7, a third dopant species can be implanted into the semiconductor material through the opening using a third implant energy level less than the second implant energy level. The third dopant species can be the same as, or different than, the second dopant species. The third dopant species can provide the same conductivity type as the second dopant species. For example, the third dopant species could be phosphorus, and the third implant energy level could be approximately 100 to 200 KeV, for example, 150 KeV. One or more additional dopant species could be implanted at successively decreasing implant energy levels prior to isotropically etching the patterned mask. The process then optionally proceeds to 789 of FIG. 7.

Conclusion

Although specific embodiments have been illustrated and described herein, it will be appreciated by those skilled in the art that many variations of the embodiments will be apparent. Accordingly, this application is intended to cover any adaptations or variations of the embodiments.
Systems, apparatuses, and methods for performing split-workgroup dispatch to multiple compute units are disclosed. A system includes at least a plurality of compute units, control logic, and a dispatch unit. The control logic monitors resource contention among the plurality of compute units and calculates a load-rating for each compute unit based on the resource contention. The dispatch unit receives workgroups for dispatch and determines how to dispatch workgroups to the plurality of compute units based on the calculated load-ratings. If a workgroup is unable to fit in a single compute unit based on the currently available resources of the compute units, the dispatch unit divides the workgroup into its individual wavefronts and dispatches wavefronts of the workgroup to different compute units. The dispatch unit determines how to dispatch the wavefronts to specific ones of the compute units based on the calculated load-ratings.
WHAT IS CLAIMED IS:

1. A processor comprising: a plurality of compute units comprising circuitry configured to execute instructions; and a dispatch unit comprising circuitry configured to dispatch workgroups to the plurality of compute units; wherein the processor is configured to: divide a workgroup into individual wavefronts for dispatch from the dispatch unit to separate compute units, responsive to determining that the workgroup does not fit within a single compute unit based on currently available resources of the plurality of compute units; and determine a process for dispatching individual wavefronts of the workgroup to the plurality of compute units based on reducing resource contention among the plurality of compute units.

2. The processor as recited in claim 1, wherein dividing the workgroup into individual wavefronts for dispatch from the dispatch unit to separate compute units comprises: dispatching a first wavefront of the workgroup to a first compute unit; and dispatching a second wavefront of the workgroup to a second compute unit, wherein the second wavefront is different from the first wavefront, and wherein the second compute unit is different from the first compute unit.

3. The processor as recited in claim 1, wherein the processor further comprises a scoreboard, and wherein the processor is further configured to: allocate an entry in the scoreboard to track wavefronts of the workgroup; track, in the entry, a number of wavefronts which have reached a given barrier; and send a signal to two or more compute units to allow wavefronts to proceed when the number of wavefronts which have reached the given barrier is equal to a total number of wavefronts in the workgroup.

4. The processor as recited in claim 3, wherein the two or more compute units are identified by a compute unit mask field in the entry.

5. The processor as recited in claim 1, wherein the processor is further configured to: monitor a plurality of performance counters to track resource contention among the plurality of compute units; calculate a load-rating for each compute unit and each resource based on the plurality of performance counters; and determine how to allocate wavefronts of the workgroup to the plurality of compute units based on calculated load-ratings.

6. The processor as recited in claim 5, wherein the processor is further configured to select a first compute unit as a candidate for dispatch responsive to determining the first compute unit has a lowest load-rating among the plurality of compute units for a first resource.

7. The processor as recited in claim 5, wherein the plurality of performance counters track two or more of vector arithmetic logic unit (VALU) execution bandwidth, scalar ALU (SALU) execution bandwidth, local data share (LDS) bandwidth, Load Store Bus bandwidth, Vector Register File (VRF) bandwidth, Scalar Register File (SRF) bandwidth, cache subsystem capacity, cache bandwidth, and translation lookaside buffer (TLB) bandwidth.

8. A method comprising: dividing a workgroup into individual wavefronts for dispatch to separate compute units responsive to determining that the workgroup does not fit within a single compute unit based on currently available resources of a plurality of compute units; and determining a process for dispatching individual wavefronts of the workgroup to the plurality of compute units based on reducing resource contention among the plurality of compute units.

9. 
The method as recited in claim 8, wherein dividing the workgroup into individual wavefronts for dispatch to separate compute units comprises: dispatching a first wavefront of the workgroup to a first compute unit; and dispatching a second wavefront of the workgroup to a second compute unit, wherein the second wavefront is different from the first wavefront, and wherein the second compute unit is different from the first compute unit.

10. The method as recited in claim 8, further comprising: allocating an entry in a scoreboard to track wavefronts of the workgroup; tracking, in the entry, a number of wavefronts which have reached a given barrier; and sending a signal to two or more compute units to allow wavefronts to proceed when the number of wavefronts which have reached the given barrier is equal to a total number of wavefronts in the workgroup.

11. The method as recited in claim 10, wherein the two or more compute units are identified by a compute unit mask field in the entry.

12. The method as recited in claim 8, further comprising: monitoring a plurality of performance counters to track resource contention among the plurality of compute units; calculating a load-rating for each compute unit and each resource based on the plurality of performance counters; and determining how to allocate wavefronts of the workgroup to the plurality of compute units based on calculated load-ratings.

13. The method as recited in claim 12, further comprising selecting a first compute unit as a candidate for dispatch responsive to determining the first compute unit has a lowest load-rating among the plurality of compute units for a first resource.

14. The method as recited in claim 12, wherein the plurality of performance counters track two or more of vector arithmetic logic unit (VALU) execution bandwidth, scalar ALU (SALU) execution bandwidth, local data share (LDS) bandwidth, Load Store Bus bandwidth, Vector Register File (VRF) bandwidth, Scalar Register File (SRF) bandwidth, cache subsystem capacity, cache bandwidth, and translation lookaside buffer (TLB) bandwidth.

15. A system comprising: a memory; a processor coupled to the memory; wherein the processor is configured to: divide a workgroup into individual wavefronts for dispatch to separate compute units responsive to determining that the workgroup does not fit within a single compute unit based on currently available resources of the plurality of compute units; and determine a process for dispatching individual wavefronts of the workgroup to the plurality of compute units based on reducing resource contention among the plurality of compute units.

16. The system as recited in claim 15, wherein dividing the workgroup into individual wavefronts for dispatch to separate compute units comprises: dispatching a first wavefront of the workgroup to a first compute unit; and dispatching a second wavefront of the workgroup to a second compute unit, wherein the second wavefront is different from the first wavefront, and wherein the second compute unit is different from the first compute unit.

17. The system as recited in claim 15, wherein the processor further comprises a scoreboard, and wherein the processor is further configured to: allocate an entry in the scoreboard to track wavefronts of the workgroup; track, in the entry, a number of wavefronts which have reached a given barrier; and send a signal to two or more compute units to allow wavefronts to proceed when the number of wavefronts which have reached the given barrier is equal to a total number of wavefronts in the workgroup.

18. 
The system as recited in claim 17, wherein the two or more compute units are identified by a compute unit mask field in the entry.

19. The system as recited in claim 15, wherein the processor is further configured to: monitor a plurality of performance counters to track resource contention among the plurality of compute units; calculate a load-rating for each compute unit and each resource based on the plurality of performance counters; and determine how to allocate wavefronts of the workgroup to the plurality of compute units based on calculated load-ratings.

20. The system as recited in claim 19, wherein the processor is further configured to select a first compute unit as a candidate for dispatch responsive to determining the first compute unit has a lowest load-rating among the plurality of compute units for a first resource.
FEEDBACK GUIDED SPLIT WORKGROUP DISPATCH FOR GPUS

BACKGROUND

[0001] This invention was made with Government support under the PathForward Project with Lawrence Livermore National Security, Prime Contract No. DE-AC52-07NA27344, Subcontract No. B620717 awarded by the United States Department of Energy. The United States Government has certain rights in this invention.

Description of the Related Art

[0002] A graphics processing unit (GPU) is a complex integrated circuit that performs graphics-processing tasks. For example, a GPU executes graphics-processing tasks required by an end-user application, such as a video-game application. GPUs are also increasingly being used to perform other tasks which are unrelated to graphics. In some implementations, the GPU is a discrete device or is included in the same device as another processor, such as a central processing unit (CPU).

[0003] In many applications, such as graphics processing in a GPU, a sequence of work-items, which can also be referred to as threads, is processed so as to output a final result. In one implementation, each processing element executes a respective instantiation of a particular work-item to process incoming data. A work-item is one of a collection of parallel executions of a kernel invoked on a compute unit. A work-item is distinguished from other executions within the collection by a global ID and a local ID. As used herein, the term "compute unit" is defined as a collection of processing elements (e.g., single-instruction, multiple-data (SIMD) units) that perform synchronous execution of a plurality of work-items. The number of processing elements per compute unit can vary from implementation to implementation. A subset of work-items in a workgroup that execute simultaneously together on a compute unit can be referred to as a wavefront, warp, or vector. The width of a wavefront is a characteristic of the hardware of the compute unit. As used herein, a collection of wavefronts is referred to as a "workgroup". GPUs dispatch work to the underlying compute resources at the granularity of a workgroup. Typically, a workgroup is dispatched when all of the resources for supporting the full workgroup are available on a single compute unit. These resources include at least vector and scalar registers, wavefront slots, and local data share (LDS) space. Current GPU hardware does not allow dispatching a workgroup to a given compute unit if the given compute unit does not have the resources required by all of the wavefronts in the workgroup. This leads to an increase in workgroup stalls due to resource unavailability. This also has a direct impact on the forward progress made by the application and reduces the wavefront level parallelism (WLP) and thread level parallelism (TLP) of the GPU.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:

[0005] FIG. 1 is a block diagram of one implementation of a computing system.

[0006] FIG. 2 is a block diagram of another implementation of a computing system.

[0007] FIG. 3 is a block diagram of one implementation of a GPU.

[0008] FIG. 4 is a block diagram of one implementation of a split dispatch of a workgroup in a GPU.

[0009] FIG. 5 is a block diagram of one implementation of workgroup dispatch control logic.

[0010] FIG. 6 is a block diagram of one implementation of a feedback-based compute unit selection mechanism.

[0011] FIG. 
7 is a block diagram of one implementation of a GPU with a split workgroup scoreboard connected to compute units.

[0012] FIG. 8 is a block diagram of one implementation of scoreboard tracking of a split dispatch workgroup.

[0013] FIG. 9 is a generalized flow diagram illustrating one implementation of a method for performing a split workgroup dispatch.

[0014] FIG. 10 is a generalized flow diagram illustrating one implementation of a method for using a scoreboard to track progress of the wavefronts of a split workgroup.

[0015] FIG. 11 is a generalized flow diagram illustrating one implementation of a method for selecting compute units for allocating wavefronts of a split workgroup.

[0016] FIG. 12 is a generalized flow diagram illustrating one implementation of a method for dispatching wavefronts of a workgroup so as to minimize resource contention among compute units.

DETAILED DESCRIPTION OF IMPLEMENTATIONS

[0017] In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various implementations may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.

[0018] Various systems, apparatuses, methods, and computer-readable mediums for performing a "split" (or alternatively "divided") workgroup dispatch to multiple compute units are disclosed herein. A processor (e.g., graphics processing unit (GPU)) includes at least a plurality of compute units, control logic, and a dispatch unit. The dispatch unit dispatches workgroups to the compute units of the GPU. Typically, a workgroup is dispatched when all of the resources for supporting the full workgroup are available on a single compute unit. These resources include at least vector and scalar registers, wavefront slots, and local data share (LDS) space. However, the hardware executes threads at the granularity of a wavefront, where a wavefront is a subset of the threads in a workgroup. Since the unit of hardware execution is smaller than the unit of dispatch, it is common for the hardware to deny a workgroup dispatch request while it would still be possible to support a subset of the wavefronts forming that workgroup. This discrepancy between dispatch and execution granularity limits the achievable TLP and WLP on the processor for a particular application.

[0019] In one implementation, the control logic monitors resource contention among the plurality of compute units and calculates a load-rating for each compute unit based on the resource contention. The dispatch unit receives workgroups for dispatch and determines how to dispatch workgroups to the plurality of compute units based on the calculated load-ratings. If a workgroup is unable to fit in a single compute unit based on the currently available resources of the compute units, the dispatch unit splits the workgroup into its individual wavefronts and dispatches wavefronts of the workgroup to different compute units, as illustrated by the sketch below. 
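The decision just described, namely attempting to place the whole workgroup on one compute unit and splitting it into wavefronts only when that fails, can be sketched as follows. This is a minimal illustration rather than the patented implementation: the type and function names are invented, and the resource accounting (notably how LDS space is handled when a workgroup is split across compute units) is deliberately simplified.

#include <vector>

// Per-compute-unit resources named in the description: vector/scalar
// registers, wavefront slots, and LDS space. Field names are illustrative.
struct CuResources {
    int vgprs = 0, sgprs = 0, wave_slots = 0, lds_bytes = 0;
};

static bool Fits(const CuResources& avail, const CuResources& need) {
    return avail.vgprs >= need.vgprs && avail.sgprs >= need.sgprs &&
           avail.wave_slots >= need.wave_slots &&
           avail.lds_bytes >= need.lds_bytes;
}

static void Take(CuResources& avail, const CuResources& need) {
    avail.vgprs -= need.vgprs;
    avail.sgprs -= need.sgprs;
    avail.wave_slots -= need.wave_slots;
    avail.lds_bytes -= need.lds_bytes;
}

// Returns one compute-unit index per wavefront, or an empty vector if the
// workgroup cannot currently be placed at all.
std::vector<int> Dispatch(std::vector<CuResources> cus,  // by value: scratch
                          const CuResources& per_wave, int num_waves) {
    // First preference: the whole workgroup on a single compute unit.
    const CuResources whole = {per_wave.vgprs * num_waves,
                               per_wave.sgprs * num_waves,
                               per_wave.wave_slots * num_waves,
                               per_wave.lds_bytes};  // LDS is per workgroup
    for (int cu = 0; cu < (int)cus.size(); ++cu)
        if (Fits(cus[cu], whole))
            return std::vector<int>(num_waves, cu);

    // Otherwise split: place wavefronts one at a time wherever they fit.
    std::vector<int> placement;
    for (int w = 0; w < num_waves; ++w) {
        int chosen = -1;
        for (int cu = 0; cu < (int)cus.size(); ++cu)
            if (Fits(cus[cu], per_wave)) { chosen = cu; break; }
        if (chosen < 0) return {};  // stall: even a split dispatch cannot fit
        Take(cus[chosen], per_wave);
        placement.push_back(chosen);
    }
    return placement;
}

In the split path, the first-fit candidate scan is exactly where the feedback-guided selection described next would instead order compute units by their calculated load-ratings.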
In one implementation, the dispatch unit determines how to dispatch the wavefronts to specific ones of the compute units based on the calculated load-ratings.

[0020] In one implementation, the control logic is coupled to a scoreboard to track the execution status of the wavefronts of the split workgroup. The control logic allocates a new entry in the scoreboard for a workgroup which has been divided into wavefronts dispatched to multiple compute units. When any wavefront reaches a barrier instruction, the corresponding compute unit sends an indication to the control logic. In response to receiving this indication, the control logic sets the barrier taken flag field in the corresponding scoreboard entry. Then, the compute units send signals when the other wavefronts reach the barrier. The control logic increments a barrier taken count in the corresponding scoreboard entry, and when the barrier taken count reaches the total number of wavefronts for the workgroup, the control logic sends signals to the compute units to allow the wavefronts to proceed. In one implementation, the scoreboard entry includes a compute unit mask to identify which compute units execute the wavefronts of the split workgroup.

[0021] Referring now to FIG. 1, a block diagram of one implementation of a computing system 100 is shown. In one implementation, computing system 100 includes at least processors 105A-N, input/output (I/O) interfaces 120, bus 125, memory controller(s) 130, network interface 135, and memory device(s) 140. In other implementations, computing system 100 includes other components and/or computing system 100 is arranged differently. Processors 105A-N are representative of any number of processors which are included in system 100.

[0022] In one implementation, processor 105A is a general purpose processor, such as a central processing unit (CPU). In this implementation, processor 105N is a data parallel processor with a highly parallel architecture. Data parallel processors include graphics processing units (GPUs), digital signal processors (DSPs), field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and so forth. In some implementations, processors 105B-N include multiple data parallel processors. Each data parallel processor is able to divide workgroups for dispatch to multiple compute units. Each data parallel processor is also able to dispatch workgroups to multiple compute units so as to minimize resource contention among the compute units. Techniques for implementing these and other features are described in more detail in the remainder of this disclosure.

[0023] Memory controller(s) 130 are representative of any number and type of memory controllers accessible by processors 105A-N and I/O devices (not shown) coupled to I/O interfaces 120. Memory controller(s) 130 are coupled to any number and type of memory device(s) 140. Memory device(s) 140 are representative of any number and type of memory devices. For example, the type of memory in memory device(s) 140 includes Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), NAND Flash memory, NOR flash memory, Ferroelectric Random Access Memory (FeRAM), or others.

[0024] I/O interfaces 120 are representative of any number and type of I/O interfaces (e.g., peripheral component interconnect (PCI) bus, PCI-Extended (PCI-X), PCIE (PCI Express) bus, gigabit Ethernet (GBE) bus, universal serial bus (USB)). Various types of peripheral devices (not shown) are coupled to I/O interfaces 120. 
Such peripheral devices include (but are not limited to) displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth. Network interface 135 is used to receive and send network messages across a network.

[0025] In various implementations, computing system 100 is a computer, laptop, mobile device, game console, server, streaming device, wearable device, or any of various other types of computing systems or devices. It is noted that the number of components of computing system 100 varies from implementation to implementation. For example, in other implementations, there are more or fewer of each component than the number shown in FIG. 1. It is also noted that in other implementations, computing system 100 includes other components not shown in FIG. 1. Additionally, in other implementations, computing system 100 is structured in other ways than shown in FIG. 1.

[0026] Turning now to FIG. 2, a block diagram of another implementation of a computing system 200 is shown. In one implementation, system 200 includes GPU 205, system memory 225, and local memory 230. System 200 also includes other components which are not shown to avoid obscuring the figure. GPU 205 includes at least command processor 235, control logic 240, dispatch unit 250, compute units 255A-N, memory controller 220, global data share 270, level one (L1) cache 265, and level two (L2) cache 260. In other implementations, GPU 205 includes other components, omits one or more of the illustrated components, has multiple instances of a component even if only one instance is shown in FIG. 2, and/or is organized in other suitable manners. For example, in another implementation, GPU 205 includes multiple dispatch units.

[0027] In various implementations, computing system 200 implements any of various types of software applications. Command processor 235 receives commands from a host CPU (not shown) and uses dispatch unit 250 to issue commands to compute units 255A-N. Threads within kernels executing on compute units 255A-N read and write data to global data share 270, L1 cache 265, and L2 cache 260 within GPU 205. Although not shown in FIG. 2, compute units 255A-N also include one or more caches and/or local memories within each compute unit 255A-N.

[0028] Command processor 235 performs a variety of tasks for GPU 205. For example, command processor 235 schedules compute tasks, data movement operations through direct memory access (DMA), and various post-kernel clean-up activities. Control logic 240 monitors resource contention among the resources of GPU 205 and helps dispatch unit 250 determine how to dispatch wavefronts to compute units 255A-N to minimize resource contention. In one implementation, control logic 240 includes scoreboard 245 and performance counters (PCs) 247A-N for monitoring the resource contention among compute units 255A-N. 
Performance counters 247A-N are representative of any number of performance counters for monitoring resources such as vector arithmetic logic unit (VALU) execution bandwidth, scalar ALU (SALU) execution bandwidth, local data share (LDS) bandwidth, Load Store Bus bandwidth, Vector Register File (VRF) bandwidth, Scalar Register File (SRF) bandwidth, the cache subsystem capacity and bandwidth including the L1, L2, and L3 caches and TLBs, and other resources.

[0029] Control logic 240 uses scoreboard 245 to monitor the wavefronts of split workgroups that are dispatched to multiple compute units 255A-N. For example, scoreboard 245 tracks the different wavefronts of a given split workgroup that are executing on multiple different compute units 255A-N. Scoreboard 245 includes an entry for the given split workgroup, and the entry identifies the number of wavefronts of the given split workgroup, the specific compute units on which the wavefronts are executing, the workgroup ID, and so on. The scoreboard entry also includes a barrier sync enable field to indicate when any wavefront has reached a barrier. When a wavefront reaches a barrier, the compute unit will cause the wavefront to stall. The scoreboard entry includes a barrier taken count to track the number of wavefronts that have reached the barrier. When the barrier taken count reaches the total number of wavefronts of the given workgroup, control logic 240 notifies the relevant compute units that the wavefronts are now allowed to proceed.

[0030] In one implementation, system 200 stores two compiled versions of kernel 227 in system memory 225. For example, one compiled version of kernel 227A includes barrier instructions and uses scoreboard 245 as a central mechanism to synchronize wavefronts that are executed on separate compute units 255A-N. A second compiled version of kernel 227B uses global data share 270 instructions or atomic operations to memory to synchronize wavefronts that are executed on separate compute units 255A-N. Both kernel 227A and kernel 227B are available in the application's binary and available at runtime. Command processor 235 and control logic 240 decide at runtime which kernel to use when dispatching the wavefronts of the corresponding workgroup to compute units 255A-N. The decision on which kernel (227A or 227B) to use is made based on one or more of power consumption targets, performance targets, resource contention among compute units 255A-N, and/or other factors.

[0031] Referring now to FIG. 3, a block diagram of one implementation of a GPU 300 is shown. For GPU 300, a dispatch of workgroup (WG) 302 is being considered. It should be understood that the example of workgroup 302 having four wavefronts is shown merely for illustrative purposes. In other implementations, workgroups have other numbers of wavefronts besides four. Additionally, the example of GPU 300 having four compute units 310-313 is also intended merely as an illustration of one implementation. In other implementations, GPU 300 includes other numbers of compute units besides four.

[0032] The current allocation of compute units 310-313 is shown on the right side of FIG. 3. Occupied resource holes are shown as shaded rectangles while available resource holes are shown as clear rectangles. As is seen from the current allocation of compute units 310-313, none of the individual compute units 310-313 has enough available resources to fit all four wavefronts of workgroup 302. 
[0031] Referring now to FIG. 3, a block diagram of one implementation of a GPU 300 is shown. For GPU 300, a dispatch of workgroup (WG) 302 is being considered. It should be understood that the example of workgroup 302 having four wavefronts is shown merely for illustrative purposes. In other implementations, workgroups have other numbers of wavefronts besides four. Additionally, the example of GPU 300 having four compute units 310-313 is also intended merely as an illustration of one implementation. In other implementations, GPU 300 includes other numbers of compute units besides four.

[0032] The current allocation of compute units 310-313 is shown on the right side of FIG. 3. Occupied resource holes are shown as shaded rectangles while available resource holes are shown as clear rectangles. As is seen from the current allocation of compute units 310-313, none of the individual compute units 310-313 has enough available resources to fit all four wavefronts of workgroup 302. Accordingly, for a traditional GPU 300, dispatch of workgroup 302 would be stalled.

[0033] Turning now to FIG. 4, a block diagram of one implementation of a split dispatch of a workgroup 402 in GPU 400 is shown. GPU 400 is similar to GPU 300 shown in FIG. 3, with GPU 400 including four compute units 410-413 and workgroup 402 including four wavefronts. However, it should be understood that this example is merely indicative of one implementation. In other implementations, GPU 400 and/or workgroup 402 have other numbers of components.

[0034] In contrast to the example shown in FIG. 3, however, GPU 400 is able to divide workgroup 402 into different combinations of wavefront splits and allocate the wavefronts to different compute units 410-413 to enable the immediate dispatch of workgroup 402, with the actual wavefront split varying from implementation to implementation. In one implementation, wavefronts 0-2 of workgroup 402 are allocated to compute unit 412 and wavefront 3 of workgroup 402 is allocated to compute unit 410. Dividing the allocation in this way allows workgroup 402 to be allocated immediately rather than waiting until a single compute unit has enough resources for the entirety of workgroup 402. This results in higher wavefront level parallelism (WLP) and thread level parallelism (TLP) for applications executing on GPU 400.

[0035] Referring now to FIG. 5, a block diagram of one implementation of workgroup dispatch control logic 500 is shown. In one implementation, the components of workgroup dispatch control logic 500 are included in control logic 240 of GPU 205 (of FIG. 2). Workgroup dispatch control logic 500 includes split workgroup dispatcher 506. Split workgroup dispatcher 506 includes scoreboard checker 508, which receives a trigger from workgroup allocation fail detection unit 502 if a workgroup is unable to fit inside a single compute unit. Scoreboard checker 508 queries fits detection unit 510 to determine if there are enough available resources in the compute units of the host GPU to fit the wavefronts of a given workgroup. In one implementation, fits detection unit 510 uses a maximum fit (Max Fit) 510A approach, an equal fit (Equal Fit) 510B approach, or a programmable fit 510C approach for fitting wavefronts to compute units. The maximum fit 510A approach attempts to fit wavefronts to the fewest number of compute units by allocating multiple wavefronts to the same compute unit. The equal fit 510B approach attempts to spread wavefronts equally across all of the compute units. The programmable fit 510C approach attempts to fit wavefronts to compute units according to a programmable policy. In other implementations, fits detection unit 510 uses other approaches for fitting wavefronts to compute units. If there are enough available resources within multiple compute units, then compute unit selector 512 determines how to allocate the given workgroup to the available compute units based on a feedback mechanism. This feedback mechanism uses the values of performance counters to track the resource utilization and resource contention on the available compute units of the GPU.
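A minimal sketch of the three fit approaches (510A-C) follows, assuming each compute unit's available capacity can be summarized as a number of free wavefront slots; the document describes the intent of each approach but not a concrete algorithm.

```python
# Sketch of Max Fit, Equal Fit, and Programmable Fit over per-CU free
# wavefront slots. Returning None models a "no fit" result that would
# stall the dispatch until resources free up.

from typing import Callable, Dict, Optional

def max_fit(free_slots: Dict[int, int], n_waves: int) -> Optional[Dict[int, int]]:
    """Pack waves onto as few CUs as possible (fullest candidates first)."""
    plan, remaining = {}, n_waves
    for cu in sorted(free_slots, key=free_slots.get, reverse=True):
        take = min(free_slots[cu], remaining)
        if take:
            plan[cu] = take
            remaining -= take
        if remaining == 0:
            return plan
    return None  # even a split dispatch does not fit yet

def equal_fit(free_slots: Dict[int, int], n_waves: int) -> Optional[Dict[int, int]]:
    """Spread waves round-robin across every CU that still has capacity."""
    plan, remaining = {cu: 0 for cu in free_slots}, n_waves
    while remaining:
        progressed = False
        for cu in free_slots:
            if plan[cu] < free_slots[cu] and remaining:
                plan[cu] += 1
                remaining -= 1
                progressed = True
        if not progressed:
            return None
    return {cu: n for cu, n in plan.items() if n}

def programmable_fit(free_slots, n_waves, policy: Callable):
    """Defer to a driver- or OS-supplied placement policy."""
    return policy(free_slots, n_waves)

slots = {0: 1, 1: 0, 2: 3, 3: 2}
print(max_fit(slots, 4))    # {2: 3, 3: 1}
print(equal_fit(slots, 4))  # {0: 1, 2: 2, 3: 1}
```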
[0036] In one implementation, compute unit selector 512 includes a selection mechanism as shown in block 516. Compute unit (CU) selector 512 selects one of multiple selection algorithms to use in determining how to allocate the wavefronts of a given workgroup to the available compute units. For example, as shown in block 516, compute unit selector 512 selects from three separate algorithms to determine how to allocate wavefronts of the given workgroup to the available compute units. In other implementations, compute unit selector 512 selects from other numbers of algorithms besides three.

[0037] A first algorithm is a round-robin, first-come-first-served (RR-FCFS) algorithm for choosing compute units to allocate the wavefronts of the given workgroup. A second algorithm is a least-compute-stalled compute unit algorithm (FB-COMP), which uses feedback from performance counters to determine which of the compute units are the least stalled out of all of the available compute units. Split workgroup dispatcher 506 then allocates wavefronts to the compute units identified as the least stalled. A third algorithm attempts to allocate wavefronts of the given workgroup to the least memory-stalled compute units (FB-MEM). Split workgroup dispatcher 506 uses the performance counters to determine which compute units are the least memory stalled, and then split workgroup dispatcher 506 allocates wavefronts to these identified compute units. In other implementations, other types of algorithms are employed.

[0038] Depending on the implementation, the type of algorithm that is used is dynamically adjusted by software and/or hardware. In one implementation, an administrator selects the type of algorithm that is used. In another implementation, a user application selects the type of algorithm that is used. The user application has a fixed policy of which algorithm to select, or the user application dynamically adjusts the type of algorithm based on operating conditions. In a further implementation, the operating system (OS) or a driver selects the type of algorithm that is used for allocating wavefronts of workgroups to the available compute units. In other implementations, other techniques of selecting the split workgroup dispatch algorithm are possible and are contemplated.

[0039] After a given workgroup has been divided into individual wavefronts and allocated to multiple compute units, split workgroup dispatcher 506 allocates an entry in scoreboard 514 for the given workgroup. Split workgroup dispatcher 506 then uses the scoreboard entry to track the execution progress of these wavefronts on the different compute units. Scoreboard 514 has any number of entries depending on the implementation. One example of a scoreboard entry is shown in box 504. In one implementation, each scoreboard entry of scoreboard 514 includes a virtual machine identifier (VMID) field, a global workgroup ID field, a workgroup dimension field, a number of wavefronts in the workgroup field, a compute unit mask field to identify which compute units have been allocated wavefronts from the workgroup, a barrier count field to track the number of wavefronts that have reached a barrier, and a barrier synchronization enable field to indicate that at least one wavefront has reached a barrier. In other implementations, scoreboard entries include other fields and/or are organized in other suitable manners.
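The fields just listed map naturally onto a small record. The following sketch mirrors them one-for-one; the field types and the bitmask encoding of the compute unit mask are assumptions, since the document lists the fields but not their layout.

```python
# Sketch of a scoreboard entry with the fields described above.

from dataclasses import dataclass

@dataclass
class ScoreboardEntry:
    vmid: int                          # virtual machine identifier
    global_wg_id: int                  # global workgroup ID
    wg_dimension: tuple                # workgroup dimensions, e.g. (x, y, z)
    num_wavefronts: int                # wavefronts in the split workgroup
    cu_mask: int                       # bit i set => CU i holds wavefronts of this WG
    barrier_count: int = 0             # wavefronts that have reached the barrier
    barrier_sync_enable: bool = False  # set once any wavefront hits a barrier

entry = ScoreboardEntry(vmid=3, global_wg_id=42, wg_dimension=(64, 1, 1),
                        num_wavefronts=6, cu_mask=0b0100_0001)
print(bin(entry.cu_mask))  # CUs 0 and 6 were allocated wavefronts
```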
[0040] Turning now to FIG. 6, a block diagram of one implementation of a feedback-based compute unit (CU) selection mechanism 610 is shown. Feedback-based CU selector 610 includes workgroup (WG) allocation request buffer 615 and performance monitor module 620. In other implementations, feedback-based CU selector 610 includes other components and/or is structured in other suitable manners. WG allocation request buffer 615 stores received workgroups for allocation. WG allocation request buffer 615 conveys an allocation request to performance monitor module 620, which determines how to allocate the wavefronts of a given workgroup to the available compute units (CUs) of the GPU. Performance monitor module 620 conveys a fit or no-fit indication to WG allocation request buffer 615 based on the number of wavefronts of the given workgroup and the currently available CU resources. When WG allocation request buffer 615 receives a fit indication from performance monitor module 620, WG allocation request buffer 615 sends an allocation confirmation to the dispatch unit (not shown).

[0041] In one implementation, performance monitor module 620 collects values from various performance counters and implements CU-level and SIMD-level tables to track these values. In various implementations, the performance counters monitor resources such as vector arithmetic logic unit (VALU) execution bandwidth, scalar ALU (SALU) execution bandwidth, local data share (LDS) bandwidth, Load Store Bus bandwidth, Vector Register File (VRF) bandwidth, Scalar Register File (SRF) bandwidth, and the cache subsystem capacity and bandwidth including the L1, L2, and L3 caches and TLBs. CU performance (perf) comparator 635 includes logic for determining the load-rating of each CU for the given dispatch-ID (e.g., kernel-ID), and CU perf comparator 635 selects a preferred CU destination based on the calculated load-ratings. In one implementation, the load-rating is calculated as a percentage of the CU that is currently occupied or a percentage of a given resource that is currently being used or is currently allocated. In one implementation, the load-ratings of the different resources of the CU are added together to generate a load-rating for the CU. In one implementation, different weighting factors are applied to the various load-ratings of the different resources to generate a load-rating for the CU as a whole. Shader processor input resource allocation (SPI-RA) unit 640 allocates resources for a given workgroup on the preferred CU(s).
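A minimal sketch of the weighted load-rating computation follows. The weight values are assumptions; the document says weighting factors "are applied" to the per-resource ratings without specifying them.

```python
# Sketch of the CU load-rating: per-resource utilization percentages
# combined into one CU-level rating, with the least-loaded CU preferred.

RESOURCE_WEIGHTS = {   # hypothetical weighting factors (must sum to 1.0 here)
    "valu_bw": 0.30,
    "salu_bw": 0.10,
    "lds_bw": 0.20,
    "vrf_bw": 0.20,
    "cache_bw": 0.20,
}

def cu_load_rating(utilization: dict) -> float:
    """Weighted sum of per-resource load-ratings (each a percentage, 0-100)."""
    return sum(RESOURCE_WEIGHTS[r] * utilization[r] for r in RESOURCE_WEIGHTS)

per_cu = {
    0: {"valu_bw": 90, "salu_bw": 20, "lds_bw": 70, "vrf_bw": 60, "cache_bw": 80},
    1: {"valu_bw": 30, "salu_bw": 10, "lds_bw": 20, "vrf_bw": 25, "cache_bw": 40},
}
preferred = min(per_cu, key=lambda cu: cu_load_rating(per_cu[cu]))
print(preferred, cu_load_rating(per_cu[preferred]))  # -> 1 27.0
```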
[0042] Referring now to FIG. 7, a block diagram of one implementation of a GPU 700 with a split workgroup (WG) scoreboard 720 connected to compute units 710A-N is shown. In one implementation, GPU 700 includes at least compute units 710A-N and split WG dispatcher 715. GPU 700 also includes any number of various other components which are not shown to avoid obscuring the figure. Compute units 710A-N are representative of any number and type of compute units. Each compute unit 710A-N includes a plurality of single-instruction, multiple-data (SIMD) units for executing work-items in parallel. In one implementation, each compute unit 710A-N includes four SIMD units. In other implementations, each compute unit 710A-N includes other numbers of SIMD units besides four.

[0043] When a given workgroup is divided for dispatch on multiple compute units 710A-N, split WG dispatcher 715 uses split WG scoreboard 720 to track the execution of the wavefronts of the given workgroup on compute units 710A-N. A compute unit 710 sends the barrier sync enable message to scoreboard 720 when any wavefront of a split WG reaches a barrier instruction. The barrier taken count field of the scoreboard entry is incremented for each wavefront that reaches this barrier. When all waves of a split WG have reached the barrier, scoreboard 720 informs each compute unit 710A-N identified in the split scoreboard compute unit mask to allow the waves to continue execution. In one implementation, scoreboard 720 informs each compute unit 710A-N by sending a barrier taken message.

[0044] Turning now to FIG. 8, a block diagram of one implementation of scoreboard tracking of a split dispatch workgroup is shown. In one implementation, a workgroup 805 with six wavefronts is dispatched to multiple compute units of compute units 810A-H of GPU 800. It is noted that the example of workgroup 805 having six wavefronts is merely indicative of one implementation. In other implementations, workgroup 805 has other numbers of wavefronts. It is also noted that the example of GPU 800 having eight compute units 810A-H is merely indicative of one implementation. In other implementations, GPU 800 has other numbers of compute units.

[0045] As the six wavefronts of workgroup 805 are unable to fit on any single compute unit of compute units 810A-H, these six wavefronts are divided and allocated to multiple compute units. As shown in FIG. 8, wavefronts 0-1 are allocated to compute unit 810A, wavefronts 2-3 are allocated to compute unit 810F, and wavefronts 4-5 are allocated to compute unit 810G. In other implementations, the wavefronts of workgroup 805 are allocated in other suitable manners to compute units 810A-H such that all six wavefronts of workgroup 805 are allocated to available resource holes of compute units 810A-H.

[0046] In one implementation, scoreboard 820 is used to track the execution of wavefronts of workgroup 805 on the compute units 810A-H. The scoreboards 820A-F shown at the bottom of FIG. 8 are meant to represent scoreboard 820 at six different points in time, with scoreboard 820A representing an earlier point in time, scoreboard 820F representing a later point in time, and the other scoreboards 820B-E representing points in time in between. Scoreboard 820A represents the status of workgroup 805 immediately after allocation of the wavefronts to compute units 810A-H. The virtual machine identifier (VMID) field specifies the unique application context identifier as specified by the operating system. Multiple applications are able to execute on a GPU with distinct VMIDs. The workgroup (WG) ID and global WG ID fields identify the workgroup 805, while the number of waves field specifies the number of wavefronts in workgroup 805. The compute unit mask indicates which compute units 810A-H have been allocated wavefronts from workgroup 805. The barrier count field indicates how many wavefronts have reached a barrier when the barrier sync enable field is set. Scoreboard 820A indicates that a barrier has not yet been reached by any of the wavefronts of workgroup 805.

[0047] Scoreboard 820B indicates that a first wavefront has reached a barrier instruction. As a result of a wavefront reaching a barrier instruction, a corresponding SIMD unit of the compute unit sends a barrier sync enable indication to scoreboard 820B. In response to receiving the barrier sync enable indication, the barrier sync enable field of scoreboard 820B is set. Scoreboard 820C represents a point in time subsequent to the point in time represented by scoreboard 820B. It is assumed for the purposes of this discussion that wavefronts wv0 and wv1 have hit the barrier by this subsequent point in time.
In response to wavefronts wv0 and wv1 hitting the barrier, SIMD units in compute unit 810A send barrier count update indications to scoreboard 820C. As a result of receiving the barrier count update indications, scoreboard 820C increments the barrier count to 2 for workgroup 805.

[0048] Scoreboard 820D represents a point in time subsequent to the point in time represented by scoreboard 820C. It is assumed for the purposes of this discussion that wavefronts wv4 and wv5 have hit the barrier by this subsequent point in time. In response to wavefronts wv4 and wv5 hitting the barrier, SIMD units in compute unit 810G send barrier count update indications to scoreboard 820D. Scoreboard 820D increments the barrier count to 4 for workgroup 805 after receiving the barrier count update indications from compute unit 810G.

[0049] Scoreboard 820E represents a point in time subsequent to the point in time represented by scoreboard 820D. It is assumed for the purposes of this discussion that wavefronts wv2 and wv3 have hit the barrier by this subsequent point in time. In response to wavefronts wv2 and wv3 hitting the barrier, SIMD units in compute unit 810F send barrier count update indications to scoreboard 820E. As a result of receiving the barrier count update indications, scoreboard 820E increments the barrier count to 6 for workgroup 805.

[0050] Scoreboard 820F represents a point in time subsequent to the point in time represented by scoreboard 820E. Since all of the waves have hit the barrier by this point in time, as indicated by the barrier count field equaling the number of wavefronts, control logic signals all of the compute units identified in the compute unit mask field that the barrier has been taken for all wavefronts of workgroup 805. Consequently, the SIMD units in these compute units are able to let the wavefronts proceed with execution. The barrier count field and barrier sync enable field are cleared after the barrier has been taken by all wavefronts of workgroup 805, as shown in the entry for scoreboard 820F.
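The barrier bookkeeping just walked through (scoreboards 820A-820F) can be captured in a few lines. The event interface below is a simplification; the document does not specify the message formats exchanged between the SIMD units and the scoreboard.

```python
# Sketch of split-workgroup barrier tracking, replaying the FIG. 8 sequence.

class SplitWGScoreboard:
    def __init__(self, num_wavefronts: int, cu_mask: int):
        self.num_wavefronts = num_wavefronts
        self.cu_mask = cu_mask            # CUs holding wavefronts of this WG
        self.barrier_count = 0
        self.barrier_sync_enable = False

    def on_barrier_reached(self, n_waves: int = 1) -> bool:
        """A CU reports n_waves of this workgroup stalled at the barrier.

        Returns True once every wavefront has arrived, at which point the
        scoreboard would send a 'barrier taken' message to each CU in
        cu_mask and clear the barrier fields.
        """
        self.barrier_sync_enable = True
        self.barrier_count += n_waves
        if self.barrier_count == self.num_wavefronts:
            self.barrier_count = 0
            self.barrier_sync_enable = False
            return True   # release wavefronts on all CUs in cu_mask
        return False

# Six wavefronts on CUs 810A (bit 0), 810F (bit 5), and 810G (bit 6):
sb = SplitWGScoreboard(num_wavefronts=6, cu_mask=0b0110_0001)
print(sb.on_barrier_reached(2))  # wv0, wv1 from 810A -> False
print(sb.on_barrier_reached(2))  # wv4, wv5 from 810G -> False
print(sb.on_barrier_reached(2))  # wv2, wv3 from 810F -> True (barrier taken)
```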
[0051] Referring now to FIG. 9, one implementation of a method 900 for performing a split workgroup dispatch is shown. For purposes of discussion, the steps in this implementation and those of FIGS. 10-12 are shown in sequential order. However, it is noted that in various implementations of the described methods, one or more of the elements described are performed concurrently, in a different order than shown, or are omitted entirely. Other additional elements are also performed as desired. Any of the various systems or apparatuses described herein are configured to implement method 900.

[0052] A dispatch unit of a GPU receives a workgroup for dispatch that will not fit into the available resources of a single compute unit (block 905). Next, the dispatch unit determines if the individual wavefronts of the workgroup are able to fit into multiple compute units if the workgroup is divided (conditional block 910). If the wavefronts are not able to fit in the available compute unit resources despite the workgroup being divided into individual wavefronts (conditional block 910, "no" leg), then the dispatch unit waits until more compute unit resources become available (block 915). After block 915, method 900 returns to conditional block 910. If the wavefronts are able to fit in the available compute unit resources when the workgroup is divided (conditional block 910, "yes" leg), then the dispatch unit splits allocation of the workgroup across multiple compute units (block 920).

[0053] After block 920, the GPU tracks progress of the wavefronts of the split workgroup using a scoreboard (block 925). One example of using a scoreboard to track progress of the wavefronts of a split workgroup is described in more detail below in the discussion regarding method 1000 (of FIG. 10). After block 925, method 900 ends.

[0054] Turning now to FIG. 10, one implementation of a method 1000 for using a scoreboard to track progress of the wavefronts of a split workgroup is shown. A dispatch unit allocates a new scoreboard entry for a workgroup that has been divided and allocated as individual wavefronts to multiple compute units of a GPU (block 1005). Also, control logic sets a compute unit mask field of the scoreboard entry to indicate the compute units to which the individual wavefronts of the workgroup have been allocated (block 1010). Then, execution of the individual wavefronts of the workgroup is initiated (block 1015). If the wavefronts of the workgroup need to be synchronized (conditional block 1017, "yes" leg), then the compute units monitor whether their wavefronts have reached a barrier (conditional block 1020). If the wavefronts of the workgroup do not need to be synchronized (conditional block 1017, "no" leg), then the control logic and compute units allow the wavefronts to execute independently of each other (block 1018). After block 1018, method 1000 ends.

[0055] If any of the wavefronts have reached a barrier (conditional block 1020, "yes" leg), then a corresponding compute unit sends an indication to the scoreboard (block 1025). In response to receiving the indication, the barrier sync enable field of the scoreboard entry is set (block 1030). If none of the wavefronts have reached a barrier (conditional block 1020, "no" leg), then the GPU continues to monitor the progress of the wavefronts (block 1035). After block 1035, method 1000 returns to conditional block 1020.

[0056] After block 1030, the control logic increments the barrier taken count in the scoreboard entry for each wavefront that hits the given barrier (block 1040). Next, the control logic determines if the barrier taken count has reached the total number of wavefronts in the workgroup (conditional block 1045). If the barrier taken count has reached the total number of wavefronts in the workgroup (conditional block 1045, "yes" leg), then the control logic resets the barrier taken count and barrier sync enable fields of the scoreboard entry and signals to the compute units identified by the compute unit mask field to allow the wavefronts to proceed (block 1050). If the barrier taken count has not reached the total number of wavefronts in the workgroup (conditional block 1045, "no" leg), then method 1000 returns to conditional block 1020. If the final barrier has been reached (conditional block 1055, "yes" leg), then method 1000 ends. Otherwise, if the final barrier has not been reached (conditional block 1055, "no" leg), then method 1000 returns to conditional block 1020.

[0057] Referring now to FIG. 11, one implementation of a method 1100 for selecting compute units for allocating wavefronts of a split workgroup is shown. Control logic monitors resource utilization and contention across the compute units of a GPU (block 1105).
The resources that are being monitored include at least vector arithmetic logic unit (VALU) execution bandwidth, scalar ALU (SALU) execution bandwidth, local data share (LDS) bandwidth, Load Store Bus bandwidth, Vector Register File (VRF) bandwidth, Scalar Register File (SRF) bandwidth, and the cache subsystem capacity and bandwidth including the L1, L2, and L3 caches and TLBs. In one implementation, the control logic uses performance counters to monitor the resource utilization and contention across the compute units of the GPU.

[0058] Next, a dispatch unit receives a workgroup for dispatch (block 1110). The dispatch unit determines how to dispatch the wavefronts of the workgroup to the various compute units based on the resource contention and the predicted behavior of the workgroup (block 1115). Depending on the implementation, the dispatch unit uses any of various policies for determining how to dispatch the wavefronts of the workgroup to the compute units of the GPU, and decides whether to perform a conventional dispatch, a single-unit workgroup dispatch, or a split-workgroup dispatch. Additionally, policies which are used include (but are not limited to) a maximum-fit policy, an equal-fit policy, and a programmable-fit policy. The maximum-fit policy assigns waves to the minimum number of compute units. The equal-fit policy tries to spread the wavefronts of the split workgroup equally among candidate compute units. The programmable-fit policy spreads wavefronts of the split workgroup across compute units so as to minimize the load-rating across the compute units. After block 1115, method 1100 ends.

[0059] Turning now to FIG. 12, one implementation of a method 1200 for dispatching wavefronts of a workgroup so as to minimize resource contention among compute units is shown. Control logic of a GPU determines which resources to track and configures these resources via a programmable context setting register (block 1205). In one implementation, the programmable context setting register is updated by a kernel mode driver. Next, the control logic tracks resource utilization with a plurality of performance counters (block 1210). Examples of resources being monitored by the performance counters include at least VALU execution bandwidth, SALU execution bandwidth, LDS bandwidth, Load Store Bus bandwidth, VRF bandwidth, SRF bandwidth, and the cache subsystem capacity and bandwidth including the L1, L2, and L3 caches and TLBs.

[0060] Then, the control logic samples the performance counters at programmable intervals (block 1215). Next, the control logic calculates a load-rating of each compute unit for each selected resource (block 1220). In one implementation, the control logic calculates a load-rating of each compute unit for each selected resource per dispatch ID. In one implementation, a dispatch ID is a monotonically increasing number which identifies the kernel dispatched for execution by the command processor. In one implementation, each application context or VMID has its own monotonically increasing dispatch ID counter. Then, a dispatch unit detects a new workgroup to dispatch to the compute units of the GPU (block 1225). The dispatch unit checks the load-rating of each compute unit (block 1230). Next, the dispatch unit selects the compute unit(s) with the lowest load-rating for each selected resource as candidate(s) for dispatch (block 1235). Then, the dispatch unit dispatches the wavefronts of the workgroup to the selected compute unit(s) (block 1240). After block 1240, method 1200 ends.
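A minimal sketch of this feedback loop follows: sample counters at a programmable interval, derive per-CU load-ratings, and pick dispatch candidates under a monotonically increasing dispatch ID. The sampling interval, the rating math, and the random stand-in for counter reads are all assumptions.

```python
# Sketch of the method-1200 feedback loop (blocks 1215-1235).

import itertools
import random

SAMPLE_INTERVAL_US = 100          # programmable sampling interval (assumed)
dispatch_id = itertools.count(1)  # monotonically increasing kernel dispatch ID

def sample_counters(cu: int) -> float:
    """Stand-in for reading a CU's performance counters (block 1215)."""
    return random.random()        # aggregate utilization in [0, 1)

def pick_candidates(num_cus: int, k: int = 1):
    # Block 1220: derive a load-rating per CU from the sampled counters.
    ratings = {cu: sample_counters(cu) for cu in range(num_cus)}
    # Blocks 1230-1235: choose the CU(s) with the lowest load-rating.
    return sorted(ratings, key=ratings.get)[:k], next(dispatch_id)

candidates, did = pick_candidates(num_cus=8, k=2)
print(f"dispatch {did} -> CUs {candidates}")
```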
[0061] In various implementations, program instructions of a software application are used to implement the methods and/or mechanisms described herein. For example, program instructions executable by a general or special purpose processor are contemplated. In various implementations, such program instructions are represented by a high-level programming language. In other implementations, the program instructions are compiled from a high-level programming language to a binary, intermediate, or other form. Alternatively, program instructions are written that describe the behavior or design of hardware. Such program instructions are represented by a high-level programming language, such as C. Alternatively, a hardware design language (HDL) such as Verilog is used. In various implementations, the program instructions are stored on any of a variety of non-transitory computer readable storage mediums. The storage medium is accessible by a computing system during use to provide the program instructions to the computing system for program execution. Generally speaking, such a computing system includes at least one or more memories and one or more processors configured to execute program instructions.

[0062] It should be emphasized that the above-described implementations are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Techniques are described to transmit commands to a display device. The commands can be transmitted in header byte fields of secondary data packets. The commands can be used to cause a target device to capture a frame, enter or exit self-refresh mode, or reduce the power use of a connection. In addition, a request to exit main link standby mode can cause the target to enter training mode without an explicit command to exit the main link standby mode.
1. An apparatus for forming a secondary data packet, comprising: logic to form at least a first bit and a second bit in the secondary data packet, wherein the first bit is to indicate whether to enter self-refresh, the second bit is to indicate whether to store a frame for display into a frame buffer, and the secondary data packet complies with the DisplayPort specification.
2. The apparatus of claim 1, wherein the first bit is to indicate whether to enter self-refresh or to remain in normal operation.
3. The apparatus of claim 1, wherein the second bit is to indicate whether the transmitted frame for display is the same as the previously transmitted frame for display.
4. The apparatus of claim 1, comprising: logic to set the first bit to indicate self-refresh based on at least one of: a change in a system timer period, triangle or polygon rendering, any processor core not being in a low power mode, any mouse activity, use of a vertical blanking interrupt, or an overlay being enabled.
5. The apparatus of claim 1, comprising: logic to power down a component, the component comprising one or more of the following: a display phase locked loop (PLL), a display panel, a display pipe, or a display interface.
6. The apparatus of claim 1, comprising: an interface communicatively coupled to the logic to form at least a first bit and a second bit in a secondary data packet; a display controller communicatively coupled to the interface; and a frame buffer, the display controller to cause a frame for display to be stored into the frame buffer in response to an indication to enter self-refresh and an indication to store a frame for display into the frame buffer.
7. The apparatus of claim 6, comprising: a display communicatively coupled to the frame buffer, the display to display at least one frame for display from the frame buffer.
8. The apparatus of claim 1, wherein the DisplayPort specification comprises DisplayPort Specification Version 1.1a.
9. The apparatus of claim 1, wherein each of the logic comprises any one or a combination of the following: one or more integrated circuits, hardwired logic, software executed by a microprocessor, or a field programmable gate array.
10. A computer implemented method for forming a secondary data packet, the method comprising: forming at least a first bit and a second bit in the secondary data packet, wherein the first bit is to indicate whether to enter self-refresh, the second bit is to indicate whether to store a frame for display into a frame buffer, and the secondary data packet complies with the DisplayPort specification.
11. The computer implemented method of claim 10, wherein the first bit is to indicate whether to enter self-refresh or to remain in normal operation.
12. The computer implemented method of claim 10, wherein the second bit is to indicate whether the transmitted frame for display is the same as the previously transmitted frame for display.
13. The computer implemented method of claim 10, comprising: setting the first bit to indicate self-refresh based on at least one of: a change in a system timer period, triangle or polygon rendering, any processor core not being in a low power mode, any mouse activity, use of a vertical blanking interrupt, or an overlay being enabled.
14. The computer implemented method of claim 10, comprising: powering down a component, the component comprising one or more of the following: a display phase locked loop (PLL), a display panel, a display pipe, or a display interface.
15. The computer implemented method of claim 10, wherein the DisplayPort specification comprises DisplayPort Specification Version 1.1a.
16. At least one machine readable medium for forming a secondary data packet, the at least one medium comprising code which, when executed, causes a machine to perform the method of any of claims 10-15.
17. An apparatus for processing a portion of a secondary data packet, the apparatus comprising: a controller to receive at least a first bit and a second bit from the secondary data packet, wherein the first bit is to indicate whether to enter self-refresh, the second bit is to indicate whether to store a frame for display into a frame buffer, and the secondary data packet complies with the DisplayPort specification.
18. The apparatus of claim 17, comprising: a frame buffer, the controller to cause a frame for display to be stored into the frame buffer in response to an indication to enter self-refresh and an indication to store a frame for display into the frame buffer.
19. The apparatus of claim 18, comprising: a display communicatively coupled to the frame buffer, the display to display at least one frame for display from the frame buffer.
20. The apparatus of claim 17, comprising: an interface communicatively coupled to the controller, the interface to receive the secondary data packet.
21. The apparatus of claim 17, wherein the first bit is to indicate whether to enter self-refresh or to remain in normal operation.
22. The apparatus of claim 17, wherein the second bit is to indicate whether the transmitted frame for display is the same as the previously transmitted frame for display.
23. The apparatus of claim 17, wherein the DisplayPort specification comprises DisplayPort Specification Version 1.1a.
24. The apparatus of claim 17, wherein the controller comprises any one or a combination of the following: one or more integrated circuits, hardwired logic, software executed by a microprocessor, or a field programmable gate array.
25. A computer implemented method for processing a portion of a secondary data packet, the method comprising: receiving at least a first bit and a second bit from the secondary data packet, wherein the first bit is to indicate whether to enter self-refresh, the second bit is to indicate whether to store a frame for display into a frame buffer, and the secondary data packet complies with the DisplayPort specification.
26. The method of claim 25, wherein the first bit is to indicate whether to enter self-refresh or to remain in normal operation.
27. The method of claim 25, wherein the second bit is to indicate whether the transmitted frame for display is the same as the previously transmitted frame for display.
28. The method of claim 25, wherein the DisplayPort specification comprises DisplayPort Specification Version 1.1a.
29. At least one machine readable medium for processing a portion of a secondary data packet, the at least one medium comprising code which, when executed, causes a machine to perform the method of any of claims 25-28.
Techniques for transmitting commands to a target device

This application is a divisional of the application filed on December 30, 2011, with application number 201180002742.2, entitled "Techniques for transmitting commands to a target device."

Technical Field

Briefly, the subject matter disclosed herein relates to techniques for adjusting power consumption.

Background

Multimedia operations in computer systems are very common. For example, personal computers are often used to process and display video. The power consumption of computers is a matter of concern, and it is desirable to adjust the power consumption of a personal computer.

Brief Description of the Drawings

Embodiments of the invention are illustrated by way of example, and not by way of limitation, in the accompanying drawings, in which like reference numbers refer to similar elements. FIG. 1A shows a system in accordance with an embodiment. FIG. 1B shows an example of components of a host system whose power consumption can be controlled in accordance with an embodiment. FIG. 1C shows a high-level block diagram of a timing controller of a display device in accordance with an embodiment. FIG. 2 shows an example format of signals transmitted over multiple lanes of a DisplayPort interface. FIG. 3 shows an exemplary manner of transmitting secondary data packets over one or more lanes of a DisplayPort interface. FIG. 4 shows an example of a sequence of events for entering main link standby mode. FIG. 5 shows an example of a sequence of events for exiting main link standby mode.

Detailed Description

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in one or more embodiments.

FIG. 1A shows a system 100 in accordance with an embodiment. System 100 can include a source device, such as host system 102, and a target device 150. Host system 102 can include a processor 110 with one or more cores, a main memory 112, a storage device 114, and a graphics subsystem 115. Chipset 105 can communicatively couple the devices in host system 102. Graphics subsystem 115 can process video and audio. System 100 can be implemented in a handheld personal computer, mobile telephone, set-top box, or any computing device. Any type of user interface is available, such as a keyboard, mouse, and/or touch screen.

In accordance with various embodiments, processor 110 can execute a software driver (not shown) that determines whether to (1) have target device 150 capture an image and repeatedly display the captured image, (2) power down components of graphics subsystem 115, and (3) power down components of target device 150. The driver can determine whether to initiate actions (1), (2), or (3) based at least on the following: a change in the system timer period, triangle or polygon rendering, processor cores not being in a low-power mode, any mouse activity, use of the vertical blanking interrupt, and/or an overlay being enabled. Powering down a component can involve, for example, reducing a voltage regulator to a minimum operational voltage level.
For example, when processor 110 runs an operating system that is compatible with Microsoft Windows, the driver can be a kernel-mode driver. Host system 102 can use interface 145 to transmit commands to target device 150. In some embodiments, interface 145 can include a main link and an AUX channel, both of which are described in the Video Electronics Standards Association (VESA) DisplayPort Standard, Version 1, Revision 1a (2008), as well as revisions and variations thereof. In various embodiments, host system 102 (e.g., graphics subsystem 115) can form and transmit communications to target device 150 at least in the manner described in co-pending U.S. patent application serial no. 12/286,192, filed on or about September 29, 2008, entitled "Protocol Extensions in a Display Port Compatible Interface," invented by Kwa et al. (attorney docket no. P27579).

Target device 150 can be a display device capable of visually displaying content and/or presenting video content. For example, target device 150 can include control logic, such as a timing controller (TCON) that controls the writing of pixels, and registers that direct the operation of target device 150. Target device 150 can access a memory or frame buffer from which frames for display are read. Various embodiments include the capability to transmit secondary data packets to target device 150 over interface 145. The secondary data packets can be used to command target device 150.

FIG. 1B shows an example of components of host system 102 whose power consumption can be controlled (e.g., reduced or increased) in accordance with an embodiment. These components can be located in a chipset, processor, or graphics subsystem. For example, display phase locked loop (PLL) 160, display plane 162, display pipe 164, and display interface 166 in host 102 can be powered down or powered up. PLL 160 can be the system clock for display plane 162, display pipe 164, and/or display interface 166. Display plane 162 can include a data buffer and an RGB color mapper, which transforms data from the buffer into RGB. Display plane 162 can include an associated memory controller and memory input/output (I/O) (not shown), which can also be power managed. Display pipe 164 can include a blender that blends multiple image layers into a combined image, an X,Y-coordinate rasterizer, and an interface protocol packetizer. The interface protocol packetizer can comply at least with DisplayPort or low-voltage differential signaling (LVDS), which is available from ANSI/TIA/EIA-644-A (2001) and variations thereof. Display interface 166 can include a DisplayPort- or LVDS-compatible interface and a parallel-in serial-out (PISO) interface.
FIG. 1C shows a high-level block diagram of a timing controller of a display device in accordance with an embodiment. Timing controller 180 is capable of responding to an instruction from a host device to enter self-refresh display (SRD) mode, which can include powering down components and/or capturing an image and repeatedly outputting the captured image to the display. In response to the signal SRD_ON from the host, the SRD control block activates the frame buffer to capture a frame, and the SRD control block controls a multiplexer (MUX) to transmit the captured frame to the output port. After the frame buffer captures the frame, the host can read a register on the panel, where this register indicates that the capture has occurred and that the timing controller is displaying the captured image. After signal SRD_ON is deactivated, the SRD control block deactivates the frame buffer and the associated logic, so that the MUX transmits incoming video from the input port (in this case, RX) to the output port (TX). Timing controller 180 can use less power because the frame buffer and logic clocks are gated off when self-refresh display mode is exited. In various embodiments, SRD_ON and SRD_STATUS can be signals in a register or can be configured in a register.

FIG. 2 shows an example format of signals transmitted over multiple lanes of a DisplayPort-compatible interface. In particular, FIG. 2 reproduces Figure 2-14 of the Video Electronics Standards Association (VESA) DisplayPort Standard, Version 1, Revision 1a (2008) (hereafter, the DP1.1a specification). However, embodiments of the invention can be used with any version and variation of DisplayPort as well as with other standards. DisplayPort defines the availability of secondary data packets for transmitting information at the discretion of the manufacturer. The vendor-specific extension packet is one secondary data packet that can be used to control the self-refresh feature of an embedded DisplayPort (eDP) display. The basic structure of the header information of these secondary data packets is described in Table 2-33 of section 2.2.5 of the DP1.1a specification, reproduced below in Table 1.

Table 1

FIG. 3 shows an exemplary manner of transmitting secondary data packets over one or more lanes of DisplayPort. In particular, FIG. 3 reproduces Figure 2-24 of the DP1.1a specification.
As shown, a secondary data packet can include header bytes, parity bytes, and data bytes. In accordance with various embodiments, the following table provides examples of commands that can be transmitted in the header bytes of a secondary data packet. The commands can be executed by a target device, such as a display capable of performing self-refresh display.

Table 2

Various embodiments provide control in bits 0-2 of header byte HB2. Table 3 describes exemplary commands in bits 0, 1, and 2 of header byte HB2.

Table 3
Bit | Exemplary command
B0  | Indicates whether the frame to be transmitted has changed relative to the previously transmitted frame (and whether the target is to store the frame in a buffer)
B1  | Indicates whether the target enters self-refresh display mode or remains in normal operation
B2  | Indicates whether the main link is powered down

Bit B0 indicates whether the frame to be transmitted to the target device has changed relative to the previous frame transmitted to the target device. Bit B0 indicates whether the target device is to store the image in a buffer. The target device can be a display capable of entering self-refresh display mode and displaying images from the buffer. Bit B0 can be used when an application wants to refresh the image on the display. An update can wake the panel and inform the panel that one or more modified frames will be transmitted to the display and are to be stored. After the frames are stored, the display and display system can return to a low power state, and the display system can self-refresh the display using the updated frames.

Bit B1 indicates whether the target device enters self-refresh display mode or remains in normal operation. Bit B1 also indicates whether normal graphics processing is occurring and whether the link between the source device and the target device remains in the normal active state.

Bit B2 indicates whether the main link is powered down. For example, the main link can be the differential pair d+ and d- together with the connector. The link can transmit RGB or other types of content, and it can be powered down or enter a lower power mode. A standard embedded DisplayPort implementation supports two link states: (1) fully on ("normal operation"), in which video data is transmitted to the panel, and (2) fully off ("ML disabled"), in which the display interface is shut down because no video is needed, such as when the lid of a laptop computer is closed. A standard embedded DP implementation also supports an intermediate set of training-related transition states. SRD adds an extra state: "ML standby." The "ML standby" state enables the receiver to apply additional power management techniques to obtain additional power reduction. For example, the receiver bias circuits and PLL can be shut down, and the components described with respect to FIG. 1B can enter lower power states or be shut down. The "ML standby" state can shut down the display interface and the display link while the panel uses the stored image to carry out SRD.
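As a rough sketch of how the three HB2 control bits might be packed, consider the following. The bit polarities (1 meaning "enter SRD", and so on) are assumptions; the text defines which bit carries which command but not the encodings.

```python
# Sketch of packing the HB2 control bits of Table 3.

FRAME_UPDATED = 1 << 0  # B0: frame differs from the previous frame; store it
SRD_ENABLE    = 1 << 1  # B1: enter self-refresh display mode
ML_STANDBY    = 1 << 2  # B2: power down the main link ("ML standby")

def build_hb2(frame_updated: bool, enter_srd: bool, link_standby: bool) -> int:
    """Assemble header byte HB2 from the three command flags."""
    hb2 = 0
    if frame_updated:
        hb2 |= FRAME_UPDATED
    if enter_srd:
        hb2 |= SRD_ENABLE
    if link_standby:
        hb2 |= ML_STANDBY
    return hb2

# Entering ML standby as in FIG. 4: frame content is a don't-care,
# SRD is on, and link standby entry is signaled.
print(bin(build_hb2(frame_updated=False, enter_srd=True, link_standby=True)))
# -> 0b110
```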
FIG. 4 shows an example of a sequence of events for entering ML standby mode. Signals X, Y, and Z can be transmitted using the DisplayPort main link. In some embodiments, signals X, Y, and Z can be transmitted using header byte HB2. Signal X indicates whether the current frame, which will be transmitted after the VBI, is modified or not modified relative to the previously transmitted frame; in this example, whether the frame is modified is a don't-care. Signal Y indicates whether SRD is on or off; in this case, signal Y indicates that the SRD state is on. Signal Z indicates whether a link standby entry is to occur; in this case, signal Z indicates that link standby will be entered. To transmit X, Y, and Z, the following scheme can be used: bit B0 represents X, bit B1 represents Y, and bit B2 represents Z. The segment "active" can include the RGB color data transmitted to the display. The segment "BS" can indicate the beginning of the vertical blanking interval in the system. The segment "BS to standby" indicates the delay between the beginning of the vertical blanking interval and the beginning of standby mode.

FIG. 5 shows an example of a sequence of events for exiting ML standby mode. In particular, the states of the main link and the auxiliary channel are depicted. The main link state is "standby." The source initiates the ML standby exit by transmitting a write transaction over the AUX channel. A command WR can be used to write to register address location 00600h to wake the target device and cause it to exit ML standby mode. Other register address locations can be used. The target device monitors location 00600h and wakes up when the wake command is read at this location. After some delay, the target device uses the AUX channel to transmit a command ACK to the host to acknowledge receipt of the WR command. The length of the delay between receiving the WR and transmitting the ACK can be defined by the DisplayPort specification. Upon detecting the write event, the target device powers up the main link receiver and re-enters the training state to prepare for link training. Accordingly, as shown, the main link enters the "training" state. Re-entering the training state after exiting standby mode, without an explicit command, provides faster synchronization. After the source completes the write transaction, the source can initiate link training. The transmitter can initiate the full training described in the DP specification or fast link training. The target device can be shut down and lose awareness of the need for training when it is woken. Training immediately after the target device exits standby allows the DP receiver to be completely powered off.
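A minimal source-side sketch of this exit sequence follows, assuming a simple AUX-channel transport with write()/read_ack() primitives; only the 00600h register address comes from the text, and the written value is a placeholder.

```python
# Sketch of the FIG. 5 ML-standby exit: AUX write wakes the target, the
# target ACKs, and the source then runs link training.

WAKE_REGISTER = 0x00600   # register address polled by the target while asleep

class AuxChannel:         # hypothetical transport stand-in
    def write(self, addr: int, value: int) -> None:
        print(f"AUX WR addr={addr:#07x} value={value:#x}")

    def read_ack(self) -> bool:
        # The target replies ACK after its spec-defined wake delay.
        return True

def train_link() -> None:
    print("main link: training (full or fast link training)")

def exit_ml_standby(aux: AuxChannel) -> None:
    aux.write(WAKE_REGISTER, 0x1)   # wake command; value 0x1 is an assumption
    if not aux.read_ack():
        raise RuntimeError("target did not acknowledge wake write")
    # The target has powered up its main-link receiver and re-entered the
    # training state; the source now initiates link training.
    train_link()

exit_ml_standby(AuxChannel())
```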
The graphics and/or video processing techniques described herein can be implemented in various hardware architectures. For example, graphics and/or video functionality can be integrated within a chipset. Alternatively, a discrete graphics and/or video processor can be used. As another embodiment, the graphics and/or video functions can be implemented by a general purpose processor, including a multi-core processor. In a further embodiment, the functions can be implemented in a consumer electronics device.

Embodiments of the present invention can be implemented as any one or a combination of: one or more microchips or integrated circuits interconnected using a motherboard, hardwired logic, software stored by a memory device and executed by a microprocessor, firmware, an application specific integrated circuit (ASIC), and/or a field programmable gate array (FPGA). The term "logic" can include, by way of example, software or hardware and/or combinations of software and hardware. Embodiments of the invention can be provided, for example, as a computer program product that can include one or more machine-readable media having stored thereon machine-executable instructions that, when executed by one or more machines (such as a computer, network of computers, or other electronic devices), cause the one or more machines to carry out operations in accordance with embodiments of the present invention. A machine-readable medium can include, but is not limited to: floppy diskettes, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions.

The drawings and the foregoing description give examples of the present invention. Although depicted as a number of disparate functional items, those skilled in the art will appreciate that one or more of such elements can well be combined into single functional elements. Alternatively, certain elements can be split into multiple functional elements, and elements from one embodiment can be added to another embodiment. For example, the orders of the processes described herein can be changed and are not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown, nor do all of the acts necessarily need to be performed. Also, those acts that do not depend on other acts can be performed in parallel with the other acts. The scope of the present invention is in no way limited by these specific examples. Numerous variations, whether or not explicitly given in the specification, such as differences in structure, dimension, and use of material, are possible. The scope of the invention is at least as broad as given by the following claims.
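The panel-side behavior described earlier for FIG. 1C can also be sketched compactly. The frame type and the shape of the status register below are simplifications; only the SRD_ON/SRD_STATUS signals and the MUX behavior come from the text.

```python
# Sketch of the timing controller's SRD behavior (FIG. 1C): capture and
# loop a frame while SRD_ON is asserted; pass input video through otherwise.

class TimingController:
    def __init__(self):
        self.srd_on = False
        self.frame_buffer = None
        self.srd_status = False   # readable by the host: capture complete

    def set_srd_on(self, value: bool, incoming_frame=None):
        self.srd_on = value
        if value:
            self.frame_buffer = incoming_frame  # capture one frame
            self.srd_status = True              # host can poll this register
        else:
            # The frame buffer and its clocks would be gated off here.
            self.frame_buffer = None
            self.srd_status = False

    def output(self, incoming_frame):
        """MUX select: stored frame during SRD, else live input (RX -> TX)."""
        return self.frame_buffer if self.srd_on else incoming_frame

tcon = TimingController()
tcon.set_srd_on(True, incoming_frame="frame_42")
print(tcon.output("frame_43"))   # -> frame_42 (self-refresh from the buffer)
tcon.set_srd_on(False)
print(tcon.output("frame_43"))   # -> frame_43 (normal pass-through)
```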
Systems, methods, and computer programs are disclosed for reducing memory subsystem power. In an exemplary method, a system resource manager provides memory performance requirements for a plurality of memory clients to a double data rate (DDR) subsystem. The DDR subsystem and the system resource manager reside on a system on chip (SoC) electrically coupled to a dynamic random access memory (DRAM). A cache hit rate is determined for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem. The DDR subsystem controls a DDR clock frequency based on the memory performance requirements received from the system resource manager and the cache hit rates of the plurality of memory clients.
1. A method for reducing power of a memory subsystem, the method comprising: a system resource manager providing memory performance requirements for a plurality of memory clients to a double data rate (DDR) subsystem, the DDR subsystem and the system resource manager residing on a system on chip (SoC) electrically coupled to a dynamic random access memory (DRAM); determining a cache hit ratio for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem; and the DDR subsystem adjusting access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit ratios of the plurality of memory clients.
2. The method of claim 1, wherein the DDR subsystem adjusting the access to the DRAM comprises adjusting a system cache prefetch size based on a DDR clock frequency.
3. The method of claim 2, wherein the DDR subsystem adjusting the access to the DRAM comprises increasing the system cache prefetch size to mitigate latency when the DDR clock frequency reaches a programmable threshold.
4. The method of claim 1, wherein the DDR subsystem adjusting the access to the DRAM comprises adjusting a system cache prefetch size based on the cache hit ratio of at least one of the plurality of memory clients.
5. The method of claim 1, further comprising: instructing the DRAM to enter a self-refresh mode; and extending a duration of the self-refresh mode by using a write sub-cache residing in the system cache.
6. The method of claim 1, wherein the memory clients comprise one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a digital signal processor (DSP) electrically coupled via a SoC bus.
7. The method of claim 1, wherein the DDR subsystem further comprises one or more performance monitors for determining the cache hit ratio of each of the plurality of memory clients.
8. A system for reducing power of a memory subsystem, the system comprising: means for providing memory performance requirements for a plurality of memory clients to a double data rate (DDR) subsystem, the DDR subsystem residing on a system on chip (SoC) electrically coupled to a dynamic random access memory (DRAM); means for determining a cache hit ratio for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem; and means for adjusting access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit ratios of the plurality of memory clients.
9. The system of claim 8, wherein the means for adjusting access to the DRAM comprises means for adjusting a system cache prefetch size based on a DDR clock frequency.
10. The system of claim 9, wherein the means for adjusting access to the DRAM comprises means for increasing the system cache prefetch size to mitigate latency when the DDR clock frequency reaches a programmable threshold.
11. The system of claim 8, wherein the means for adjusting access to the DRAM comprises means for adjusting a system cache prefetch size based on the cache hit ratio of at least one of the plurality of memory clients.
12. The system of claim 8, further comprising: means for placing the DRAM in a self-refresh mode; and means for extending a duration of the self-refresh mode by using a write sub-cache residing in the system cache.
13. The system of claim 8, wherein the memory clients comprise one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a digital signal processor (DSP) electrically coupled via a SoC bus.
14. The system of claim 8, wherein the DDR subsystem further comprises one or more performance monitors for determining the cache hit ratio of each of the plurality of memory clients.
15. A computer program embodied in a memory and executable by a processor to implement a method for reducing power of a memory subsystem, the method comprising: providing memory performance requirements for a plurality of memory clients to a double data rate (DDR) subsystem, the DDR subsystem residing on a system on chip (SoC) electrically coupled to a dynamic random access memory (DRAM); determining a cache hit ratio for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem; and adjusting access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit ratios of the plurality of memory clients.
16. The computer program of claim 15, wherein the adjusting access to the DRAM comprises adjusting a system cache prefetch size based on a DDR clock frequency.
17. The computer program of claim 15, wherein the adjusting access to the DRAM comprises increasing a system cache prefetch size to mitigate latency when the DDR clock frequency reaches a programmable threshold.
18. The computer program of claim 15, wherein the adjusting access to the DRAM comprises adjusting a system cache prefetch size based on the cache hit ratio of at least one of the plurality of memory clients.
19. The computer program of claim 15, wherein the method further comprises: instructing the DRAM to enter a self-refresh mode; and extending a duration of the self-refresh mode by using a write sub-cache residing in the system cache.
20. The computer program of claim 15, wherein the memory clients comprise one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a digital signal processor (DSP) electrically coupled via a SoC bus.
21. The computer program of claim 15, wherein the DDR subsystem further comprises one or more performance monitors for determining the cache hit ratio of each of the plurality of memory clients.
22. A system for reducing power of a memory subsystem, the system comprising: a dynamic random access memory (DRAM); and a system on chip (SoC) electrically coupled to the DRAM via a DDR bus, the SoC comprising: a plurality of memory clients electrically coupled via a SoC bus; a system resource manager configured to determine memory performance requirements of the plurality of memory clients; and a DDR subsystem including a system cache, the DDR subsystem configured to: determine a cache hit ratio for each of the plurality of memory clients associated with the system cache; and adjust access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit ratios of the plurality of memory clients.
23. The system of claim 22, wherein the DDR subsystem adjusts access to the DRAM by adjusting a system cache prefetch size based on a DDR clock frequency.
24. The system of claim 23, wherein the DDR subsystem adjusts access to the DRAM by increasing the system cache prefetch size to mitigate latency when the DDR clock frequency reaches a programmable threshold.
25. The system of claim 22, wherein the DDR subsystem adjusts access to the DRAM by adjusting a system cache prefetch size based on the cache hit ratio of at least one of the plurality of memory clients.
26. The system of claim 22, wherein the DDR subsystem is further configured to: instruct the DRAM to enter a self-refresh mode; and extend a duration of the self-refresh mode by using a write sub-cache residing in the system cache.
27. The system of claim 22, wherein the memory clients comprise one or more of a central processing unit (CPU), a graphics processing unit (GPU), and a digital signal processor (DSP) electrically coupled via a SoC bus.
28. The system of claim 22, wherein the DDR subsystem further comprises one or more performance monitors for determining the cache hit ratio of each of the plurality of memory clients.
29. The system of claim 22, wherein the system is incorporated into a portable computing device.
30. The system of claim 29, wherein the portable computing device comprises a smart phone or tablet device.
Reducing memory subsystem power with system cache and local resource management

Background

Portable computing devices (e.g., cell phones, smart phones, tablet computers, portable digital assistants (PDAs), portable game consoles, wearables, and other battery-powered devices) and other computing devices continue to offer an ever-expanding array of features and services, and provide users with unprecedented levels of access to information, resources, and communications. To keep pace with these services, such devices have become more powerful and more complex. Portable computing devices now typically include a system on chip (SoC) comprising a plurality of memory clients embedded on a single substrate (e.g., one or more central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), etc.). The memory clients can read data from and store data in a dynamic random access memory (DRAM) memory system electrically coupled to the SoC via a double data rate (DDR) bus.

DDR system power is increasingly becoming a significant portion of total battery usage. Much of the DDR system power is consumed by read/write traffic to the DRAM. As systems become more complex, they require higher traffic bandwidth, and traffic patterns become more complex and random, resulting in increased energy expenditure. Incorporating a last-level system cache can reduce the amount of DDR traffic. However, even with the benefits provided by the system cache, power may still be wasted because the DDR subsystem must operate at the worst-case voltage and frequency required to service unpredictable traffic. Existing solutions attempt to save DDR power by using open-loop adjustments to the DDR clock frequency. However, because the adjustments must be conservative to avoid performance degradation, these solutions are sub-optimal. Although some power is saved, further adjustment can compromise the end-user experience.

Accordingly, there is a need for improved systems and methods for reducing memory subsystem power.

Summary

Systems, methods, and computer programs are disclosed for reducing the power of a memory subsystem. In an exemplary method, a system resource manager provides memory performance requirements for a plurality of memory clients to a double data rate (DDR) memory subsystem. The DDR subsystem and the system resource manager reside on a system on chip (SoC) that is electrically coupled to a dynamic random access memory (DRAM). A cache hit ratio is determined for each of the plurality of memory clients associated with a system cache residing on the DDR subsystem. The DDR subsystem adjusts access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit ratios of the plurality of memory clients.

Another embodiment is a system for reducing memory subsystem power, comprising a dynamic random access memory (DRAM) and a system on chip (SoC) electrically coupled to the DRAM via a DDR bus. The SoC includes a plurality of memory clients, a system resource manager, and a DDR subsystem electrically coupled via a SoC bus. The system resource manager is configured to determine memory performance requirements of the plurality of memory clients. The DDR subsystem includes a system cache.
The DDR subsystem is configured to: determine a cache hit ratio for each of the plurality of memory clients associated with the system cache; and adjust access to the DRAM based on the memory performance requirements received from the system resource manager and the cache hit ratios of the plurality of memory clients.

Brief Description of the Drawings

In the figures, like reference numerals refer to like parts throughout the various views unless otherwise indicated. For reference numerals with letter character designations, such as "102A" or "102B", the letter character designations may differentiate two like parts or elements present in the same figure. Letter character designations for reference numerals may be omitted when it is intended that a reference numeral encompass all parts having the same reference numeral in all figures.

FIG. 1 is a block diagram of one embodiment of a system for reducing memory subsystem power.

FIG. 2 is a flow chart illustrating one embodiment of a method for reducing power consumption of the DDR subsystem of FIG. 1.

FIG. 3 is a table illustrating one exemplary embodiment of a method for adjusting a prefetch size based on a predetermined use case.

FIG. 4 is a combined block/flow diagram illustrating one embodiment of a system cache in the DDR subsystem of FIG. 1.

FIG. 5 is a flow diagram illustrating one embodiment of the operation of a write sub-cache in the system cache of FIG. 4.

FIG. 6 is a flow diagram illustrating another embodiment of the operation of a write sub-cache in the system cache of FIG. 4.

FIG. 7 is a combined block/flow diagram illustrating one embodiment of the operation of the QoS algorithm implemented in the scheduler of FIG. 4.

FIG. 8 is a block diagram of one embodiment of a portable communication device incorporating the system of FIG. 1.

Detailed Description

The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

In this specification, the term "application" may also include files having executable content, such as object code, scripts, byte code, markup language files, and patches. In addition, an "application" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

The term "content" may also include files having executable content, such as object code, scripts, byte code, markup language files, and patches. In addition, "content" referred to herein may also include files that are not executable in nature, such as documents that may need to be opened or other data files that need to be accessed.

As used in this specification, the terms "component", "database", "module", "system", and the like are intended to refer to a computer-related entity, either hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components may execute from various computer-readable media having various data structures stored thereon.
The components may communicate by way of local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet with other systems by way of the signal).

In this specification, the terms "communication device", "wireless device", "wireless telephone", "wireless communication device", and "wireless handset" are used interchangeably. With the advent of third generation ("3G") and fourth generation ("4G") wireless technology, greater bandwidth availability has enabled more portable computing devices with a greater variety of wireless capabilities. Therefore, a portable computing device may include a cellular telephone, a pager, a PDA, a smart phone, a navigation device, or a hand-held computer with a wireless connection or link.

FIG. 1 illustrates one embodiment of a system 100 for reducing memory subsystem power. The system 100 may be implemented in any computing device, including a personal computer, a workstation, a server, or a portable computing device (PCD), such as a cellular telephone, a smart phone, a portable digital assistant (PDA), a portable game console, a tablet computer, or other battery-powered wearable device.

As illustrated in FIG. 1, the system 100 includes a system on chip (SoC) 102 electrically coupled to a memory system via a memory bus. In the embodiment of FIG. 1, the memory system comprises a dynamic random access memory (DRAM) 104 coupled to the SoC 102 via a random access memory (RAM) bus 148 (e.g., a double data rate (DDR) bus). The SoC 102 comprises various on-chip components, including a plurality of memory clients, a storage controller 140, a system resource manager 120, and a memory subsystem (e.g., DDR subsystem 122), interconnected via a SoC bus 118. The memory clients may include one or more processing units (e.g., a central processing unit (CPU) 106, a graphics processing unit (GPU) 108, a digital signal processor (DSP) 110, or other memory clients) that request read/write access to the memory system. The system 100 further comprises a high-level operating system (HLOS) 116.

The storage controller 140 controls the transfer of data to a storage device, such as the non-volatile memory device 136. As further illustrated in FIG. 1, the SoC 102 may include on-chip memory devices, such as static random access memory (SRAM) 112 and read only memory (ROM) 114.

As described below in more detail, the system 100 provides an improved architecture that enables the DDR subsystem 122 to aggregate complex memory traffic patterns and to control the DDR clock frequency and/or voltage level in a manner that satisfies the overall performance requirements of various use cases. In operation, each of the memory clients may send a performance level vote to the system resource manager 120 via an interface 142. As known in the art, the client votes indicate to the DDR subsystem 122 the desired memory bandwidth capability. The system resource manager 120 comprises various functional blocks for managing system resources, such as, for example, clocks, voltage regulators, bus frequencies, quality of service, priority, and the like. The system resource manager 120 enables each component in the system 100 to vote for the state of system resources. The system resource manager 120 may combine all of the client votes and estimate the use case performance requirements. The system resource manager 120 may forward the use case performance requirements to the DDR subsystem 122 via an interface 144. The DDR subsystem 122 controls its own clock frequency and voltage based on various local conditions under its control.
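The voting and aggregation flow described above can be sketched in a few lines of code. The following Python fragment is a minimal illustration only, not the disclosed implementation: the class and method names (SystemResourceManager, vote, estimate_requirements) are hypothetical, and summing the bandwidth votes is just one plausible aggregation policy.

# Hypothetical sketch of performance-level voting (illustrative only).

class SystemResourceManager:
    """Collects bandwidth votes from memory clients (CPU, GPU, DSP, ...)."""

    def __init__(self):
        self.votes_mbps = {}  # client name -> requested bandwidth (MB/s)

    def vote(self, client, bandwidth_mbps):
        # Each memory client votes for its desired memory bandwidth capability.
        self.votes_mbps[client] = bandwidth_mbps

    def estimate_requirements(self):
        # Combine all client votes into a use case performance requirement to
        # forward to the DDR subsystem; a real manager would also fold in QoS.
        return sum(self.votes_mbps.values())

mgr = SystemResourceManager()
mgr.vote("CPU", 3200)  # e.g., CPU 106 requests 3.2 GB/s
mgr.vote("GPU", 6400)  # e.g., GPU 108 requests 6.4 GB/s
mgr.vote("DSP", 800)   # e.g., DSP 110 requests 0.8 GB/s
total_mbps = mgr.estimate_requirements()  # forwarded over interface 144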
As illustrated in FIG. 1, the DDR subsystem 122 comprises a local DDRSS resource manager 126 that receives the use case performance requirements from the system resource manager 120. In one embodiment, the local DDRSS resource manager 126 comprises a hardware block located within the DDR subsystem 122. It should be appreciated that the local DDRSS resource manager 126 may comprise one or more functions implemented by, for example, one or more blocks located within the DDR subsystem 122. The local DDRSS resource manager 126 may control the DDR frequency via a DDR clock controller 124. The local DDRSS resource manager 126 may control the DDR voltage by communicating with a power manager integrated circuit (IC) 138 that is electrically coupled to the SoC 102 via a connection 146. In one embodiment, the DDR frequency and/or voltage may be adjusted based on the performance requirements of the memory clients and the cache hit ratio of each client.

As illustrated in FIG. 1, the DDR subsystem 122 may comprise a system cache 128, one or more cache performance monitors 134, and a DRAM controller 130. The system cache 128 is electrically coupled to the DRAM controller 130. The DRAM controller 130 is electrically coupled to the off-chip DRAM 104 via the RAM bus 148. The system cache 128 may comprise a shared or last-level cache with a write buffer 132. The system cache 128 is a component that stores data so that future requests for that data can be served faster. In one embodiment, the system cache 128 may reside external to the SoC 102 and be connected to the SoC 102 via an I/O bus.

In another embodiment, the DDR subsystem 122 may control the DDR voltage and/or frequency based on the client-specific performance monitors 134 and/or the cache hit ratio of each memory client. It will be appreciated that the clock frequency and voltage of the DDR and of the system cache 128 may be controlled individually. As described in more detail below, the DDR subsystem 122 may use the write buffer 132 to aggregate write accesses and schedule them to the DRAM 104 more efficiently. In addition, the DDR subsystem 122 may apply various cache prefetch policies to aggregate read accesses and mitigate latency when the DRAM 104 is operating at a relatively low frequency (e.g., below a predetermined or programmable level).

It should be appreciated that the DDR subsystem 122 can use the cache hit ratios as well as the client votes as inputs for calculating the DDR clock frequency, and thereby maintain the DDR clock frequency at a more nearly optimal level. Furthermore, lowering the clock frequency also allows the voltage to be reduced, which results in a net energy saving.

FIG. 2 is a flow diagram illustrating one embodiment of a method 200 implemented in the system 100 for adjusting the DDR frequency. The method 200 may be performed for a variety of different use cases. At block 202, a use case is initiated or resumed. At block 204, the system 100 may initially apply default frequency and/or voltage levels for the system cache 128 and the DRAM 104. The system resource manager 120 receives initial performance level votes from the memory clients (e.g., CPU 106, GPU 108, DSP 110, etc.). At block 206, the system resource manager 120 determines the memory performance requirements based on the performance level votes received from the memory clients. In one embodiment, the system resource manager 120 estimates the performance requirements and updates the estimates using quality of service (QoS) information.
At block 208, the memory performance requirements are sent to the DDR subsystem 122. The memory performance requirements may be forwarded to the local DDRSS resource manager 126 via the interface 144 (FIG. 1). At block 210, the local DDRSS resource manager 126 may calculate and adjust the frequency and/or voltage for the DDR clock controller 124 and the system cache 128 based on the cache hit rates of the memory clients. As mentioned above, the cache hit ratios may be determined by the one or more performance monitors 134. The local DDRSS resource manager 126 may also adjust the system cache prefetch size based on the cache hit ratios and the memory performance requirements.

In an exemplary embodiment, a predetermined weight (Wn) may be assigned to each of the plurality of memory clients (e.g., client 0 through client n) for the clock frequency calculation. The predetermined weight (Wn) may be pre-calibrated for each client, which has a unique bandwidth requirement (BWn) and hit rate (HRn). The total bandwidth requirement of the system cache 128 comprises the sum of the bandwidth requirements (BWn) of all clients, each scaled by the corresponding client's weight and cache hit ratio. The system cache frequency may be determined by dividing the total bandwidth requirement of the system cache by the width of the system cache data path. The DDR frequency may be calculated in a similar manner, with the cache miss rate taking the place of the cache hit rate, as shown in Equations 1 and 2 below:

F_cache = (W0*BW0*HR0 + W1*BW1*HR1 + ... + Wn*BWn*HRn) / interface_width    (Equation 1)

F_ddr = (W0*BW0*(1-HR0) + W1*BW1*(1-HR1) + ... + Wn*BWn*(1-HRn)) / interface_width    (Equation 2)

In Equations 1 and 2, "Wn" represents the relative importance or weight of each client in the frequency decision. For example, a larger value of Wn causes a particular client's bandwidth requirement to have a greater influence on the frequency than that of other clients with lower values of Wn. The sum of all Wn values is 100%. In one embodiment, for a given type of use case, the weight distribution across all clients may follow a predetermined profile, and a change between use case types may involve an adjustment of the weight distribution. "BWn" represents the unique bandwidth requirement of the client, which may be measured in bytes per second; the bandwidth requirement can change continuously, even within the same use case. "HRn" represents the system cache hit rate measured for the client and can take values from 0 to 100%; the hit rate can also change within the same use case. The value (1-HRn) is the system cache miss rate for the client. "interface_width" represents the number of bytes transferred in a single clock cycle of 1/F_cache or 1/F_ddr.
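As a concrete illustration of Equations 1 and 2, the following sketch computes both frequencies from per-client weights, bandwidth requirements, and hit rates. It is illustrative only; the client values are invented for the example, and the function name and units are assumptions rather than part of the disclosure.

# Illustrative computation of Equations 1 and 2 (not the disclosed implementation).

def cache_and_ddr_frequency(clients, interface_width_bytes):
    """clients: list of (weight, bandwidth_bytes_per_s, hit_rate) tuples,
    where the weights sum to 1.0 (100%) and hit rates lie in [0, 1]."""
    f_cache = sum(w * bw * hr for w, bw, hr in clients) / interface_width_bytes
    f_ddr = sum(w * bw * (1 - hr) for w, bw, hr in clients) / interface_width_bytes
    return f_cache, f_ddr  # frequencies in Hz

# Hypothetical example: CPU, GPU, and DSP weights, bandwidths, and hit rates.
clients = [
    (0.5, 3.2e9, 0.80),  # CPU: weight 50%, 3.2 GB/s, 80% hit rate
    (0.4, 6.4e9, 0.60),  # GPU: weight 40%, 6.4 GB/s, 60% hit rate
    (0.1, 0.8e9, 0.90),  # DSP: weight 10%, 0.8 GB/s, 90% hit rate
]
f_cache, f_ddr = cache_and_ddr_frequency(clients, interface_width_bytes=32)
# High aggregate hit rates shrink the (1 - HRn) terms, pulling F_ddr below
# F_cache; this is what lets the DDR clock (and hence voltage) be lowered.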
In a simplified design, the DDR frequency may instead be a fixed value, regardless of the memory clients' bandwidth requirements and cache hit ratios. The fixed value can be selected from a set of predefined frequencies that assume worst-case cache hit ratio and concurrency conditions.

It should be appreciated that the DDR subsystem 122 can adjust the memory access pattern to the DRAM 104 based on, for example, the cache hit ratios of the plurality of memory clients, the DDR operating frequency, and the access pattern of the client. Referring to FIG. 3, in one embodiment, the DDR subsystem 122 may adjust the system cache prefetch size based on one or more use cases. Table 300 illustrates three exemplary use cases in column 302. The first use case 310 is a camera use case. The second use case 312 is a GPU use case. The third use case is a CPU use case. The use case originator can be identified by a sub-cache partition or by a master ID carried with each transaction. The DDR subsystem 122 can then prefetch according to the traffic pattern of the use case. Columns 304, 306, and 308 list the corresponding values for the prefetch weighting, the access pattern, and the latency tolerance, respectively. It should be appreciated that these may be configurable parameters in the DDR subsystem 122 that can be set to pre-tuned values for each use case. The prefetch weight is a value representing how aggressively the cache prefetch function applies prefetching when the access pattern is detected (e.g., the value "0" is least aggressive and the value "10" is most aggressive).

The access pattern (column 306 of FIG. 3) defines the request pattern of the corresponding use case. It represents both the pattern used to detect and enable prefetching and the pattern that will be used to prefetch the data. For the camera use case 310, the prefetch pattern operates on each 256-byte address granule (i.e., prefetch the first 64 bytes, skip the next 192 bytes, and then prefetch the first 64 bytes of the next 256-byte unit). The latency tolerance (column 308) represents how long a time window the use case can accommodate for a request response. This information can help guide the DDR subsystem 122 (e.g., in controlling how long requests to the DRAM can be held back, keeping the DRAM and the I/O in a low-power state for as long as possible). Once a prefetch action is required for the use case, the prefetch size for the use case can be determined according to Equation 3:

Prefetch_size_n = Size_pfch_max * Wn * (F_ddr_max - F_ddr) / ((F_ddr_max - F_ddr_min) * 10)    (Equation 3)

For each client "n", Wn is the relative importance or weight of the client, expressed as a percentage. For example, a larger value of Wn gives the client a greater influence on the prefetch length than other clients with lower values of Wn. "Size_pfch_max" is the programmable maximum prefetch size that will be issued by the DDR subsystem 122. "Prefetch_size_n" is the prefetch size for the use case, which may be rounded to a minimum granularity, such as the cache line size.

In another embodiment, the DDR subsystem 122 can adjust the system cache prefetch size based on the DDR frequency. For example, the prefetch size can be increased to reduce latency when the DRAM 104 is running at a relatively low frequency. The prefetch function can be disabled when the DDR frequency reaches a predetermined or programmable threshold. Moreover, the DDR subsystem 122 can bound the prefetch size by, for example, a minimum size (e.g., 256 bytes) and a maximum size (e.g., 4K bytes). The DDR subsystem 122 can control the prefetch size based on the traffic pattern, the DDR frequency, and the prefetch hit rate.

In a further embodiment, the DDR subsystem 122 can adjust the system cache prefetch size based on the cache hit rates of the memory clients. The DDR subsystem 122 can keep track of the cache lines allocated by the prefetch engine. When monitoring the use case hit ratio, the DDR subsystem 122 can maintain separate counters for cache lines allocated by prefetching and by non-prefetching. The prefetch size can be increased when the cache hit rate for the prefetched lines increases.
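The following sketch evaluates Equation 3 for a hypothetical client. It is illustrative only: the numeric values (maximum prefetch size, frequency range, weight) are invented for the example, and clamping the result to the programmed minimum/maximum is an assumption consistent with the bounds mentioned above.

# Illustrative evaluation of Equation 3 (not the disclosed implementation).

def prefetch_size(size_pfch_max, weight_pct, f_ddr, f_ddr_min, f_ddr_max):
    """Scale the prefetch size up as the DDR frequency drops toward its minimum.
    weight_pct is Wn expressed as a percentage (0-100)."""
    return (size_pfch_max * weight_pct * (f_ddr_max - f_ddr)
            / ((f_ddr_max - f_ddr_min) * 10))

# Hypothetical example: 4 KB maximum prefetch, client weight 40%, and the
# DDR running at 800 MHz within a 400-1600 MHz range.
size = prefetch_size(size_pfch_max=4096, weight_pct=40,
                     f_ddr=800e6, f_ddr_min=400e6, f_ddr_max=1600e6)
# size is about 10923 here; with Wn as a percentage, the raw formula can
# exceed size_pfch_max, so this sketch clamps to the bounds noted above
# (e.g., a 256-byte minimum and a 4K-byte maximum).
size = max(256, min(size, 4096))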
FIG. 4 is a combined block/flow diagram illustrating one embodiment of the system cache 128. The system cache 128 comprises a write sub-cache 402 (e.g., the exemplary write buffer 132), a write request pipeline 403, and a read miss pipeline 404. The write sub-cache 402 and the write request pipeline 403 are coupled to a write queue 406. The read miss pipeline 404 is coupled to a read queue 408. The write queue 406 and the read queue 408 communicate with a memory scheduler 410 that is coupled to the DRAM controller 130. It should be appreciated that FIG. 4 illustrates the flow at a single frequency operating point. Where the DRAM 104 supports only one frequency point, the DRAM 104 can be switched between an active mode and a self-refresh mode, and the DRAM controller 130 can switch its clock between gated and ungated.

FIG. 5 illustrates a method 500 implemented in the system 400 of FIG. 4. During boot, at block 502, the DDR subsystem 122 can be configured with a default condition comprising a default frequency and a default timeout, with zero occupancy in the write sub-cache 402. At block 504, the DDR subsystem 122 can adjust the DDR frequency and/or the prefetch size based on the use case requirements. At blocks 506 and 508, a timer can be used to place the DRAM 104 in self-refresh mode in the absence of read/write activity. At block 516, for write requests, the write sub-cache 402 can be used to extend the duration for which the DRAM 104 is kept in self-refresh mode, to conserve power. Write requests can wait in the write queue until the queue is full, and then these requests can be issued together. At blocks 510, 512, and 514, if there is a cache hit, the system 400 can serve the read request from the system cache 128. When a read miss occurs, the request can be temporarily placed in the read request queue. To aggregate read accesses, requests can wait in the queue until the queue is full, and then these read requests can be issued together. However, it should be appreciated that when a request has a QoS priority that is, for example, above a configurable threshold, the request can be served without waiting in the read queue. At block 518, the DRAM 104 may be brought out of self-refresh mode to serve requests (such as read misses), or to flush the requests in the read and write queues when the write sub-cache 402 can no longer accommodate write requests.
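A rough software sketch of the write-buffering behavior of blocks 506-518 follows. It is a simplification under assumed semantics: the queue depth, the sub-cache capacity, the wake policy, and all names are invented for illustration, and the real flow also involves the timer and the QoS bypass described above.

# Simplified sketch of the FIG. 5 write-buffering flow (illustrative only).

from collections import deque

WRITE_QUEUE_DEPTH = 16   # assumed depth
SUBCACHE_CAPACITY = 64   # assumed number of cache lines lent to the write sub-cache

write_queue = deque()
write_subcache = []
dram_in_self_refresh = True  # DRAM parked in self-refresh to save power

def issue_to_dram(request):
    pass  # stand-in for the scheduler 410 / DRAM controller 130 path

def handle_write(request):
    global dram_in_self_refresh
    if not dram_in_self_refresh and len(write_queue) < WRITE_QUEUE_DEPTH:
        write_queue.append(request)     # normal path: queue toward the DDR I/O
    elif len(write_subcache) < SUBCACHE_CAPACITY:
        write_subcache.append(request)  # absorb the write; the DRAM stays asleep
    else:
        # No room left: wake the DRAM and drain everything together, so the
        # cost of exiting self-refresh is amortized over many writes.
        dram_in_self_refresh = False
        for req in list(write_queue) + write_subcache + [request]:
            issue_to_dram(req)
        write_queue.clear()
        write_subcache.clear()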
FIG. 6 is a flow diagram illustrating another embodiment of the operation of the system cache 128. FIG. 6 shows details of how a write request is serviced at a given frequency point, or when the DRAM 104 supports only one frequency point, in order to keep the DRAM 104 in self-refresh mode as long as practical. The write sub-cache 402 is carved out of the system cache; cache lines that are not used for the write sub-cache 402 can be used as normal cache for the use cases. Blocks 602, 604, 606, 608, and 610 illustrate non-cacheable requests received while the DRAM 104 is in active mode or in self-refresh. If the write queue 406 is not full, the request can be placed in the queue to be scheduled to the DDR I/O. If the DRAM 104 is in self-refresh mode or the queue is full, the write sub-cache 402 is checked at decision block 620. A cacheable write request can be processed by a normal cache operation, which can have different operating conditions than the write sub-cache. At blocks 614, 616, and 626, the system determines whether there is a cache hit or whether another cache line needs to be replaced. At block 618, if there is a cache miss and the victim is a dirty line, the flow takes the same path as above to decide whether to send the request to the write sub-cache 402 or directly to the write queue 406. Blocks 620, 622, 624, and 626 illustrate a flow for allocating cache lines in the system cache 128 to the write sub-cache 402 to temporarily buffer data before it is written to the DRAM 104. At blocks 620 and 624, the DRAM 104 is brought out of self-refresh mode when the buffer is full or no more cache lines are available to be allocated to the write sub-cache 402.

FIG. 7 is a combined block/flow diagram illustrating one embodiment of the operation of the QoS algorithm implemented in the scheduler of FIG. 4. As illustrated in FIG. 7, read requests 701 and write requests 703 may arrive at the DDR subsystem 122 with a QoS indication, such as a priority level. A request may be served from the system cache 128, in which case no transaction to the DDR I/O is generated. If there is a miss, the request is forwarded to the DDR I/O. Requests can be allocated in the system cache 128, and dirty cache lines can be evicted (line 702) to a different address on the DDR I/O. Thus, the address of a request input to the DDR subsystem 122 may not be exactly the address seen on the DDR I/O, and the addresses in the incoming read/write requests 701 and 703 may not be the correct information for scheduling traffic to the DRAM controller 130. In this example, the system cache controller 710 receives a write request 703 with address 0x1234ABCD. The system cache controller 710 can determine that a new cache line will be allocated for the write request. As illustrated by line 702, the system cache controller 710 can select a victim cache line to be evicted based on the replacement policy. Block 704 shows a victim cache line having an address different from that of the original write request (e.g., 0x5555FFFF will be evicted to the DDR). At block 132, the eviction generates a write request for address 0x5555FFFF. The request is either placed in the write queue 406 (if the queue is not full) or placed in the write sub-cache 402. Because of the write sub-cache, the DDR traffic scheduler 410 now has a larger pool to schedule from, including, for example: (a) all requests in the read queue with their requested QoS priority levels; (b) all requests in the write queue with their requested QoS priority levels; and (c) all requests in the write buffer. Having visibility into more incoming traffic allows the scheduler to perform better scheduling, where the scheduling criteria consider the requested QoS priority level, DRAM page-open and bank hits, the priority of reads over writes, and the like, before the requests are sent to the DRAM controller 130. In this manner, it should be appreciated that the system can achieve better QoS and DDR utilization by providing the scheduler with a larger pool for scheduling.
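To make the scheduler's enlarged candidate pool concrete, here is a toy sketch of QoS-aware selection across the read queue, the write queue, and the write buffer. All names and the scoring rule are invented for illustration; the real scheduler also weighs DRAM page-open status and bank hits, as noted above.

# Toy sketch of QoS-aware scheduling over a larger candidate pool (illustrative only).

def pick_next(read_queue, write_queue, write_buffer):
    """Choose the next request to send to the DRAM controller. Each request
    is a dict with 'qos' (higher is more urgent) and 'is_read' fields."""
    pool = list(read_queue) + list(write_queue) + list(write_buffer)
    if not pool:
        return None
    # Order by QoS priority first; among equal priorities prefer reads, since
    # read latency is usually more visible to the requesting memory client.
    return max(pool, key=lambda r: (r["qos"], r["is_read"]))

next_req = pick_next(
    read_queue=[{"qos": 2, "is_read": True, "addr": 0x1234ABCD}],
    write_queue=[{"qos": 2, "is_read": False, "addr": 0x5555FFFF}],
    write_buffer=[{"qos": 1, "is_read": False, "addr": 0x00000040}],
)
# The read at 0x1234ABCD wins: it ties the queued write on QoS, but reads are
# preferred; a real scheduler would also factor in open DRAM pages.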
As mentioned above, the system 100 can be incorporated into any desired computing system. FIG. 8 illustrates the system 100 incorporated in an exemplary portable computing device (PCD) 800. It should be readily appreciated that certain components of the system 100 (e.g., HLOS 116) are included on the SoC 322 (FIG. 8), while other components (e.g., the DRAM 104) are coupled external to the SoC 322. The SoC 322 may include a multi-core CPU 802, which may include a zeroth core 810, a first core 812, and an Nth core 814. One of these cores may comprise, for example, a graphics processing unit (GPU), with one or more of the others comprising a CPU.

A display controller 328 and a touch screen controller 330 may be coupled to the CPU 802. In turn, a touch screen display 806 external to the system on chip 322 may be coupled to the display controller 328 and the touch screen controller 330.

FIG. 8 further illustrates that a video encoder 334 (e.g., a phase alternating line (PAL) encoder, a sequential color with memory (SECAM) encoder, or a National Television System Committee (NTSC) encoder) is coupled to the multi-core CPU 802. Further, a video amplifier 336 is coupled to the video encoder 334 and the touch screen display 806. In addition, a video port 338 is coupled to the video amplifier 336. As shown in FIG. 8, a universal serial bus (USB) controller 340 is coupled to the multi-core CPU 802, and a USB port 342 is coupled to the USB controller 340. The memory 104 and a subscriber identity module (SIM) card 346 may also be coupled to the multi-core CPU 802.

Further, as shown in FIG. 8, a digital camera 348 may be coupled to the multi-core CPU 802. In an exemplary aspect, the digital camera 348 is a charge-coupled device (CCD) camera or a complementary metal oxide semiconductor (CMOS) camera.

As further illustrated in FIG. 8, a stereo audio coder-decoder (codec) 350 may be coupled to the multi-core CPU 802. Moreover, an audio amplifier 352 may be coupled to the stereo audio codec 350. In an exemplary aspect, a first stereo speaker 354 and a second stereo speaker 356 are coupled to the audio amplifier 352. FIG. 8 shows that a microphone amplifier 358 may also be coupled to the stereo audio codec 350, and a microphone 360 may be coupled to the microphone amplifier 358. In a specific aspect, a frequency modulation (FM) radio tuner 362 may be coupled to the stereo audio codec 350, with an FM antenna 364 coupled to the FM radio tuner 362. Further, a stereo headset 366 may be coupled to the stereo audio codec 350.

FIG. 8 further illustrates that a radio frequency (RF) transceiver 368 may be coupled to the multi-core CPU 802. An RF switch 370 may be coupled to the RF transceiver 368 and an RF antenna 372. A keypad 374 may be coupled to the multi-core CPU 802. In addition, a mono headset with a microphone 376 may be coupled to the multi-core CPU 802. Further, a vibrator device 378 may be coupled to the multi-core CPU 802.

FIG. 8 also shows that a power supply 380 may be coupled to the system on chip 322. In a specific aspect, the power supply 380 is a direct current (DC) power supply that provides power to the various components of the PCD 800 that require power. Further, in a specific aspect, the power supply is a rechargeable DC battery or a DC power supply derived from an alternating current (AC)-to-DC transformer connected to an AC power source.

FIG. 8 further indicates that the PCD 800 may also include a network card 388 that can be used to access a data network (e.g., a local area network, a personal area network, or any other network). The network card 388 may be a Bluetooth network card, a WiFi network card, a personal area network (PAN) card, a personal area network ultra-low-power technology (PeANUT) network card, a television/cable/satellite tuner, or any other network card known in the art. Further, the network card 388 may be incorporated into a chip, i.e., the network card 388 may be a full solution in a chip and may not be a separate network card 388.

As depicted in FIG. 8, the touch screen display 806, the video port 338, the USB port 342, the camera 348, the first stereo speaker 354, the second stereo speaker 356, the microphone 360, the FM antenna 364, the stereo headset 366, the RF switch 370, the RF antenna 372, the keypad 374, the mono headset 376, the vibrator 378, and the power supply 380 may be external to the system on chip 322.

It should be appreciated that one or more of the method steps described herein may be stored in memory as computer program instructions, such as the modules described above.
These instructions may be executed by any suitable processor, in combination or in concert with the corresponding module, to perform the methods described herein.

Certain steps in the processes or process flows described in this specification naturally precede others for the invention to function as described. However, the invention is not limited to the order of the steps described if such order or sequence does not alter the functionality of the invention. That is, it is recognized that some steps may be performed before, after, or in parallel (substantially simultaneously) with other steps without departing from the scope and spirit of the invention. In some instances, certain steps may be omitted or not performed without departing from the invention. Further, words such as "subsequently", "then", "next", etc. are not intended to limit the order of the steps; these words are simply used to guide the reader through the description of the exemplary method.

Additionally, one of ordinary skill in programming is able to write computer code or identify appropriate hardware and/or circuits to implement the disclosed invention without difficulty, based on, for example, the flow charts and associated description in this specification. Therefore, disclosure of a particular set of program code instructions or detailed hardware devices is not considered necessary for an adequate understanding of how to make and use the invention. The inventive functionality of the claimed computer-implemented processes is explained in more detail in the above description and in conjunction with the drawings, which may illustrate various process flows.

In one or more exemplary aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that may be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise RAM, ROM, EEPROM, NAND flash, NOR flash, M-RAM, P-RAM, R-RAM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to carry or store desired program code in the form of instructions or data structures and that may be accessed by a computer.

Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line ("DSL"), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium.

Disk and disc, as used herein, include compact disc ("CD"), laser disc, optical disc, digital versatile disc ("DVD"), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

Alternative embodiments will become apparent to those skilled in the art without departing from the scope of the invention.
Accordingly, although selected aspects have been shown and described in detail, it will be understood that various modifications and changes may be made therein without departing from the spirit and scope of the invention, as defined by the following claims.
An integrated circuit structure may include a transistor on a front-side semiconductor layer supported by an isolation layer. The transistor includes a first source/drain/body region. The integrated circuit structure may also include a raised source/drain/body region coupled to a backside of the first source/drain/body region of the transistor. The raised source/drain/body region extends from the backside of the first source/drain/body region toward a backside dielectric layer supporting the isolation layer. The integrated circuit structure may further include a backside metallization coupled to the raised source/drain/body region.
1. An integrated circuit structure, comprising: a transistor including a front-side semiconductor layer on a buried oxide (BOX) layer, the transistor including a first source/drain region and a body region in the front-side semiconductor layer; a raised source/drain region coupled to a backside of the first source/drain region of the transistor, the raised source/drain region extending from the backside of the first source/drain region toward a backside dielectric layer supporting the BOX layer; a raised body region coupled to a backside of the body region of the transistor, the raised body region extending from the backside of the body region through the BOX layer into the backside dielectric layer; and a backside metallization coupled to the raised source/drain region and/or the raised body region of the transistor.

2. The integrated circuit structure of claim 1, wherein the raised source/drain region is composed of an epitaxially grown backside semiconductor material.

3. The integrated circuit structure of claim 1, further comprising a front-side metallization coupled to a second source/drain region of the transistor, the front-side metallization being distal from the backside metallization.

4. The integrated circuit structure of claim 3, wherein the front-side metallization includes a back-end-of-line (BEOL) interconnect coupled to a front-side contact on the second source/drain region of the transistor, the BEOL interconnect being located within a front-side dielectric layer.

5. The integrated circuit structure of claim 1, wherein the transistor comprises a radio frequency (RF) switch.

6. The integrated circuit structure of claim 1, wherein the raised source/drain region is doped with a dopant different from that of the first source/drain region of the transistor.

7. The integrated circuit structure of claim 1, wherein the raised source/drain region is self-aligned with the first source/drain region of the transistor.

8. The integrated circuit structure of claim 1, wherein the raised source/drain region of the transistor extends through the BOX layer into the backside dielectric layer.

9. The integrated circuit structure of claim 1, integrated into a radio frequency (RF) front-end module incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

10. A method of constructing an integrated circuit structure, comprising: fabricating a transistor using a front-side semiconductor layer on an isolation layer, the transistor including a gate, source/drain regions, and a body region; implanting ions into at least a first backside dielectric layer supporting the isolation layer using the gate as a mask, wherein the implanting is performed from a front side of the integrated circuit structure; patterning the first backside dielectric layer based on implanted defects in the first backside dielectric layer proximate the source/drain regions and the body region of the transistor;
exposing the backside of the source/drain regions and the body region through the first backside dielectric layer and the isolation layer; fabricating a raised source/drain region coupled to the backside of the source/drain regions of the transistor, the raised source/drain region extending from the backside of the source/drain regions toward the first backside dielectric layer; fabricating a raised body region coupled to the backside of the body region of the transistor, the raised body region extending from the backside of the body region toward the first backside dielectric layer; and fabricating a backside metallization coupled to the raised source/drain region and/or the raised body region of the transistor.

11. The method of claim 10, wherein fabricating the raised source/drain region includes selectively growing a backside semiconductor layer on the backside of the source/drain regions of the transistor.

12. The method of claim 11, further comprising annealing the backside semiconductor layer to form the raised source/drain region.

13. The method of claim 10, further comprising: depositing a backside silicide on the raised source/drain region; and depositing a second backside dielectric layer over the backside silicide and the first backside dielectric layer.

14. The method of claim 10, wherein fabricating the raised source/drain region includes depositing a backside semiconductor layer on the exposed portion of the backside of the source/drain regions.

15. The method of claim 10, further comprising integrating the integrated circuit structure into a radio frequency (RF) front-end module incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

16. An integrated circuit structure, comprising: a transistor including a front-side semiconductor layer on a buried oxide (BOX) layer, the transistor including a first source/drain region and a body region; means for extending a backside of the first source/drain region of the transistor from the BOX layer toward a backside dielectric layer supporting the BOX layer; means for extending a backside of the body region of the transistor through the BOX layer into the backside dielectric layer; and a backside metallization coupled to the backside of the first source/drain region via the means for extending the first source/drain region, and/or coupled to the backside of the body region via the means for extending the body region of the transistor.

17. The integrated circuit structure of claim 16, further comprising a front-side metallization coupled to a second source/drain region of the transistor, the front-side metallization being distal from the backside metallization.

18. The integrated circuit structure of claim 17, wherein the front-side metallization includes a back-end-of-line (BEOL) interconnect coupled to a front-side contact on the second source/drain region of the transistor, the BEOL interconnect being located within a front-side dielectric layer.

19. The integrated circuit structure of claim 16, wherein the transistor comprises an RF switch.

20. The integrated circuit structure of claim 16, wherein the means for extending the first source/drain region is self-aligned with the first source/drain region of the transistor.
21. The integrated circuit structure of claim 16, integrated into a radio frequency (RF) front-end module incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.

22. A radio frequency (RF) front-end module, comprising: an integrated RF circuit structure including a switching transistor, the switching transistor including a front-side semiconductor layer on a buried oxide (BOX) layer, the switching transistor including: a body region and a first source/drain region in the front-side semiconductor layer; a raised source/drain region coupled to a backside of the first source/drain region of the switching transistor, the raised source/drain region extending from the backside of the first source/drain region toward a backside dielectric layer supporting the BOX layer; a raised body region coupled to a backside of the body region of the switching transistor, the raised body region extending from the backside of the body region through the BOX layer into the backside dielectric layer; and a backside metallization coupled to the raised source/drain region and/or the raised body region of the switching transistor; and an antenna coupled to an output of the switching transistor.

23. The RF front-end module of claim 22, wherein the raised source/drain region is composed of an epitaxially grown backside semiconductor material.

24. The RF front-end module of claim 22, wherein the raised source/drain region is doped with a dopant different from that of the first source/drain region of the switching transistor.

25. The RF front-end module of claim 22, wherein the raised source/drain region of the switching transistor extends through the BOX layer into the backside dielectric layer.

26. The RF front-end module of claim 22, incorporated into at least one of: a music player, a video player, an entertainment unit, a navigation device, a communication device, a personal digital assistant (PDA), a fixed location data unit, a mobile phone, and a portable computer.
Backside semiconductor growth

Technical Field

The present disclosure relates generally to integrated circuits (ICs). More specifically, the present disclosure relates to methods and apparatus for backside semiconductor growth.

Background

Mobile radio frequency (RF) chip designs (e.g., mobile RF transceivers) including high-performance diplexers have migrated to deep sub-micron process nodes due to cost and power consumption considerations. The design of such mobile RF transceivers becomes complex at these deep sub-micron process nodes. The design complexity of these mobile RF transceivers is further complicated by added circuit functions to support communication enhancements, such as carrier aggregation. Further design challenges for mobile RF transceivers include analog/RF performance considerations, including mismatch, noise, and other performance considerations. The design of these mobile RF transceivers includes the use of additional passive devices, for example, to suppress resonance, and/or to perform filtering, bypassing, and coupling.

The design of these mobile RF transceivers may include the use of silicon-on-insulator (SOI) technology. SOI technology replaces a conventional silicon substrate with a layered silicon-insulator-silicon substrate to reduce parasitic device capacitance and improve performance. SOI-based devices differ from conventional silicon-based devices because the silicon junction is above an electrical insulator, typically a buried oxide (BOX) layer. A reduced thickness of the BOX layer, however, may not sufficiently reduce the parasitic capacitance caused by the proximity between the active devices on the silicon layer and the substrate supporting the BOX layer.

The active devices on the SOI layer may include complementary metal oxide semiconductor (CMOS) transistors. Unfortunately, successful fabrication of transistors using SOI technology may involve the use of raised source/drain regions. Conventionally, raised source/drain regions are specified for enabling contact between the raised source/drain regions and subsequent metallization layers. In addition, the raised source/drain regions provide pathways for carriers to travel. As a result, conventional transistors with raised source/drain regions often suffer from a raised source/drain region problem, characterized by unwanted parasitic capacitance in the form of fringing and overlap capacitances between the gate and the source/drain regions of the transistor.

Summary

An integrated circuit structure may include a transistor on a front-side semiconductor layer supported by an isolation layer. The transistor includes a first source/drain/body region. The integrated circuit structure may also include a raised source/drain/body region coupled to a backside of the first source/drain/body region of the transistor. The raised source/drain/body region may extend from the backside of the first source/drain/body region toward a backside dielectric layer supporting the isolation layer. The integrated circuit structure may further include a backside metallization coupled to the raised source/drain/body region.

A method of constructing an integrated circuit structure may include fabricating a transistor using a front-side semiconductor layer supported by an isolation layer. The transistor includes a first source/drain/body region. The method may also include exposing a backside of the first source/drain/body region.
The method may further include fabricating a raised source/drain/body region coupled to the backside of the first source/drain/body region of the transistor. The raised source/drain/body region may extend from the backside of the first source/drain/body region toward a first backside dielectric layer supporting the isolation layer. The method may also include fabricating a backside metallization coupled to the raised source/drain/body region.

An integrated circuit structure may include a transistor on a front-side semiconductor layer supported by an isolation layer. The transistor includes a first source/drain/body region. The integrated circuit structure may also include means for extending a backside of the first source/drain/body region of the transistor from the isolation layer toward a backside dielectric layer supporting the isolation layer. The integrated circuit structure may further include a backside metallization coupled to the backside of the first source/drain/body region via the extending means.

A radio frequency (RF) front-end module may include an integrated RF circuit structure. The integrated RF circuit structure may include a switching transistor on a front-side semiconductor layer supported by an isolation layer. The switching transistor includes a first source/drain/body region and a raised source/drain/body region coupled to a backside of the first source/drain/body region of the switching transistor. The raised source/drain/body region extends from the backside of the first source/drain/body region toward a backside dielectric layer supporting the isolation layer. The switching transistor also includes a backside metallization coupled to the raised source/drain/body region. The RF front-end module may further include an antenna coupled to an output of the switching transistor.

The foregoing has outlined rather broadly the features and technical advantages of the present disclosure so that the detailed description that follows may be better understood. Additional features and advantages of the present disclosure are described below. It should be appreciated by those skilled in the art that the present disclosure may be readily used as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the present disclosure as set forth in the appended claims. The novel features believed to be characteristic of the present disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.

Brief Description of the Drawings

For a more complete understanding of the present disclosure, reference is now made to the following description taken in conjunction with the accompanying drawings.

FIG. 1A is a schematic diagram of a radio frequency (RF) front-end (RFFE) module employing a diplexer, in accordance with an aspect of the present disclosure.
FIG. 1B is a schematic diagram of a radio frequency (RF) front-end (RFFE) module employing diplexers for a chipset to provide carrier aggregation, in accordance with aspects of the present disclosure.

FIG. 2A is a diagram of a diplexer design, in accordance with one aspect of the present disclosure.

FIG. 2B is a diagram of a radio frequency (RF) front-end module, in accordance with an aspect of the present disclosure.

FIGS. 3A-3E show cross-sectional views of an integrated radio frequency (RF) circuit structure during a layer transfer process, in accordance with aspects of the present disclosure.

FIG. 4 is a cross-sectional view of an integrated radio frequency (RF) circuit structure fabricated using a layer transfer process, in accordance with aspects of the present disclosure.

FIGS. 5A and 5B show an integrated circuit structure in which a post-layer transfer process forms backside raised source/drain regions of an active device, in accordance with aspects of the present disclosure.

FIGS. 6A-6E are cross-sectional views illustrating a process for fabricating an integrated circuit structure including backside raised source/drain regions, in accordance with aspects of the present disclosure.

FIGS. 7A-7E are cross-sectional views illustrating a process for fabricating an integrated circuit structure including backside extended source/drain/body regions, in accordance with aspects of the present disclosure.

FIGS. 8A-8E are cross-sectional views illustrating self-alignment between a source/drain/body region of an active device and a backside extended source/drain/body region of the active device, in accordance with aspects of the present disclosure.

FIG. 9 is a process flow diagram illustrating a method of constructing an integrated circuit structure including an active device having a backside extended source/drain/body region, in accordance with an aspect of the present disclosure.

FIG. 10 is a block diagram showing an exemplary wireless communication system in which a configuration of the present disclosure may be advantageously employed.

FIG. 11 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of a semiconductor component, according to one configuration.

Detailed Description

The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described herein may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts. As described herein, the use of the term "and/or" is intended to represent an "inclusive OR", and the use of the term "or" is intended to represent an "exclusive OR".

Mobile radio frequency (RF) chip designs (e.g., mobile RF transceivers) have migrated to deep sub-micron process nodes due to cost and power consumption considerations. The design complexity of mobile RF transceivers is further complicated by added circuit functions to support communication enhancements, such as carrier aggregation. Further design challenges for mobile RF transceivers include analog/RF performance considerations, including mismatch, noise, and other performance considerations.
The design of these mobile RF transceivers includes the use of passive devices, for example, to suppress resonance, and/or to perform filtering, bypassing, and coupling.

Successful fabrication of modern semiconductor chip products involves an interplay between the materials and the processes employed. In particular, the formation of conductive material plating for semiconductor fabrication in back-end-of-line (BEOL) processes is an increasingly challenging part of the process flow, particularly for maintaining a small feature size. The same challenge of maintaining a small feature size also applies to passive-on-glass (POG) technology, where high-performance components such as inductors and capacitors are built upon a highly insulating substrate that may also have a very low loss to support mobile RF transceiver design.

The design of these mobile RF transceivers may include the use of silicon-on-insulator (SOI) technology. SOI technology replaces a conventional silicon substrate with a layered silicon-insulator-silicon substrate to reduce parasitic device capacitance and improve performance. SOI-based devices differ from conventional silicon-based devices because the silicon junction is above an electrical insulator, typically a buried oxide (BOX) layer, in which the thickness of the BOX layer may be reduced. The reduced thickness of the BOX layer, however, may not sufficiently reduce the parasitic capacitance caused by the proximity between the active devices on the silicon layer and the substrate supporting the BOX layer. In addition, the active devices on the SOI layer may include complementary metal oxide semiconductor (CMOS) transistors.

Unfortunately, successful fabrication of transistors using SOI technology may involve the use of raised source/drain regions. Conventionally, raised source/drain regions enable contact between the raised source/drain regions and subsequent metallization layers. In addition, the raised source/drain regions provide pathways for carriers to travel. Conventional transistors with raised source/drain regions often suffer from a raised source/drain region problem, characterized by unwanted parasitic capacitance in the form of fringing and overlap capacitances between the gate and the source/drain regions. In addition, conventional CMOS technology is limited to epitaxial growth on a front side of active devices. As a result, aspects of the present disclosure include a post-layer transfer process to enable backside semiconductor deposition/growth for eliminating the raised source/drain region problem.

Various aspects of the present disclosure provide techniques for integrated circuit structures including transistors having backside extended (raised) source/drain/body regions. The process flow for semiconductor fabrication of the integrated circuit structure may include front-end-of-line (FEOL) processes, middle-of-line (MOL) (also referred to as middle-end-of-line (MEOL)) processes, and back-end-of-line (BEOL) processes. The front-end-of-line processes may include the set of process steps that form the active devices, such as transistors, capacitors, and diodes. The FEOL processes include ion implantation, annealing, oxidation, chemical vapor deposition (CVD) or atomic layer deposition (ALD), etching, chemical mechanical polishing (CMP), and epitaxy. The middle-of-line processes may include the set of process steps that enable connection of the transistors to the BEOL interconnects.
These steps include silicidation and contact formation as well as stress introduction. Back-end-of-line processing may include a set of process steps that form the interconnects that tie together the individual transistors and form circuits. Currently, copper and aluminum provide the interconnects, but as technology develops further, other conductive materials could be used.

It will be understood that the term "layer" includes films and is not to be interpreted as indicating vertical or horizontal thickness unless stated otherwise. As described herein, the term "substrate" may refer to a substrate of a cut wafer, or may refer to a substrate of an uncut wafer. Similarly, the terms chip and die may be used interchangeably unless such interchanging would tax credulity.

Aspects of the present disclosure describe integrated circuit structures including transistors with backside raised source/drain/body regions, which may be used as antenna switching transistors in integrated radio frequency (RF) circuit structures for high quality (Q) factor RF applications. In one configuration, a post-layer transfer process forms raised source/drain/body regions on the backside of the transistor. A post-layer transfer process may form a backside semiconductor layer on the backside of the source/drain regions of the transistor. The backside semiconductor layer may extend from a first surface of the isolation layer supporting the transistor to a second surface.

In this configuration, the post-layer transfer process may include a post-layer deposition process or a post-layer growth process for forming a backside semiconductor layer on the backside of the source/drain regions of the transistor. The raised source/drain/body regions are composed of epitaxially grown backside semiconductor material. Alternatively, the raised source/drain regions may be formed using chemical vapor deposition (CVD), atomic layer deposition (ALD), or other similar front-end manufacturing processes. In this configuration, the backside raised source/drain regions of the transistor can reduce the parasitic capacitance associated with front-side raised source/drain regions fabricated using conventional CMOS processes. That is, extending the source/drain regions to the backside of the transistor helps prevent the formation of parasitic capacitances between the body of the transistor and conventional front-side raised source/drain regions.

One goal driving the wireless communications industry is to deliver increased bandwidth to consumers. The use of carrier aggregation in current generation communications offers one possible solution for achieving this goal. By using two frequencies simultaneously for a single communication stream, carrier aggregation enables wireless carriers with licenses for two frequency bands (e.g., 700 MHz and 2 GHz) to maximize bandwidth in a specific geographic area. As increasing amounts of data are provided to end users, carrier aggregation implementations are complicated by noise that is generated at harmonic frequencies due to the frequencies used for data transmission. For example, a 700 MHz transmission may produce a third harmonic at 2.1 GHz that interferes with data broadcasts at the 2 GHz frequency, as illustrated in the sketch below.

For wireless communications, passive components are used to process signals in carrier aggregation systems. In a carrier aggregation system, signals utilize both high frequency bands and low frequency bands to communicate.
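To make the harmonic arithmetic above concrete, the following is a minimal sketch (not part of the original disclosure) that flags integer harmonics of a transmit carrier falling inside an aggregated receive band. The band edges and the harmonic-order limit are illustrative assumptions.

```python
def harmonic_collisions(tx_hz, band_lo_hz, band_hi_hz, max_order=5):
    """Return the harmonic orders of tx_hz that land inside [band_lo_hz, band_hi_hz]."""
    collisions = []
    for n in range(2, max_order + 1):
        harmonic = n * tx_hz
        if band_lo_hz <= harmonic <= band_hi_hz:
            collisions.append((n, harmonic))
    return collisions

# A 700 MHz carrier and a 2 GHz band with an illustrative +/- 100 MHz width.
for order, freq in harmonic_collisions(700e6, 1.9e9, 2.1e9):
    print(f"harmonic {order} at {freq / 1e9:.1f} GHz falls in the 2 GHz band")
# -> harmonic 3 at 2.1 GHz falls in the 2 GHz band
```

This is why the RF front end filters and physically arranges its passive components to suppress the offending harmonic rather than simply amplifying both carriers.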
In chipsets, passive components (e.g., duplexers) are often inserted between the antenna and the tuner (or radio frequency (RF) switch) to ensure high performance. Typically, a duplexer design includes an inductor and a capacitor. Duplexers can achieve high performance by using inductors and capacitors with high quality (Q) factors. High-performance duplexers can also be obtained by reducing electromagnetic coupling between components, which can be achieved through the arrangement of component geometry and orientation.

Figure 1A is a schematic diagram of a radio frequency (RF) front-end (RFFE) module 100 employing a duplexer 200 in accordance with an aspect of the present disclosure. The RF front-end module 100 includes a power amplifier 102, a duplexer/filter 104, and a radio frequency (RF) switch module 106. The power amplifier 102 amplifies the signal(s) to a certain power level for transmission. The duplexer/filter 104 filters input/output signals based on various parameters, including frequency, insertion loss, rejection, or other similar parameters. Additionally, the RF switch module 106 may select certain portions of the input signal for delivery to the remainder of the RF front-end module 100.

The RF front-end module 100 also includes tuner circuitry 112 (e.g., first tuner circuitry 112A and second tuner circuitry 112B), the duplexer 200, a capacitor 116, an inductor 118, a ground terminal 115, and an antenna 114. The tuner circuitry 112 (e.g., the first tuner circuitry 112A and the second tuner circuitry 112B) includes components such as a tuner, a portable data entry terminal (PDET), and a housekeeping analog-to-digital converter (HKADC). The tuner circuitry 112 may perform impedance tuning (e.g., voltage standing wave ratio (VSWR) optimization) for the antenna 114. The RF front-end module 100 also includes a passive combiner 108 coupled to a wireless transceiver (WTR) 120. The passive combiner 108 combines the detected power from the first tuner circuitry 112A and the second tuner circuitry 112B. The wireless transceiver 120 processes information from the passive combiner 108 and provides the information to a modem 130 (e.g., a mobile station modem (MSM)). The modem 130 provides digital signals to an application processor (AP) 140.

As shown in Figure 1A, the duplexer 200 is located between the tuner components of the tuner circuitry 112 and the capacitor 116, the inductor 118, and the antenna 114. The duplexer 200 may be placed between the antenna 114 and the tuner circuitry 112 to provide high system performance from the RF front-end module 100 to the chipset including the wireless transceiver 120, the modem 130, and the application processor 140. The duplexer 200 also performs frequency domain multiplexing on both high-band frequencies and low-band frequencies. After the duplexer 200 performs its frequency multiplexing function on the input signal, the output of the duplexer 200 is fed to an optional LC (inductor/capacitor) network including the capacitor 116 and the inductor 118. The LC network can provide additional impedance matching components for the antenna 114 when needed. Signals with specific frequencies are then transmitted or received by the antenna 114. Although a single capacitor and inductor are shown, multiple components are also contemplated.

Figure 1B is a schematic diagram of an RF front-end module 150 including a wireless local area network (WLAN) (e.g., WiFi) module 170 having a first duplexer 200-1 and a second duplexer 200-2 for use with a chipset 160 to provide carrier aggregation in accordance with an aspect of the present disclosure.
The WiFi module 170 includes the first duplexer 200-1 communicatively coupling an antenna 192 to a wireless local area network module (e.g., WLAN module 172). The RF front-end module 150 includes the second duplexer 200-2 communicatively coupling an antenna 194 to the wireless transceiver (WTR) 120 through a duplexer 180. The wireless transceiver 120 and the WLAN module 172 of the WiFi module 170 are coupled to a modem (MSM, e.g., a baseband modem) 130 that is powered by a power supply 152 through a power management integrated circuit (PMIC) 156. The chipset 160 also includes capacitors 162 and 164, and inductor(s) 166 to provide signal integrity. The PMIC 156, the modem 130, the wireless transceiver 120, and the WLAN module 172 each include capacitors (e.g., 158, 132, 122, and 174) and operate according to a clock 154. The geometry and arrangement of the various inductor and capacitor components in the chipset 160 can reduce electromagnetic coupling between components.

Figure 2A is a diagram of a duplexer 200 in accordance with an aspect of the present disclosure. The duplexer 200 includes a high band (HB) input port 212, a low band (LB) input port 214, and an antenna 216. The high-band path of the duplexer 200 includes a high-band antenna switch 210-1. The low-band path of the duplexer 200 includes a low-band antenna switch 210-2. Wireless devices including RF front-end modules may use the antenna switches 210 and the duplexer 200 to enable a wide range of frequency bands for the RF input and RF output of the wireless device. Additionally, the antenna 216 may be a multiple-input multiple-output (MIMO) antenna. Multiple-input multiple-output antennas will be widely used in the RF front-ends of wireless devices to support features such as carrier aggregation.

Figure 2B is a diagram of an RF front-end module 250 in accordance with an aspect of the present disclosure. The RF front-end module 250 includes an antenna switch (ASW) 210 and a duplexer 200 (or triplexer) to enable the wide range of frequency bands indicated in Figure 2A. Additionally, the RF front-end module 250 includes a filter 230, an RF switch 220, and a power amplifier 218 supported by a substrate 202. The filter 230 may include various LC filters with inductors (L) and capacitors (C) arranged along the substrate 202 for forming diplexers, triplexers, low pass filters, balun filters, and/or notch filters to block higher-order harmonics in the RF front-end module 250. The duplexer 200 may be implemented as a surface mount device (SMD) on a system board 201 (e.g., a printed circuit board (PCB) or packaging substrate). Alternatively, the duplexer 200 may be implemented on the substrate 202.

In this configuration, the RF front-end module 250 is implemented using silicon-on-insulator (SOI) technology, which helps reduce higher-order harmonics in the RF front-end module 250. SOI technology uses a layered silicon-insulator-silicon substrate in place of a conventional silicon substrate to reduce parasitic device capacitance and improve performance. SOI-based devices differ from conventional silicon-based devices because the silicon junction is located above an electrical insulator, typically a buried oxide (BOX) layer. However, the reduced thickness of the BOX layer may not be sufficient to reduce the parasitic capacitance caused by the proximity between the active device (on the silicon layer) and the substrate supporting the BOX layer.
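The proximity effect just described can be approximated with a simple parallel-plate model. This is a rough, hedged sketch for intuition only (it is not from the disclosure, and real device parasitics also involve fringing and layout effects); the device footprint and permittivity values are illustrative assumptions.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_capacitance(area_m2, thickness_m, eps_r):
    """Parallel-plate approximation: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Illustrative 10 um x 10 um device footprint over SiO2 (eps_r ~ 3.9).
area = 10e-6 * 10e-6
for d_nm in (100, 400, 1600):
    c = parallel_plate_capacitance(area, d_nm * 1e-9, 3.9)
    print(f"dielectric thickness {d_nm:5d} nm -> parasitic C ~ {c * 1e15:.2f} fF")
# Quadrupling the dielectric thickness cuts the parasitic capacitance to a
# quarter, which is why the layer transfer process described next targets a
# thicker dielectric between the active device and the substrate.
```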
As a result, aspects of the present disclosure include a layer transfer process to further separate the active device from the substrate, as shown in Figures 3A-3E.

Figures 3A-3E illustrate cross-sectional views of an integrated radio frequency (RF) circuit structure 300 during a layer transfer process in accordance with aspects of the present disclosure. As shown in Figure 3A, an RF silicon-on-insulator (SOI) device includes an active device 310 located on a buried oxide (BOX) layer 320 supported by a sacrificial substrate 301 (e.g., a bulk wafer). The RF SOI device also includes an interconnect 350 coupled to the active device 310 within a first dielectric layer 306. As shown in Figure 3B, a handle substrate 302 is bonded to the first dielectric layer 306 of the RF SOI device. Additionally, the sacrificial substrate 301 is removed. Removal of the sacrificial substrate 301 using the layer transfer process enables high-performance, low-parasitic RF devices by increasing the dielectric thickness. That is, the parasitic capacitance of the RF SOI device is inversely proportional to the dielectric thickness, which determines the distance between the active device 310 and the handle substrate 302.

As shown in Figure 3C, once the handle substrate 302 is secured and the sacrificial substrate 301 is removed, the RF SOI device is flipped. As shown in Figure 3D, a post-layer transfer metallization process is performed using, for example, a common complementary metal oxide semiconductor (CMOS) process. As shown in Figure 3E, the integrated RF circuit structure 300 is completed by depositing a passivation layer, opening bond pads, depositing a redistribution layer, and forming conductive bumps/pillars to enable bonding of the integrated RF circuit structure 300 to a system board, such as a printed circuit board (PCB).

Referring again to Figure 3A, the RF SOI device may include a trap-rich layer between the sacrificial substrate 301 and the BOX layer 320. Additionally, the sacrificial substrate 301 may be replaced with a handle substrate, and the thickness of the BOX layer 320 may be increased to improve harmonics. While this arrangement of the RF SOI device may provide improved harmonics relative to pure silicon or SOI implementations, RF SOI devices are limited by nonlinear responses from the handle substrate, especially when a silicon handle substrate is used. That is, in Figure 3A, the increased thickness of the BOX layer 320 does not provide sufficient distance between the active device 310 and the sacrificial substrate 301 relative to the configuration shown in Figures 3B-3E. Additionally, the body of the active device 310 in the RF SOI device may not be connected.

Figure 4 is a cross-sectional view of an integrated RF circuit structure 400 fabricated using a layer transfer process in accordance with aspects of the present disclosure. Representatively, the integrated RF circuit structure 400 includes an active device 410 having gate, body, and source/drain regions formed on an isolation layer 420. In a silicon-on-insulator (SOI) implementation, the isolation layer 420 is a buried oxide (BOX) layer, and the body and source/drain regions are formed from an SOI layer that includes shallow trench isolation (STI) regions supported by the BOX layer.

The integrated RF circuit structure 400 also includes middle-of-line (MEOL)/back-end-of-line (BEOL) interconnects coupled to the source/drain regions of the active device 410. As described herein, the MEOL/BEOL layers are referred to as front-side layers.
In contrast, the layer supporting the isolation layer 420 may be referred to herein as a backside layer. According to this nomenclature, a front-side interconnect 450 is coupled to the source/drain regions of the active device 410 through a front-side contact 412 and is disposed in a front-side dielectric layer 406. Additionally, a handle substrate 402 is directly coupled to the front-side dielectric layer 406. In this configuration, a backside dielectric 440 is adjacent to, and may support, the isolation layer 420. Additionally, a backside metallization 430 is coupled to the front-side interconnect 450.

As shown in Figure 4, the layer transfer process provides increased separation between the active device 410 and the handle substrate 402 to improve the harmonics of the integrated RF circuit structure 400. Although the layer transfer process enables high-performance, low-parasitic RF devices, the integrated RF circuit structure 400 may suffer from a floating body effect. Accordingly, the performance of the integrated RF circuit structure 400 may be further improved by using post-transfer metallization to provide access to the backside of the active device 410 to contact the body region of the active device 410.

Various aspects of the present disclosure provide techniques for post-layer transfer deposition/growth processes on the backside of active devices of integrated radio frequency (RF) circuit structures. In contrast, access to active devices formed during the front-end-of-line (FEOL) process is conventionally provided during middle-of-line (MEOL) processing, which provides contacts between the gate and source/drain regions of the active device and the back-end-of-line (BEOL) interconnect layers (e.g., M1, M2, etc.). Aspects of the present disclosure relate to post-layer transfer growth/deposition processes for forming backside extended (raised) source/drain/body regions of transistors, which may be used as antenna switching transistors in integrated radio frequency (RF) circuit structures for high quality (Q) factor RF applications. Other applications include active devices in low power amplifier modules, low noise amplifiers, and antenna diversity switches.

Figure 5A is a cross-sectional view of an integrated circuit structure 500 in which a post-layer transfer process is performed on the backside of source/drain (S/D) regions of an active device (e.g., a transistor) in accordance with aspects of the present disclosure. Representatively, the integrated circuit structure 500 includes an active device 510 having gate, body, and source/drain (S/D) regions formed on an isolation layer 520. For a silicon-on-insulator (SOI) implementation, the isolation layer 520 may be a buried oxide (BOX) layer, with the body and source/drain regions formed from the SOI layer. In this configuration, shallow trench isolation (STI) regions are also supported by the BOX layer.

The integrated RF circuit structure 500 includes a front-side metallization 570 (e.g., a first BEOL interconnect (M1)) disposed in a front-side dielectric layer 506. The front-side metallization is coupled through a via 560 to a third portion 550-3 of a backside metallization 550, which is disposed in a backside dielectric layer 540. Additionally, the gate of the active device 510 includes a gate contact 512, which may be composed of a front-side silicide layer. Additionally, a handle substrate 502 is coupled to the front-side dielectric layer 506. The backside dielectric layer 540 is adjacent to, and may support, the isolation layer 520.
In this configuration, a post-layer transfer metallization process forms the backside metallization 550.

In aspects of the present disclosure, a post-layer transfer process is used to provide a backside semiconductor layer on the backside of the source/drain regions of the active device 510. In aspects of the present disclosure, the backside semiconductor layer may be deposited as an amorphous semiconductor layer. Alternatively, the backside semiconductor layer may be grown epitaxially as part of a post-layer transfer growth process. Once formed, the backside semiconductor layer may optionally undergo a post-deposition annealing process (e.g., a low temperature anneal or a short localized laser anneal) to form raised source/drain (S/D) regions 530. In this configuration, the backside raised source/drain regions 530 extend from the backside of the source/drain regions of the active device 510 into the isolation layer 520. Once formed, a backside contact 532 (e.g., a backside silicide layer) may be deposited on the backside raised source/drain regions 530 distal from the front side of the source/drain regions. A post-layer transfer metallization process is then performed to couple a first portion 550-1 and a second portion 550-2 of the backside metallization 550 to the backside contacts 532 of the backside raised source/drain regions 530 of the active device 510. As shown in Figure 5A, the front-side metallization 570 is disposed distal from the backside metallization 550.

Figure 5B is a cross-sectional view of an integrated circuit structure 580 in which post-layer transfer processing is also performed on the backside of a source/drain (S/D) region 516 of an active device 510 (e.g., a transistor) in accordance with aspects of the present disclosure. As will be appreciated, the configuration of the integrated circuit structure 580 is similar to the configuration of the integrated circuit structure 500 of Figure 5A. However, in the configuration shown in Figure 5B, the active device 510 includes only one of the backside raised source/drain regions 530. Instead, a backside contact 582 is located directly on the backside of the source/drain region 516 of the active device 510. Additionally, the second portion 550-2 of the backside metallization 550 is coupled to the backside contact 582 of the source/drain region 516 of the active device 510.

Referring again to Figure 5A, the backside raised source/drain regions 530 are provided in the isolation layer 520 and arranged to enable contact with the backside metallization 550. This extension of the source/drain regions of the active device 510 helps prevent parasitic capacitance from forming between the body of the active device 510 and conventional front-side raised source/drain regions. In this configuration, the post-layer transfer process may include a post-layer deposition process or a post-layer growth process for forming the backside raised source/drain regions 530. In this configuration, the backside raised source/drain regions 530 may reduce the parasitic capacitance associated with raised source/drain regions fabricated using conventional CMOS processes.

In accordance with aspects of the present disclosure, the handle substrate 502 may be composed of a semiconductor material, such as silicon. In this configuration, the handle substrate 502 may include at least one other active device. Alternatively, the handle substrate 502 may be a passive substrate to further improve harmonics by reducing parasitic capacitance. In this configuration, the handle substrate 502 may include at least one other passive device.
As described herein, the term "passive substrate" may refer to a substrate of a diced wafer or panel, or may refer to a substrate of an undiced wafer/panel. In one configuration, the passive substrate is composed of glass, air, quartz, sapphire, high-resistivity silicon, or other similar passive materials. The passive substrate may also be a coreless substrate.

Figures 6A-6E are cross-sectional views illustrating a process for fabricating an integrated circuit structure including backside extending source/drain regions in accordance with aspects of the present disclosure. As shown in Figure 6A, an integrated circuit structure 600 is shown in a configuration similar to the configuration of the integrated circuit structure 500 shown in Figure 5A. However, in the configuration shown in Figure 6A, after active devices 510 (510-1 and 510-2) are formed, a layer transfer process is performed to bond the handle substrate 502 to the front-side dielectric layer 506. As shown in Figure 6B, the post-layer transfer process begins with the deposition of the backside dielectric layer 540. Although a single layer is shown, it will be appreciated that multiple dielectric layers may be deposited.

As shown in Figure 6C, the post-layer transfer process continues with patterning and etching of the backside dielectric layer 540 and the isolation layer 520 to expose the backsides of the source/drain regions of the active devices 510. In Figure 6D, a post-layer transfer deposition/growth process is performed to create the backside raised source/drain regions 530. In Figure 6E, a post-layer transfer metallization process is performed to couple the backside metallization 550 to the backside raised source/drain regions 530 through the backside contacts 532. Additionally, a fifth portion 550-5 of the backside metallization 550 is coupled to the front-side metallization 570 through the via 560. In this configuration, the third portion 550-3 of the backside metallization 550 is coupled to the backside contact 532 of one of the backside raised source/drain regions 530, and a fourth portion 550-4 of the backside metallization 550 is coupled to the backside contact 532 of one of the backside raised source/drain regions 530 of the second active device 510-2.

Different materials can be used to stress the active devices during the growth process. For example, PFET devices can be stressed using germanium growth, up to 40% germanium in one configuration. NMOS devices may be stressed using, for example, carbon-doped silicon, with the percentage of carbon being no more than 3 to 4 percent. This percentage of carbon blocks dislocations in the silicon. It should be appreciated that the raised body region may also include stress sources.

Figures 7A-7E are cross-sectional views illustrating a process for fabricating an integrated circuit structure including a backside extending source/drain/body region in accordance with aspects of the present disclosure. As shown in Figure 7A, an integrated circuit structure 700 is shown in a configuration similar to that of the integrated circuit structure 500 shown in Figure 5A. However, in the configuration shown in Figure 7A, after the active devices 510 (510-1 and 510-2) are formed, a layer transfer process is performed to bond the handle substrate 502 to the front-side dielectric layer 506. Additionally, a first portion 570-1 of the front-side metallization 570 couples a front-side contact 514 of the source/drain region of the first active device 510-1 to the gate contact 512 of the second active device 510-2.
Additionally, a second portion 570-2 of the front-side metallization 570 couples the front-side contact 514 of the source/drain region of the second active device 510-2 to the via 560.

As shown in Figure 7B, the post-layer transfer process again begins with the deposition of the backside dielectric layer 540. As shown in Figure 7C, the post-layer transfer process continues with the patterning and etching of the backside dielectric layer 540 and the isolation layer 520 to expose the backside of the source/drain regions of the first active device 510-1. In this aspect of the disclosure, the post-layer transfer process also exposes the body of the second active device 510-2. In Figure 7D, a post-layer transfer deposition/growth process is performed to create the backside raised source/drain regions 530 and backside raised body regions 590.

In Figure 7E, a post-layer transfer metallization process is performed to couple the backside metallization 550 to the backside raised source/drain regions 530 through the backside contacts 532. Additionally, the fourth portion 550-4 of the backside metallization 550 is coupled to the second portion of the front-side metallization 570 through the via 560. In this configuration, the third portion 550-3 of the backside metallization 550 is coupled to a backside contact 592 of the backside raised body region 590. In this aspect of the disclosure, the backside raised body region 590 is doped with a different dopant than the backside raised source/drain regions 530. Additionally, the backside raised body region 590 of the first active device 510-1 is doped with a different dopant than the dopant of the backside raised body region 590 of the second active device 510-2.

Figures 8A-8E are cross-sectional views illustrating a process for self-alignment between the source/drain/body regions of an active device and the backside extending source/drain/body regions of the active device in accordance with aspects of the present disclosure. As shown in Figure 8A, an integrated circuit structure 800 is shown in a configuration similar to the configuration of the integrated circuit structure 700 shown in Figure 7A. However, in the configuration shown in Figure 8A, the layer transfer process that bonds the handle substrate 502 to the front-side dielectric layer 506 after forming the active devices 510 (510-1 and 510-2) is not shown. Additionally, the configuration of the integrated circuit structure shown in Figure 8D also includes the first portion 570-1 of the front-side metallization 570, which couples the front-side contact 514 of the source/drain region of the first active device 510-1 to the gate contact 512 of the second active device 510-2. Additionally, the second portion 570-2 of the front-side metallization 570 couples the front-side contact 514 of the source/drain region of the second active device 510-2 to the via 560.

As shown in Figure 8B, an ion implantation process is performed to implant ions into the backside dielectric layer 540 and the isolation layer 520. The implant is performed from the front side of the integrated circuit structure 800. Specific dopants (e.g., high doses of boron) can be used to damage the buried oxide layer (i.e., create defects in it). As shown in Figure 8C, the ion implantation process is blocked by the gates of the active devices 510.
As a result, the implanted defects are typically limited to areas within the backside dielectric layer 540 and the isolation layer 520 that are close to the source/drain regions of the active devices 510.

As shown in Figure 8D, a post-layer transfer masking process is performed by depositing a photoresist 594 and exposing the implanted defects, for example, within the semiconductor (e.g., silicon (Si)) layer to be etched. As shown in Figure 8E, the process continues with etching of the backside dielectric layer 540 and the isolation layer 520 to expose the backside of the source/drain regions of the first active device 510-1 and the backside of the source/drain regions of the second active device 510-2. In this aspect of the disclosure, the implanted defects enable self-alignment between the source/drain/body regions of the active devices 510 and the backside extending source/drain/body regions. That is, the backside etch does not reach the gate. Alternatively, the implanted defects can provide an etch stop layer and reduce the etch rate to support the backside raised source/drain/body regions.

Figure 9 is a process flow diagram illustrating a method 900 of constructing an integrated circuit structure including an active device having a backside extending source/drain/body region, in accordance with an aspect of the present disclosure. In block 902, a transistor is fabricated using a front-side semiconductor layer supported by an isolation layer. For example, as shown in Figure 6A, the active device 510 is fabricated using a front-side semiconductor layer (e.g., a silicon-on-insulator (SOI) layer) supported by an isolation layer (e.g., a buried oxide (BOX) layer). In the configuration shown in Figures 6A-6E, the front-side metallization is fabricated in the front-side dielectric layer on the active device. For example, as shown in Figure 6A, the front-side metallization 570 is coupled to the front-side via 560 that extends through a shallow trench isolation (STI) region and the isolation layer 520. This part of the process for fabricating the transistor is performed before the layer transfer process.

For example, a layer transfer process is performed in which the handle substrate 502 is bonded to the front-side dielectric layer 506, as shown in Figure 6A. The layer transfer process also includes removal of the sacrificial substrate. As shown in Figure 3B, the layer transfer process includes removal of the sacrificial substrate 301. In this aspect of the disclosure, fabrication of the raised backside source/drain/body regions is performed as part of a post-layer transfer process.

Referring again to Figure 9, in block 904, the backside of a first source/drain/body region of the transistor is exposed. For example, as shown in Figure 6B, the post-layer transfer raised source/drain/body formation process may begin with the deposition of the backside dielectric layer 540 on the isolation layer 520. As shown in Figure 6C, the backside of the source/drain regions of the active device 510 is exposed. In block 906, raised source/drain/body regions are fabricated. For example, as shown in Figure 6D, a raised source/drain (S/D) region is coupled to the backside of the source/drain region of the active device 510. The raised source/drain regions may extend from the backside of the source/drain regions toward the backside dielectric layer 540 supporting the isolation layer 520.
Alternatively, the backside of a second source/drain/body region may be exposed to enable the formation of another raised source/drain/body region.

In accordance with aspects of the present disclosure, the raised source/drain/body regions may be epitaxially grown or fabricated as part of an amorphous deposition process. For example, as shown in Figure 6D, the epitaxial growth process may include selectively growing a backside semiconductor layer on the exposed backside of the source/drain regions of the active device 510. The epitaxial growth process also includes subjecting the backside semiconductor layer to an annealing process to form the raised source/drain regions. Once the raised source/drain regions are formed, etching of the surface of the backside dielectric layer 540 and/or the raised source/drain regions of the active device 510 is performed. By providing backside raised source/drain regions that extend away from the front side of the integrated circuit structure 500, parasitic capacitances between the transistor gates and conventional raised source/drain regions are avoided.

In accordance with aspects of the present disclosure, a post-layer transfer growth/deposition process is described for forming the backside raised source/drain/body regions. The post-layer transfer growth process can involve a pre-clean portion, a growth portion, and a post-deposition anneal. The post-deposition anneal may be a low temperature anneal (e.g., below 350°C) or a short localized laser anneal. Additionally, the backside raised source/drain/body regions may or may not have a single crystal structure. For example, the backside raised source/drain/body regions can be formed by fully amorphous deposition, followed by a solid phase epitaxial anneal to form a single crystal structure. Alternatively, where single crystalline material is not required, polysilicon, a silicon alloy, or other similar semiconductor compounds may be deposited to provide the backside semiconductor layer.

When an epitaxial growth process is used to form the backside semiconductor layer, low-temperature epitaxial growth can be performed using trisilane. Due to its specific growth mechanism of enhanced H (hydrogen) desorption, trisilane can allow growth of the backside semiconductor layer (e.g., silicon) at lower temperatures, below 350°C. In contrast, conventional semiconductor layers grown at temperatures below 500°C are defective regardless of the carrier gas, pressure, and precursor flow used. Additionally, the epitaxially grown backside semiconductor layer may extend above or below the surface of the wafer on which the layer is grown.

In block 908 of Figure 9, a backside metallization is fabricated to couple to the raised source/drain regions. As shown in Figure 6E, the backside contacts 532 are deposited on the backside raised source/drain regions 530. Additionally, a second backside dielectric layer 540-2 is deposited over the backside contacts 532 and the first backside dielectric layer 540-1. Once deposited, the second backside dielectric layer 540-2 is patterned in accordance with the backside contacts 532. The second backside dielectric layer 540-2 is then etched (e.g., using a dry plasma etch and cleaning process) to expose a portion of the backside contacts 532.
The backside metallization 550 is then deposited over the exposed portions of the backside contacts 532 to contact the source/drain regions of the active device 510.

According to another aspect of the present disclosure, an integrated circuit structure is described that includes a transistor on a front-side semiconductor layer supported by an isolation layer. The transistor includes a first source/drain/body region. The integrated circuit structure may also include means for extending a backside of the first source/drain/body region of the transistor from the isolation layer toward a backside dielectric layer supporting the isolation layer. The integrated circuit structure may further include a backside metallization coupled to the backside of the first source/drain/body region via the extending means. The extending means may be the raised source/drain regions shown in Figures 5A and 5B. The extending means may also be the raised body region shown in Figures 7D and 7E. In another aspect, the aforementioned means may be any module or any device configured to perform the functions recited by the aforementioned means.

Unfortunately, successful fabrication of transistors using silicon-on-insulator (SOI) technology may involve the use of raised source/drain regions. Conventionally, raised source/drain electrodes enable contact between the raised source/drain regions and subsequent metallization layers. Additionally, the raised source/drain regions provide pathways for carriers to travel. Unfortunately, conventional transistors with raised source/drain regions often suffer from the raised source/drain region problem. Additionally, conventional CMOS technology is limited to epitaxial growth on the front side of active devices. As a result, aspects of the present disclosure include post-layer transfer processes to enable backside semiconductor deposition/growth to eliminate raised source/drain region issues.

Aspects of the present disclosure describe integrated circuit structures including transistors with backside raised source/drain/body regions, which may be used as antenna switching transistors in integrated radio frequency (RF) circuit structures for high quality (Q) factor RF applications. In one configuration, post-layer transfer metallization is used to form raised source/drain/body regions on the backside of the transistor. A post-layer transfer process may form a backside semiconductor layer on the backside of the source/drain regions of the transistor. The backside semiconductor layer may extend from a first surface of the isolation layer supporting the transistor to a second surface.

In this configuration, the post-layer transfer process may include a post-layer deposition process or a post-layer growth process for forming a backside semiconductor layer on the backside of the source/drain regions of the transistor. A subsequent annealing process is applied to the semiconductor layer to form the raised source/drain regions on the backside of the transistor. In this configuration, the backside raised source/drain regions of the transistor can reduce the parasitic capacitance associated with front-side raised source/drain regions fabricated using conventional CMOS processes. The overall post-layer transfer flow of method 900 is summarized in the sketch below.
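As a reading aid only (this sketch is not part of the disclosure), the blocks of method 900 and the thermal budget described above can be captured in a short, hedged outline; the step descriptions and the 350°C anneal ceiling follow the text, while the data structure and function names are illustrative assumptions.

```python
from dataclasses import dataclass

ANNEAL_CEILING_C = 350  # low-temperature anneal limit described above

@dataclass
class ProcessStep:
    block: str
    description: str

METHOD_900 = [
    ProcessStep("902", "Fabricate transistor on front-side semiconductor layer over isolation layer"),
    ProcessStep("-",   "Layer transfer: bond handle substrate, remove sacrificial substrate"),
    ProcessStep("904", "Expose backside of first source/drain/body region"),
    ProcessStep("906", "Grow/deposit backside semiconductor; anneal to form raised region"),
    ProcessStep("908", "Fabricate backside metallization coupled to raised region"),
]

def check_anneal(temp_c: float) -> str:
    """Flag anneal recipes that exceed the post-layer-transfer thermal budget."""
    return "ok" if temp_c <= ANNEAL_CEILING_C else "exceeds post-layer-transfer budget"

for step in METHOD_900:
    print(f"block {step.block}: {step.description}")
print("anneal at 325 C:", check_anneal(325))   # -> ok
print("anneal at 500 C:", check_anneal(500))   # -> exceeds post-layer-transfer budget
```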
Extending the source/drain regions to the backside of the transistor in this way helps prevent parasitic capacitance from forming between the body of the transistor and conventional front-side raised source/drain regions.

Figure 10 is a block diagram illustrating an exemplary wireless communications system 1000 in which an aspect of the present disclosure may be advantageously employed. For illustration purposes, Figure 10 shows three remote units 1020, 1030, and 1050 and two base stations 1040. It will be appreciated that a wireless communications system may have many more remote units and base stations. The remote units 1020, 1030, and 1050 include IC devices 1025A, 1025C, and 1025B, which include the disclosed backside semiconductor growth. It will be appreciated that other devices may also include the disclosed backside semiconductor growth, such as base stations, switching equipment, and network equipment. Figure 10 shows forward link signals 1080 from the base stations 1040 to the remote units 1020, 1030, and 1050, and reverse link signals 1090 from the remote units 1020, 1030, and 1050 to the base stations 1040.

In Figure 10, the remote unit 1020 is shown as a mobile phone, the remote unit 1030 is shown as a portable computer, and the remote unit 1050 is shown as a fixed-location remote unit in a wireless local loop system. For example, the remote units may be a mobile phone, a handheld personal communication system (PCS) unit, a portable data unit such as a personal digital assistant (PDA), a GPS-enabled device, a navigation device, a set top box, a music player, a video player, an entertainment unit, a fixed-location data unit such as meter reading equipment, or any other communication device that stores or retrieves data or computer instructions, or a combination thereof. Although Figure 10 illustrates remote units in accordance with aspects of the disclosure, the disclosure is not limited to these exemplary illustrated units. Aspects of the present disclosure may be suitably employed in many devices, including the disclosed RF devices.

Figure 11 is a block diagram illustrating a design workstation used for circuit, layout, and logic design of semiconductor components, such as the RF devices disclosed above. A design workstation 1100 includes a hard disk 1101 containing operating system software, support files, and design software such as Cadence or OrCAD. The design workstation 1100 also includes a display 1102 to facilitate the design of a circuit 1110 or a semiconductor component 1112, such as an RF device. A storage medium 1104 is provided for tangibly storing the circuit design 1110 or the semiconductor component 1112. The circuit design 1110 or the semiconductor component 1112 may be stored on the storage medium 1104 in a file format such as GDSII or GERBER. The storage medium 1104 may be a CD-ROM, DVD, hard disk, flash memory, or other appropriate device. In addition, the design workstation 1100 includes a drive 1103 for accepting input from or writing output to the storage medium 1104.

Data recorded on the storage medium 1104 may specify logic circuit configurations, pattern data for photolithography masks, or mask pattern data for serial write tools such as electron beam lithography. The data may further include logic verification data, such as timing diagrams or net circuits associated with logic simulations.
Providing data on the storage medium 1104 facilitates the design of the circuit 1110 or the semiconductor component 1112 by reducing the number of processes used to design a semiconductor wafer.

For a firmware and/or software implementation, the methodologies may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. A machine-readable medium tangibly embodying instructions may be used in implementing the methodologies described herein. For example, software codes may be stored in a memory and executed by a processor unit. The memory may be implemented within the processor unit or external to the processor unit. As used herein, the term "memory" refers to types of long-term, short-term, volatile, nonvolatile, or other memory and is not to be limited to a particular type of memory, a particular number of memories, or the type of media upon which memory is stored.

If implemented in firmware and/or software, the functions may be stored as one or more instructions or code on a computer-readable medium. Examples include computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. As used herein, disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

In addition to storage on computer-readable media, instructions and/or data may be provided as signals on transmission media included in a communications apparatus. For example, a communications apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.

Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the technology of the disclosure as defined by the appended claims. For example, relative terms such as "above" and "below" are used with respect to a substrate or electronic device. Of course, if the substrate or electronic device is inverted, above becomes below, and vice versa. Additionally, if oriented sideways, above and below may refer to the sides of a substrate or electronic device. Furthermore, the scope of the present application is not intended to be limited to the particular configurations of the processes, machines, articles of manufacture, compositions of matter, means, methods, and steps described in the specification.
As those of ordinary skill in the art will readily appreciate from this disclosure, processes, machines, articles of manufacture, compositions of matter, means, methods, or steps presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding configurations described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, articles of manufacture, compositions of matter, means, methods, or steps.
Technologies for flexible I/O protocol acceleration include a computing device having a root complex, a smart endpoint coupled to the root complex, and an offload complex coupled to the smart endpoint. The smart endpoint receives an I/O transaction that originates from the root complex and parses the I/O transaction based on an I/O protocol and identifies an I/O command. The smart endpoint may parse the I/O transaction based on endpoint firmware that may be programmed by the computing device. The smart endpoint accelerates the I/O command and provides a smart context to the offload complex. The smart endpoint may copy the I/O command to memory of the smart endpoint or the offload complex. The smart endpoint may identify protocol data based on the I/O command and copy the protocol data to the memory of the smart endpoint or the offload complex. Other embodiments are described and claimed.
1. A smart endpoint for I/O protocol acceleration, the smart endpoint comprising: a transaction layer to receive an I/O transaction originating from a root port of a computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is also coupled to an offload complex of the computing device; a protocol parser to (i) parse the I/O transaction based on an I/O protocol in response to receipt of the I/O transaction, and (ii) identify an I/O command in response to the parsing of the I/O transaction; and a protocol accelerator to (i) accelerate the I/O command, and (ii) provide a smart context to the offload complex in response to the acceleration of the I/O command.

2. The smart endpoint of claim 1, further comprising a firmware manager to program endpoint firmware of the smart endpoint, wherein parsing the I/O transaction comprises parsing the I/O transaction based on the endpoint firmware.

3. The smart endpoint of claim 1, wherein accelerating the I/O command comprises copying the I/O command to a memory of the smart endpoint.

4. The smart endpoint of claim 1, wherein accelerating the I/O command comprises copying the I/O command to a memory of the offload complex.

5. The smart endpoint of claim 1, wherein accelerating the I/O command comprises: identifying protocol data associated with the I/O command; and copying the protocol data to a memory of the smart endpoint.

6. The smart endpoint of claim 1, wherein accelerating the I/O command comprises: identifying protocol data associated with the I/O command; and copying the protocol data to a memory of the offload complex.

7. The smart endpoint of claim 1, wherein: parsing the I/O transaction comprises determining whether the I/O transaction is a doorbell notification; identifying the I/O command comprises identifying the I/O command in a host memory in response to determining that the I/O transaction is a doorbell notification; and accelerating the I/O command comprises reading the I/O command from the host memory.

8. The smart endpoint of claim 7, wherein determining whether the I/O transaction is a doorbell notification comprises determining whether the I/O transaction includes a tail pointer update.

9. The smart endpoint of claim 7, wherein providing the smart context to the offload complex comprises providing the I/O command to the offload complex.

10. The smart endpoint of claim 7, wherein accelerating the I/O command further comprises: identifying protocol data in the host memory based on the I/O command; and reading the protocol data from the host memory.

11. The smart endpoint of claim 10, wherein providing the smart context to the offload complex comprises providing the protocol data to the offload complex.

12. The smart endpoint of claim 1, wherein: the protocol accelerator is further to receive a response to the I/O command from the offload complex in response to providing the smart context to the offload complex; and the transaction layer is further to forward the response to the root complex in response to receipt of the response.

13. The smart endpoint of claim 12, wherein receiving the response comprises: receiving a doorbell notification from the offload complex; and reading the response from the offload complex in response to receipt of the doorbell notification.

14. A method for I/O protocol acceleration, the method comprising: receiving, by a smart endpoint of a computing device, an I/O transaction originating from a root port of the computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is also coupled to an offload complex of the computing device; parsing, by the smart endpoint, the I/O transaction based on an I/O protocol in response to receiving the I/O transaction; identifying, by the smart endpoint, an I/O command in response to parsing the I/O transaction; accelerating the I/O command by the smart endpoint; and providing, by the smart endpoint, a smart context to the offload complex in response to accelerating the I/O command.

15. The method of claim 14, further comprising programming, by the computing device, endpoint firmware of the smart endpoint, wherein parsing the I/O transaction comprises parsing the I/O transaction based on the endpoint firmware.

16. The method of claim 14, wherein: parsing the I/O transaction comprises determining whether the I/O transaction is a doorbell notification; identifying the I/O command comprises identifying the I/O command in a host memory in response to determining that the I/O transaction is a doorbell notification; and accelerating the I/O command comprises reading the I/O command from the host memory.

17. The method of claim 16, wherein providing the smart context to the offload complex comprises providing the I/O command to the offload complex.

18. The method of claim 16, wherein accelerating the I/O command further comprises: identifying protocol data in the host memory based on the I/O command; and reading the protocol data from the host memory.

19. The method of claim 18, wherein providing the smart context to the offload complex comprises providing the protocol data to the offload complex.

20. One or more computer-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to: receive, by a smart endpoint of the computing device, an I/O transaction originating from a root port of the computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is also coupled to an offload complex of the computing device; parse, by the smart endpoint, the I/O transaction based on an I/O protocol in response to receiving the I/O transaction; identify, by the smart endpoint, an I/O command in response to parsing the I/O transaction; accelerate the I/O command by the smart endpoint; and provide, by the smart endpoint, a smart context to the offload complex in response to accelerating the I/O command.

21. The one or more computer-readable storage media of claim 20, further comprising a plurality of instructions stored thereon that, in response to being executed, cause the computing device to program endpoint firmware of the smart endpoint, wherein parsing the I/O transaction comprises parsing the I/O transaction based on the endpoint firmware.

22. The one or more computer-readable storage media of claim 20, wherein: parsing the I/O transaction comprises determining whether the I/O transaction is a doorbell notification; identifying the I/O command comprises identifying the I/O command in a host memory in response to determining that the I/O transaction is a doorbell notification; and accelerating the I/O command comprises reading the I/O command from the host memory.

23. The one or more computer-readable storage media of claim 22, wherein providing the smart context to the offload complex comprises providing the I/O command to the offload complex.

24. The one or more computer-readable storage media of claim 22, wherein accelerating the I/O command further comprises: identifying protocol data in the host memory based on the I/O command; and reading the protocol data from the host memory.

25. The one or more computer-readable storage media of claim 24, wherein providing the smart context to the offload complex comprises providing the protocol data to the offload complex.
Technology for Flexible Protocol Acceleration

Technical Field

The present disclosure relates to techniques for flexible protocol acceleration.

Background

A typical PCI Express (PCIe) I/O device includes a fixed-function endpoint. The fixed-function endpoint usually includes an endpoint transaction layer, a basic hardware configuration space, and an endpoint interface to other parts of the I/O device. The endpoint may also include fixed-function protocol conversion. PCIe devices may also include hardware switches, bridges, or other components that establish a fixed PCI hierarchy.

Current computing systems can use software virtualization to share computing resources, such as disk drives or other storage devices, among multiple tenants, where the software virtualization is usually performed by a virtual machine monitor, a hypervisor, or other virtualized guest software executed by a host processor. Some computing systems can support bare-metal virtualization by offloading certain virtualization tasks to an offload complex.

Summary of the Invention

According to an aspect of the present disclosure, there is provided a smart endpoint for I/O protocol acceleration, the smart endpoint including: a transaction layer to receive an I/O transaction originating from a root port of a computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is also coupled to an offload complex of the computing device; a protocol parser to (i) parse the I/O transaction based on an I/O protocol in response to receipt of the I/O transaction, and (ii) identify an I/O command in response to the parsing of the I/O transaction; and a protocol accelerator to (i) accelerate the I/O command, and (ii) provide a smart context to the offload complex in response to the acceleration of the I/O command.

According to an aspect of the present disclosure, there is provided a method for I/O protocol acceleration, the method including: receiving, by a smart endpoint of a computing device, an I/O transaction originating from a root port of the computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is also coupled to an offload complex of the computing device; parsing, by the smart endpoint, the I/O transaction based on an I/O protocol in response to receiving the I/O transaction; identifying, by the smart endpoint, an I/O command in response to parsing the I/O transaction; accelerating the I/O command by the smart endpoint; and providing, by the smart endpoint, a smart context to the offload complex in response to accelerating the I/O command.

According to an aspect of the present disclosure, there are provided one or more computer-readable storage media including a plurality of instructions stored thereon that, in response to being executed, cause a computing device to: receive, by a smart endpoint of the computing device, an I/O transaction originating from a root port of the computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is also coupled to an offload complex of the computing device; parse, by the smart endpoint, the I/O transaction based on an I/O protocol in response to receiving the I/O transaction; identify, by the smart endpoint, an I/O command in response to parsing the I/O transaction; accelerate the I/O command by the smart endpoint; and provide, by the smart endpoint, a smart context to the offload complex in response to accelerating the I/O command.
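To illustrate the doorbell-notification flow recited in the claims and summarized above, here is a minimal, hedged simulation sketch. It is not the claimed hardware: the queue layout, the `TAIL_DOORBELL` register offset, and all class and field names are illustrative assumptions chosen to mirror the claimed steps (detect a tail pointer update, read the posted command from host memory, and hand a smart context to the offload complex).

```python
from dataclasses import dataclass, field

TAIL_DOORBELL = 0x1000  # illustrative register offset for tail pointer updates

@dataclass
class SmartContext:
    command: dict
    protocol_data: bytes

@dataclass
class SmartEndpoint:
    host_memory: dict          # simulated host memory: address -> payload
    submission_queue: list     # simulated queue of I/O commands in host memory
    head: int = 0
    offload_inbox: list = field(default_factory=list)

    def receive_transaction(self, address: int, value: int) -> None:
        """Parse an incoming write transaction per the I/O protocol."""
        if address == TAIL_DOORBELL:      # doorbell: tail pointer update
            self._accelerate_until(tail=value)

    def _accelerate_until(self, tail: int) -> None:
        """Read newly posted commands from host memory and build smart contexts."""
        while self.head != tail:
            command = self.submission_queue[self.head]
            payload = self.host_memory.get(command["data_addr"], b"")
            self.offload_inbox.append(SmartContext(command, payload))
            self.head += 1

# The host posts one command and rings the doorbell (tail -> 1).
endpoint = SmartEndpoint(
    host_memory={0x2000: b"block data"},
    submission_queue=[{"opcode": "READ", "data_addr": 0x2000}],
)
endpoint.receive_transaction(TAIL_DOORBELL, 1)
print(endpoint.offload_inbox[0])  # smart context delivered to the offload complex
```

The design point being sketched is that the endpoint, not the offload complex, does the protocol-specific work of recognizing the doorbell and gathering the command and its protocol data, so the offload complex receives a ready-made context.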
Brief Description of the Drawings

The concepts described herein are illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements.

Figure 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources; Figure 2 is a simplified diagram of at least one embodiment of a pod that may be included in the data center of Figure 1; Figure 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of Figure 2; Figure 4 is a side elevation view of the rack of Figure 3; Figure 5 is a perspective view of the rack of Figure 3 with a sled received therein; Figure 6 is a simplified block diagram of at least one embodiment of a top side of the sled of Figure 5; Figure 7 is a simplified block diagram of at least one embodiment of a bottom side of the sled of Figure 6; Figure 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of Figure 1; Figure 9 is a top perspective view of at least one embodiment of the compute sled of Figure 8; Figure 10 is a simplified block diagram of at least one embodiment of an accelerator sled usable in the data center of Figure 1; Figure 11 is a top perspective view of at least one embodiment of the accelerator sled of Figure 10; Figure 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of Figure 1; Figure 13 is a top perspective view of at least one embodiment of the storage sled of Figure 12; Figure 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of Figure 1; and Figure 15 is a simplified block diagram of a system that may be established within the data center of Figure 1 to execute workloads using managed nodes composed of disaggregated resources.

Figure 16 is a simplified block diagram of at least one embodiment of a system for flexible protocol acceleration; Figure 17 is a simplified block diagram of at least one embodiment of a smart endpoint and an offload complex of the computing device of Figure 16; Figure 18 is a simplified block diagram of at least one embodiment of an environment that may be established by the computing device of Figures 16-17; Figure 19 is a simplified flow diagram of at least one embodiment of a method for flexible protocol acceleration that may be executed by the computing device of Figures 16-18; and Figures 20 and 21 are simplified flow diagrams of at least one embodiment of a method for flexible doorbell notification acceleration that may be executed by the smart endpoint of Figures 16-18.

Detailed Description

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the drawings and will be described herein in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments, whether or not explicitly described. Additionally, it should be appreciated that items included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, items listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure for storing or transmitting information in a form readable by a machine (e.g., a volatile or non-volatile memory, a media disc, or another media device).

In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments and, in some embodiments, it may not be included or may be combined with other features.

Referring now to FIG. 1, a data center 100 in which disaggregated resources may cooperatively execute one or more workloads (e.g., applications on behalf of customers) includes multiple pods 110, 120, 130, 140, each of which includes one or more rows of racks. Of course, although the data center 100 is shown with multiple pods, in some embodiments the data center 100 may be embodied as a single pod. As described in more detail herein, each rack houses multiple sleds, each of which may be primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, or general-purpose processors), that is, resources that can be logically coupled to form a composed node, which can act as, for example, a server. In the illustrative embodiment, the sleds in each pod 110, 120, 130, 140 are connected to multiple pod switches (e.g., switches that route data communications to and from sleds within the pod). The pod switches are in turn connected to a spine switch 150 that switches communications among the pods (e.g., the pods 110, 120, 130, 140) of the data center 100. In some embodiments, the sleds may be connected with a fabric using Intel Omni-Path technology.
In other embodiments, the sleds may be connected with other fabrics, such as InfiniBand or Ethernet. As described in more detail herein, resources within the sleds of the data center 100 may be allocated to a group (referred to herein as a "managed node") containing resources from one or more sleds, to be collectively utilized in the execution of a workload. The workload can execute as if the resources belonging to the managed node were located on the same sled. The resources in a managed node may belong to sleds in different racks, and even to sleds in different pods 110, 120, 130, 140. As such, some resources of a single sled may be allocated to one managed node while other resources of the same sled are allocated to a different managed node (e.g., one processor assigned to one managed node and another processor of the same sled assigned to a different managed node).

A data center comprising disaggregated resources, such as the data center 100, can be used in a wide variety of contexts, such as enterprise, government, cloud service provider, and communications service provider (e.g., telco) contexts, and in a wide variety of sizes, from cloud service provider mega data centers that consume over 100,000 square feet to single- or multi-rack installations for use in base stations.

Disaggregating resources into sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds comprising primarily memory resources), and selectively allocating and deallocating the disaggregated resources to form managed nodes assigned to execute workloads, improves the operation and resource usage of the data center 100 relative to typical data centers composed of hyperconverged servers containing compute, memory, storage, and perhaps additional resources in a single chassis. For example, because sleds predominantly contain resources of a particular type, resources of a given type can be upgraded independently of other resources. Additionally, because different resource types (processors, memory, storage, accelerators, etc.) typically have different refresh rates, greater resource utilization and reduced total cost of ownership may be achieved. For example, a data center operator can upgrade the processors throughout its facility by only swapping out the compute sleds. In such a case, accelerator and storage resources may not be contemporaneously upgraded and, rather, may be allowed to continue operating until those resources are scheduled for their own refresh. Resource utilization may also increase. For example, if managed nodes are composed based on requirements of the workloads that will run on them, resources within a node are more likely to be fully utilized. Such utilization may allow more managed nodes to run in a data center with a given set of resources, or allow a data center expected to run a given set of workloads to be built using fewer resources.
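As an illustration of how composing a managed node from disaggregated resources might look in software, the following hypothetical sketch allocates each required resource type from a pool of sleds; the class names and the simple first-fit policy are invented for the example and are not part of the disclosure.

    # Hypothetical sketch: composing a "managed node" from disaggregated
    # resources that live on different sleds, as described above. Names and
    # the first-fit policy are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Sled:
        sled_id: str
        resources: dict = field(default_factory=dict)  # e.g. {"cpu": 2}

    @dataclass
    class ManagedNode:
        allocations: list = field(default_factory=list)  # (sled, type, count)

    def compose_node(sleds: list[Sled], requirements: dict) -> ManagedNode:
        """First-fit allocation of each required resource type across sleds."""
        node = ManagedNode()
        for rtype, needed in requirements.items():
            for sled in sleds:
                if needed == 0:
                    break
                take = min(needed, sled.resources.get(rtype, 0))
                if take:
                    sled.resources[rtype] -= take  # leaves the free pool
                    node.allocations.append((sled.sled_id, rtype, take))
                    needed -= take
            if needed:
                raise RuntimeError(f"insufficient {rtype} in the data center")
        return node

    sleds = [Sled("compute-1", {"cpu": 2}), Sled("accel-1", {"fpga": 4}),
             Sled("storage-1", {"ssd": 16})]
    print(compose_node(sleds, {"cpu": 1, "fpga": 2, "ssd": 4}).allocations)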
Referring now to FIG. 2, the pod 110, in the illustrative embodiment, includes a set of rows 200, 210, 220, 230 of racks 240. Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein. In the illustrative embodiment, the racks in each row 200, 210, 220, 230 are connected to multiple pod switches 250, 260. The pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the spine switch 150 to provide connectivity to other pods in the data center 100. Similarly, the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the spine switch 150. As such, the use of the pair of switches 250, 260 provides an amount of redundancy to the pod 110. For example, if either of the switches 250, 260 fails, the sleds in the pod 110 may still maintain data communication with the remainder of the data center 100 (e.g., sleds of other pods) through the other switch 250, 260. Furthermore, in the illustrative embodiment, the switches 150, 250, 260 may be embodied as dual-mode optical switches, capable of routing both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance link-layer protocol (e.g., Intel's Omni-Path Architecture, InfiniBand, or PCI Express) via optical signaling media of an optical fabric.

It should be appreciated that each of the other pods 120, 130, 140 (as well as any additional pods of the data center 100) may be similarly structured as, and have components similar to, the pod 110 shown in and described with regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while two pod switches 250, 260 are shown, it should be understood that, in other embodiments, each pod 110, 120, 130, 140 may be connected to a different number of pod switches, providing even more failover capacity. Of course, in other embodiments, pods may be arranged differently than the rows-of-racks configuration shown in FIGS. 1-2. For example, a pod may be embodied as multiple sets of racks in which each set of racks is arranged radially, i.e., the racks are equidistant from a center switch.

Referring now to FIGS. 3-5, each illustrative rack 240 of the data center 100 includes two elongated support posts 302, 304, which are arranged vertically. For example, the elongated support posts 302, 304 may extend upwardly from a floor of the data center 100 when deployed. The rack 240 also includes one or more pairs 310 of elongated support arms 312 (identified in FIG. 3 via a dashed ellipse) configured to support a sled of the data center 100 horizontally, as discussed below. One elongated support arm 312 of each pair 310 extends outwardly from the elongated support post 302, and the other elongated support arm 312 extends outwardly from the elongated support post 304.

In the illustrative embodiments, each sled of the data center 100 is embodied as a chassis-less sled. That is, each sled has a chassis-less circuit board substrate on which physical resources (e.g., processors, memory, accelerators, storage, etc.) are mounted, as discussed in more detail below. As such, the rack 240 is configured to receive the chassis-less sleds. For example, each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240, which is configured to receive a corresponding chassis-less sled.
To do so, each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassis-less circuit board substrate of the sled. Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312. For example, in the illustrative embodiment, each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302, 304. For clarity of the figures, not every circuit board guide 330 may be referenced in each figure.

Each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassis-less circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240. To do so, as shown in FIG. 4, a user (or robot) aligns the chassis-less circuit board substrate of an illustrative chassis-less sled 400 to a sled slot 320. The user, or robot, may then slide the chassis-less circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassis-less circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320, as shown in FIG. 4. By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of the others and at its own optimized refresh rate. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 240, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. As such, in some embodiments, the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor. In other embodiments, a human may facilitate one or more maintenance or upgrade operations in the data center 100.

It should be appreciated that each circuit board guide 330 is dual-sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330. In this way, each circuit board guide 330 can support a chassis-less circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3. The illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define corresponding seven sled slots 320, each configured to receive and support a corresponding sled 400 as discussed above. Of course, in other embodiments, the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320). It should be appreciated that because the sled 400 is chassis-less, the sled 400 may have an overall height that is different from typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1U"). That is, the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1U".
Additionally, due to the relative decrease in height of the sled slots 320, the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302, 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. For example, in some embodiments, the vertical distance between each pair 310 of elongated support arms 312 may be greater than a standard rack unit "1U". In such embodiments, the increased vertical distance between the sleds allows for larger heat sinks to be attached to the physical resources and for larger fans to be used (e.g., in the fan array 370 described below) to cool each sled, which in turn can allow the physical resources to operate at increased power levels. Further, it should be appreciated that the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is open to the local environment. Of course, in some cases, an end plate may be attached to one of the elongated support posts 302, 304 in situations in which the rack 240 forms an end-of-row rack in the data center 100.

In some embodiments, various interconnects may be routed upwardly or downwardly through the elongated support posts 302, 304. To facilitate such routing, each elongated support post 302, 304 includes an inner wall that defines an inner chamber in which interconnects may be located. The interconnects routed through the elongated support posts 302, 304 may be embodied as any type of interconnect, including, but not limited to, data or communication interconnects to provide communication connections to each sled slot 320, power interconnects to provide power to each sled slot 320, and/or other types of interconnects.

The rack 240, in the illustrative embodiment, includes a support platform on which a corresponding optical data connector (not shown) is mounted. Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320. In some embodiments, optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind-mate optical connection. For example, a door on each cable may prevent dust from contaminating the fiber inside the cable. In the process of connecting to a blind-mate optical connector mechanism, the door is pushed open when the end of the cable approaches or enters the connector mechanism. Subsequently, the optical fiber inside the cable may enter a gel within the connector mechanism, and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism.

The illustrative rack 240 also includes a fan array 370 coupled to cross-support arms of the rack 240. The fan array 370 includes one or more rows of cooling fans 372, which are aligned in a horizontal line between the elongated support posts 302, 304. In the illustrative embodiment, the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240. As discussed above, each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240. Each rack 240, in the illustrative embodiment, also includes a power supply associated with each sled slot 320.
Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that defines the corresponding sled slot 320. For example, the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302. Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320. In the illustrative embodiment, the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to the corresponding sleds 400 when mounted to the rack 240. Each power supply is configured to satisfy the power requirements of its associated sled, which can vary from sled to sled. Additionally, the power supplies provided in the rack 240 can operate independently of each other. That is, within a single rack, a first power supply providing power to a compute sled can provide power levels that are different from the power levels supplied by a second power supply providing power to an accelerator sled. The power supplies may be controllable at the sled level or the rack level, and may be controlled locally by components on the associated sled or remotely, such as by another sled or an orchestrator.

Referring now to FIG. 6, the sled 400, in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above. In some embodiments, each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc. For example, the sled 400 may be embodied as a compute sled 800 as discussed below in regard to FIGS. 8-9, as an accelerator sled 1000 as discussed below in regard to FIGS. 10-11, as a storage sled 1200 as discussed below in regard to FIGS. 12-13, or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400, as discussed below in regard to FIG. 14.

As discussed above, the illustrative sled 400 includes a chassis-less circuit board substrate 602, which supports various physical resources (e.g., electrical components) mounted thereon. It should be appreciated that the circuit board substrate 602 is "chassis-less" in that the sled 400 does not include a housing or enclosure. Rather, the chassis-less circuit board substrate 602 is open to the local environment. The chassis-less circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon. For example, in an illustrative embodiment, the chassis-less circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassis-less circuit board substrate 602 in other embodiments.

As discussed in more detail below, the chassis-less circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602. As discussed, the chassis-less circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow.
For example, because the chassis-less circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no vertically-arranged backplane (e.g., a backplate of a chassis) attached to the chassis-less circuit board substrate 602, which could inhibit air flow across the electrical components. Additionally, the chassis-less circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassis-less circuit board substrate 602. For example, the illustrative chassis-less circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassis-less circuit board substrate 602. In one particular embodiment, for example, the chassis-less circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches. As such, an airflow path 608 that extends from a front edge 610 of the chassis-less circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400. Furthermore, although not illustrated in FIG. 6, the various physical resources mounted to the chassis-less circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shadow each other, as discussed in more detail below. That is, no two electrical components that produce appreciable heat during operation (i.e., greater than a nominal heat sufficient to adversely impact the cooling of another electrical component) are mounted to the chassis-less circuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from the front edge 610 toward the rear edge 612 of the chassis-less circuit board substrate 602).
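The placement rule just described can be stated computationally: two appreciably heat-producing components conflict only if their extents overlap on the axis perpendicular to the airflow, so that one sits in the other's wake. The short sketch below is a hypothetical illustration of such a check; the coordinates and the heat threshold are invented for the example.

    # Hypothetical check of the placement rule described above: no two
    # appreciably heat-producing components may be in-line with each other
    # along the airflow path 608. Components are modeled as intervals on the
    # axis perpendicular to the airflow; overlapping intervals "shadow" one
    # another downstream.
    from dataclasses import dataclass

    NOMINAL_HEAT_W = 5.0  # illustrative threshold for "appreciable" heat

    @dataclass
    class Component:
        name: str
        left: float    # position across the board, perpendicular to airflow
        right: float
        heat_w: float  # heat produced during operation

    def shadows(a: Component, b: Component) -> bool:
        """True if both components produce appreciable heat and overlap
        laterally, i.e., one sits in the other's airflow wake."""
        both_hot = a.heat_w > NOMINAL_HEAT_W and b.heat_w > NOMINAL_HEAT_W
        overlap = a.left < b.right and b.left < a.right
        return both_hot and overlap

    def placement_ok(components: list[Component]) -> bool:
        return not any(shadows(a, b)
                       for i, a in enumerate(components)
                       for b in components[i + 1:])

    layout = [Component("processor-0", 0.0, 4.0, 250.0),
              Component("processor-1", 5.0, 9.0, 250.0),
              Component("nic", 10.0, 12.0, 20.0)]
    print(placement_ok(layout))  # True: no hot component behind another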
Referring still to FIG. 6, the illustrative sled 400 includes one or more physical resources 620 mounted to a top side 650 of the chassis-less circuit board substrate 602. Although two physical resources 620 are shown in FIG. 6, it should be appreciated that the sled 400 may include one, two, or more physical resources 620 in other embodiments. The physical resources 620 may be embodied as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled 400 depending on, for example, the type or intended functionality of the sled 400. For example, as discussed in more detail below, the physical resources 620 may be embodied as high-performance processors in embodiments in which the sled 400 is embodied as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled 400 is embodied as an accelerator sled, as storage controllers in embodiments in which the sled 400 is embodied as a storage sled, or as a set of memory devices in embodiments in which the sled 400 is embodied as a memory sled.

The sled 400 also includes one or more additional physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602. In the illustrative embodiment, the additional physical resources include a network interface controller (NIC), as discussed in more detail below. Of course, depending on the type and functionality of the sled 400, the physical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments.

The physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622. The I/O subsystem 622 may be embodied as circuitry and/or components to facilitate input/output operations with the physical resources 620, the physical resources 630, and/or other components of the sled 400. For example, the I/O subsystem 622 may be embodied as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, waveguides, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In the illustrative embodiment, the I/O subsystem 622 is embodied as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus.

In some embodiments, the sled 400 may also include a resource-to-resource interconnect 624. The resource-to-resource interconnect 624 may be embodied as any type of communication interconnect capable of facilitating resource-to-resource communications. In the illustrative embodiment, the resource-to-resource interconnect 624 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the resource-to-resource interconnect 624 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or another high-speed point-to-point interconnect dedicated to resource-to-resource communications.

The sled 400 also includes a power connector 640 configured to mate with a corresponding power connector of the rack 240 when the sled 400 is mounted in the corresponding rack 240. The sled 400 receives power from a power supply of the rack 240 via the power connector 640 to supply power to the various electrical components of the sled 400. That is, the sled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled 400. The exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassis-less circuit board substrate 602, which may increase the thermal cooling characteristics of the various electrical components mounted on the chassis-less circuit board substrate 602 as discussed above. In some embodiments, voltage regulators are placed on a bottom side 750 (see FIG. 7) of the chassis-less circuit board substrate 602 directly opposite the processors 820 (see FIG. 8), and power is routed from the voltage regulators to the processors 820 by vias extending through the circuit board substrate 602. Such a configuration provides an increased thermal budget, additional current and/or voltage, and better voltage control relative to typical printed circuit boards in which processor power is delivered from a voltage regulator, in part, by printed circuit traces.

In some embodiments, the sled 400 may also include mounting features 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 400 in a rack 240 by the robot.
The mounting features 642 may be embodied as any type of physical structures that allow the robot to grasp the sled 400 without damaging the chassis-less circuit board substrate 602 or the electrical components mounted thereto. For example, in some embodiments, the mounting features 642 may be embodied as non-conductive pads attached to the chassis-less circuit board substrate 602. In other embodiments, the mounting features may be embodied as brackets, braces, or other similar structures attached to the chassis-less circuit board substrate 602. The particular number, shape, size, and/or makeup of the mounting features 642 may depend on the design of the robot configured to manage the sled 400.

Referring now to FIG. 7, in addition to the physical resources 630 mounted to the top side 650 of the chassis-less circuit board substrate 602, the sled 400 also includes one or more memory devices 720 mounted to a bottom side 750 of the chassis-less circuit board substrate 602. That is, the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board. The physical resources 620 are communicatively coupled to the memory devices 720 via the I/O subsystem 622. For example, the physical resources 620 and the memory devices 720 may be communicatively coupled by one or more vias extending through the chassis-less circuit board substrate 602. Each physical resource 620 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each physical resource 620 may be communicatively coupled to each memory device 720.

The memory devices 720 may be embodied as any type of memory device capable of storing data for the physical resources 620 during operation of the sled 400, such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4. Such standards (and similar standards) may be referred to as DDR-based standards, and communication interfaces of storage devices that implement such standards may be referred to as DDR-based interfaces.

In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include next-generation non-volatile devices, such as Intel 3D XPoint™ memory or other byte-addressable write-in-place non-volatile memory devices.
In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold-level NAND flash memory, NOR flash memory, single- or multi-level phase change memory (PCM), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) incorporating memristor technology, resistive memory including a metal oxide base, an oxygen vacancy base and conductive bridge random access memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (domain wall) and SOT (spin orbit transfer) based device, a thyristor-based semiconductor memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, the memory device may include a transistor-less stackable cross-point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable, and in which bit storage is based on a change in bulk resistance.

Referring now to FIG. 8, in some embodiments, the sled 400 may be embodied as a compute sled 800. The compute sled 800 is optimized, or otherwise configured, to perform compute tasks. Of course, as discussed above, the compute sled 800 may rely on other sleds, such as accelerator sleds and/or storage sleds, to perform such compute tasks. The compute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of the sled 400, which have been identified in FIG. 8 using the same reference numbers. The description of such components provided above in regard to FIGS. 6 and 7 applies to the corresponding components of the compute sled 800 and is not repeated herein for clarity of the description of the compute sled 800.

In the illustrative compute sled 800, the physical resources 620 are embodied as processors 820. Although only two processors 820 are shown in FIG. 8, it should be appreciated that the compute sled 800 may include additional processors 820 in other embodiments. Illustratively, the processors 820 are embodied as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although the processors 820 generate additional heat when operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassis-less circuit board substrate 602 discussed above facilitate the higher-power operation. For example, in the illustrative embodiment, the processors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, the processors 820 may be configured to operate at a power rating of at least 350 W.

In some embodiments, the compute sled 800 may also include a processor-to-processor interconnect 842. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the processor-to-processor interconnect 842 may be embodied as any type of communication interconnect capable of facilitating processor-to-processor communications.
In the illustrative embodiment, the processor-to-processor interconnect 842 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the processor-to-processor interconnect 842 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or another high-speed point-to-point interconnect dedicated to processor-to-processor communications.

The compute sled 800 also includes a communication circuit 830. The illustrative communication circuit 830 includes a network interface controller (NIC) 832, which may also be referred to as a host fabric interface (HFI). The NIC 832 may be embodied as, or otherwise include, any type of integrated circuit, discrete circuit, controller chip, chipset, interposer, daughtercard, network interface card, or other device that may be used by the compute sled 800 to connect with another compute device (e.g., with another sled 400). In some embodiments, the NIC 832 may be embodied as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 832. In such embodiments, the local processor of the NIC 832 may be capable of performing one or more of the functions of the processors 820. Additionally or alternatively, in such embodiments, the local memory of the NIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels.

The communication circuit 830 is communicatively coupled to an optical data connector 834. The optical data connector 834 is configured to mate with a corresponding optical data connector of the rack 240 when the compute sled 800 is mounted in the rack 240. Illustratively, the optical data connector 834 includes a plurality of optical fibers which lead from a mating surface of the optical data connector 834 to an optical transceiver 836. The optical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector. Although shown as forming part of the optical data connector 834 in the illustrative embodiment, the optical transceiver 836 may form a portion of the communication circuit 830 in other embodiments.

In some embodiments, the compute sled 800 may also include an expansion connector 840. In such embodiments, the expansion connector 840 is configured to mate with a corresponding connector of an expansion chassis-less circuit board substrate to provide additional physical resources to the compute sled 800. The additional physical resources may be used, for example, by the processors 820 during operation of the compute sled 800. The expansion chassis-less circuit board substrate may be substantially similar to the chassis-less circuit board substrate 602 discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassis-less circuit board substrate may depend on the intended functionality of the expansion chassis-less circuit board substrate. For example, the expansion chassis-less circuit board substrate may provide additional compute resources, memory resources, and/or storage resources.
As such, the additional physical resources of the expansion chassis-less circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits, such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.

Referring now to FIG. 9, an illustrative embodiment of the compute sled 800 is shown. As shown, the processors 820, the communication circuit 830, and the optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled 800 to the chassis-less circuit board substrate 602. For example, the various physical resources may be mounted in corresponding sockets (e.g., processor sockets), holders, or brackets. In some cases, some of the electrical components may be directly mounted to the chassis-less circuit board substrate 602 via soldering or similar techniques.

As discussed above, the individual processors 820 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other. In the illustrative embodiment, the processors 820 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those physical resources are linearly in-line with each other along the direction of the airflow path 608. It should be appreciated that, although the optical data connector 834 is in-line with the communication circuit 830, the optical data connector 834 produces no or only nominal heat during operation.

The memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602, as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622. Because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each processor 820 may be communicatively coupled to each memory device 720. In some embodiments, the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassis-less circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array.

Each of the processors 820 includes a heat sink 850 secured thereto.
Because the memory devices 720 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602 (and due to the vertical spacing of the sleds 400 in the corresponding rack 240), the top side 650 of the chassis-less circuit board substrate 602 includes additional "free" area or space, which facilitates the use of heat sinks 850 having a larger size relative to traditional heat sinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602, none of the processor heat sinks 850 include cooling fans attached thereto. That is, each of the heat sinks 850 is embodied as a fan-less heat sink. In some embodiments, the heat sinks 850 mounted atop the processors 820 may overlap with the heat sink attached to the communication circuit 830 in the direction of the airflow path 608 due to their increased size, as illustratively suggested by FIG. 9.

Referring now to FIG. 10, in some embodiments, the sled 400 may be embodied as an accelerator sled 1000. The accelerator sled 1000 is configured to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computationally-intensive tasks. In some embodiments, for example, the compute sled 800 may offload tasks to the accelerator sled 1000 during operation. The accelerator sled 1000 includes various components similar to the components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 10 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the accelerator sled 1000 and is not repeated herein for clarity of the description of the accelerator sled 1000.

In the illustrative accelerator sled 1000, the physical resources 620 are embodied as accelerator circuits 1020. Although only two accelerator circuits 1020 are shown in FIG. 10, it should be appreciated that the accelerator sled 1000 may include additional accelerator circuits 1020 in other embodiments. For example, as shown in FIG. 11, the accelerator sled 1000 may include four accelerator circuits 1020 in some embodiments. The accelerator circuits 1020 may be embodied as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations. For example, the accelerator circuits 1020 may be embodied as, for example, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), neuromorphic processor units, quantum computers, machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.

In some embodiments, the accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the accelerator-to-accelerator interconnect 1042 may be embodied as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622).
For example, the accelerator-to-accelerator interconnect 1042 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or another high-speed point-to-point interconnect dedicated to processor-to-processor communications. In some embodiments, the accelerator circuits 1020 may be daisy-chained, with a primary accelerator circuit 1020 connected to the NIC 832 and the memory devices 720 through the I/O subsystem 622 and a secondary accelerator circuit 1020 connected to the NIC 832 and the memory devices 720 through the primary accelerator circuit 1020.
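To make the daisy-chain topology concrete, the following hypothetical sketch resolves the hop-by-hop access path from each accelerator to the NIC and memory; the class name and path-resolution logic are illustrative inventions, not part of the disclosure.

    # Hypothetical model of the daisy-chained accelerator topology described
    # above: the primary accelerator reaches the NIC 832 and memory devices
    # 720 directly over the I/O subsystem 622, while the secondary
    # accelerator's traffic is relayed through the primary accelerator.
    class Accelerator:
        def __init__(self, name: str, upstream: "Accelerator | None" = None):
            self.name = name
            self.upstream = upstream  # None: directly on the I/O subsystem

        def path_to_io_subsystem(self) -> list[str]:
            """Return the hop-by-hop path this accelerator's traffic takes."""
            hops = [self.name]
            node = self.upstream
            while node is not None:          # walk the chain upstream
                hops.append(node.name)
                node = node.upstream
            hops.append("I/O subsystem 622") # then NIC 832 / memory 720
            return hops

    primary = Accelerator("accelerator-1020-primary")
    secondary = Accelerator("accelerator-1020-secondary", upstream=primary)
    print(primary.path_to_io_subsystem())
    print(secondary.path_to_io_subsystem())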
Referring now to FIG. 11, an illustrative embodiment of the accelerator sled 1000 is shown. As discussed above, the accelerator circuits 1020, the communication circuit 830, and the optical data connector 834 are mounted to the top side 650 of the chassis-less circuit board substrate 602. Again, the individual accelerator circuits 1020 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other, as discussed above. The memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602, as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the accelerator circuits 1020 located on the top side 650 via the I/O subsystem 622 (e.g., through vias). Further, each of the accelerator circuits 1020 may include a heat sink 1070 that is larger than a traditional heat sink used in a server. As discussed above with reference to the heat sinks 850, the heat sinks 1070 may be larger than traditional heat sinks because of the "free" area provided by the memory resources 720 being located on the bottom side 750 of the chassis-less circuit board substrate 602 rather than on the top side 650.

Referring now to FIG. 12, in some embodiments, the sled 400 may be embodied as a storage sled 1200. The storage sled 1200 is configured to store data in a data storage 1250 local to the storage sled 1200. For example, during operation, a compute sled 800 or an accelerator sled 1000 may store and retrieve data from the data storage 1250 of the storage sled 1200. The storage sled 1200 includes various components similar to the components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 12 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the storage sled 1200 and is not repeated herein for clarity of the description of the storage sled 1200.

In the illustrative storage sled 1200, the physical resources 620 are embodied as storage controllers 1220. Although only two storage controllers 1220 are shown in FIG. 12, it should be appreciated that the storage sled 1200 may include additional storage controllers 1220 in other embodiments. The storage controllers 1220 may be embodied as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data in the data storage 1250 based on requests received via the communication circuit 830. In the illustrative embodiment, the storage controllers 1220 are embodied as relatively low-power processors or controllers. For example, in some embodiments, the storage controllers 1220 may be configured to operate at a rated power of approximately 75 watts.

In some embodiments, the storage sled 1200 may also include a controller-to-controller interconnect 1242. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1242 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1242 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1242 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or another high-speed point-to-point interconnect dedicated to processor-to-processor communications.

Referring now to FIG. 13, an illustrative embodiment of the storage sled 1200 is shown. In the illustrative embodiment, the data storage 1250 is embodied as, or otherwise includes, a storage cage 1252 configured to house one or more solid-state drives (SSDs) 1254. To do so, the storage cage 1252 includes a number of mounting slots 1256, each of which is configured to receive a corresponding solid-state drive 1254. Each of the mounting slots 1256 includes a number of drive guides 1258 that cooperate to define an access opening 1260 of the corresponding mounting slot 1256. The storage cage 1252 is secured to the chassis-less circuit board substrate 602 such that the access openings face away from the chassis-less circuit board substrate 602 (i.e., toward the front of the chassis-less circuit board substrate 602). As such, the solid-state drives 1254 are accessible while the storage sled 1200 is mounted in a corresponding rack 240. For example, a solid-state drive 1254 may be swapped out of a rack 240 (e.g., via a robot) while the storage sled 1200 remains mounted in the corresponding rack 240.

The storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid-state drives 1254. Of course, the storage cage 1252 may be configured to store additional or fewer solid-state drives 1254 in other embodiments. Additionally, in the illustrative embodiment, the solid-state drives are mounted vertically in the storage cage 1252 but may be mounted in the storage cage 1252 in a different orientation in other embodiments. Each solid-state drive 1254 may be embodied as any type of data storage device capable of storing long-term data. To do so, the solid-state drives 1254 may include the volatile and non-volatile memory devices discussed above.

As shown in FIG. 13, the storage controllers 1220, the communication circuit 830, and the optical data connector 834 are illustratively mounted to the top side 650 of the chassis-less circuit board substrate 602.
Again, as discussed above, any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled 1200 to the chassis-less circuit board substrate 602, including, for example, sockets (e.g., processor sockets), holders, brackets, soldered connections, and/or other mounting or securing techniques.

As discussed above, the individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassis-less circuit board substrate 602 such that no two heat-producing electrical components shadow each other. For example, the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassis-less circuit board substrate 602 such that no two of those electrical components are linearly in-line with each other along the direction of the airflow path 608.

The memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassis-less circuit board substrate 602, as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622. Again, because the chassis-less circuit board substrate 602 is embodied as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassis-less circuit board substrate 602. Each of the storage controllers 1220 includes a heat sink 1270 secured thereto. As discussed above, due to the improved thermal cooling characteristics of the chassis-less circuit board substrate 602 of the storage sled 1200, none of the heat sinks 1270 include cooling fans attached thereto. That is, each of the heat sinks 1270 is embodied as a fan-less heat sink.

Referring now to FIG. 14, in some embodiments, the sled 400 may be embodied as a memory sled 1400. The memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800, accelerator sleds 1000, etc.) with access to a pool of memory (e.g., in two or more sets 1430, 1432 of memory devices 720) local to the memory sled 1400. For example, during operation, a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430, 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430, 1432. The memory sled 1400 includes various components similar to the components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400.

In the illustrative memory sled 1400, the physical resources 620 are embodied as memory controllers 1420. Although only two memory controllers 1420 are shown in FIG. 14, it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments. The memory controllers 1420 may be embodied as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430, 1432 based on requests received via the communication circuit 830.
In the illustrative embodiment, each memory controller 1420 is connected to a corresponding memory set 1430, 1432 to write to and read from memory devices 720 within the corresponding memory set 1430, 1432 and to enforce any permissions (e.g., read, write, etc.) associated with the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write).

In some embodiments, the memory sled 1400 may also include a controller-to-controller interconnect 1442. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1442 may be embodied as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1442 is embodied as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1442 may be embodied as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or another high-speed point-to-point interconnect dedicated to processor-to-processor communications. As such, in some embodiments, a memory controller 1420 may access, through the controller-to-controller interconnect 1442, memory within the memory set 1432 associated with another memory controller 1420. In some embodiments, a scalable memory controller is made of multiple smaller memory controllers, referred to herein as "chiplets," on a memory sled (e.g., the memory sled 1400). The chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)). The combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels). In some embodiments, the memory controllers 1420 may implement memory interleaving (e.g., one memory address is mapped to the memory set 1430, the next memory address is mapped to the memory set 1432, the third address is mapped to the memory set 1430, etc.). The interleaving may be managed within the memory controllers 1420, or from CPU sockets (e.g., of the compute sled 800) across network links to the memory sets 1430, 1432, and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.

Further, in some embodiments, the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240) through a waveguide, using the waveguide connector 1480. In the illustrative embodiment, the waveguides are 64-millimeter waveguides that provide 16 Rx (i.e., receive) channels and 16 Tx (i.e., transmit) channels. Each channel, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different. Using a waveguide may provide high-throughput access to the memory pool (e.g., the memory sets 1430, 1432) to another sled (e.g., a sled 400 in the same rack 240 as the memory sled 1400 or in an adjacent rack 240) without adding to the load on the optical data connector 834.
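The interleaving scheme mentioned above (alternating consecutive addresses between the two memory sets) can be expressed as simple address arithmetic. The sketch below is a hypothetical illustration; the 64-byte granule and function names are assumptions made for the example, not taken from the disclosure.

    # Hypothetical sketch of the memory interleaving described above:
    # consecutive addresses alternate between memory set 1430 and memory set
    # 1432, so adjacent accesses are serviced by different memory devices.
    GRANULE = 64  # bytes per interleave unit (illustrative choice)
    MEMORY_SETS = (1430, 1432)

    def interleave(address: int) -> tuple[int, int]:
        """Map a logical address to (memory set, offset within that set)."""
        granule_index = address // GRANULE
        memory_set = MEMORY_SETS[granule_index % len(MEMORY_SETS)]
        # The offset folds out the granules that went to the other set.
        offset = (granule_index // len(MEMORY_SETS)) * GRANULE \
                 + address % GRANULE
        return memory_set, offset

    for addr in (0, 64, 128, 192):
        print(addr, "->", interleave(addr))
    # 0 -> (1430, 0), 64 -> (1432, 0), 128 -> (1430, 64), 192 -> (1432, 64)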
Referring now to FIG. 15, a system for executing one or more workloads (e.g., applications) may be implemented in accordance with the data center 100. In the illustrative embodiment, the system 1510 includes a coordinator server 1520, which may be implemented as a managed node comprising a computing device (e.g., a processor 820 on a computing bay 800) executing management software (e.g., a cloud operating environment, such as OpenStack). The computing device is communicatively coupled to a plurality of bays 400, including a number of computing bays 1530 (e.g., each similar to the computing bay 800), memory bays 1540 (e.g., each similar to the memory bay 1400), accelerator bays 1550 (e.g., each similar to the accelerator bay 1000), and storage bays 1560 (e.g., each similar to the storage bay 1200). One or more of the bays 1530, 1540, 1550, 1560 may be grouped into a managed node 1570 by the coordinator server 1520, for example to collectively execute a workload (e.g., an application 1532 executed in a virtual machine or in a container). The managed node 1570 may be implemented as a collection of physical resources 620 (e.g., processors 820, memory resources 720, accelerator circuits 1020, or data storage devices 1250) from the same or different bays 400. Furthermore, the managed node may be created, defined, or "spun up" by the coordinator server 1520 at the time a workload is to be assigned to the managed node or at any other time, and may exist regardless of whether any workload is currently assigned to the managed node. In the illustrative embodiment, the coordinator server 1520 may selectively allocate and/or deallocate physical resources 620 from the bays 400, and/or add or remove one or more bays 400 from the managed node 1570, according to quality of service (QoS) targets (e.g., targets related to throughput, latency, instructions per second, etc.) associated with a service level agreement for the workload (e.g., the application 1532). In doing so, the coordinator server 1520 may receive telemetry data indicating performance conditions (e.g., throughput, latency, instructions per second, etc.) in each bay 400 of the managed node 1570 and compare the telemetry data to the quality of service targets to determine whether the quality of service targets are satisfied. The coordinator server 1520 may also determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS targets, thereby releasing those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS targets are not currently satisfied, the coordinator server 1520 may determine to dynamically allocate additional physical resources to assist the execution of the workload (e.g., the application 1532) while it is executing. Similarly, if the coordinator server 1520 determines that deallocating physical resources would still result in the QoS targets being met, the coordinator server 1520 may determine to dynamically deallocate the physical resources from the managed node.
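The telemetry-versus-target comparison described above might look like the following sketch. The metric names, numeric targets, and the "higher/lower is better" conventions are illustrative assumptions, not details specified by this disclosure.

    # Hypothetical QoS check performed by the coordinator server 1520
    # (targets and metric names are assumptions for illustration).
    QOS_TARGETS = {"throughput_gbps": 10.0, "latency_ms": 5.0}

    def qos_met(telemetry: dict) -> bool:
        """Compare telemetry reported by a bay 400 against the QoS targets."""
        return (telemetry.get("throughput_gbps", 0.0) >= QOS_TARGETS["throughput_gbps"]
                and telemetry.get("latency_ms", float("inf")) <= QOS_TARGETS["latency_ms"])

    def rebalance(node_telemetry: list) -> str:
        """Decide whether to grow or shrink the managed node 1570."""
        if all(qos_met(t) for t in node_telemetry):
            return "consider deallocating resources"    # targets still met
        return "allocate additional physical resources"  # targets missed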
In addition, in some embodiments, the coordinator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532), for example by identifying phases of execution of the workload (e.g., periods in which different operations, each having different resource utilization characteristics, are performed) and preemptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predetermined time period of the start of the associated phase). In some embodiments, the coordinator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among the computing bays and other resources (e.g., accelerator bays, memory bays, storage bays) in the data center 100. For example, the coordinator server 1520 may use a model that accounts for the performance of the resources on the bays 400 (e.g., FPGA performance, memory access latency, etc.) and the performance of the path through the network to the resource (e.g., congestion, latency, bandwidth). As such, the coordinator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself, in addition to the latency associated with the path through the network between the computing bay executing the workload and the bay 400 on which the resource is located).

In some embodiments, the coordinator server 1520 may use telemetry data (e.g., temperatures, fan speeds, etc.) reported from the bays 400 to generate a map of heat generation in the data center 100 and allocate resources to managed nodes according to the heat generation map and the predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100. Additionally or alternatively, in some embodiments, the coordinator server 1520 may organize the received telemetry data into a hierarchical model indicating relationships between the managed nodes (e.g., spatial relationships, such as the physical locations of the resources of the managed nodes within the data center 100, and/or functional relationships, such as groupings of the managed nodes by the customers they provide services for, the types of functions the managed nodes typically perform, and managed nodes that typically share or exchange workloads with one another, etc.). Based on differences in the physical locations and resources of the managed nodes, a given workload may exhibit different resource utilization (e.g., cause different internal temperatures, or use different percentages of processor or memory capacity) across the resources of different managed nodes. The coordinator server 1520 may determine these differences based on the telemetry data stored in the hierarchical model and factor them into a prediction of the workload's future resource utilization if the workload is reassigned from one managed node to another, so that resource utilization within the data center 100 can be accurately balanced.
To reduce the computational load on the coordinator server 1520 and the data transmission load on the network, in some embodiments, the coordinator server 1520 may send self-test information to the bays 400 so that each bay 400 can determine locally (e.g., on the bay 400) whether the telemetry data it generates satisfies one or more conditions (e.g., an available capacity that satisfies a predetermined threshold, a temperature that satisfies a predetermined threshold, etc.). Each bay 400 can then report back a simplified result (e.g., yes or no) to the coordinator server 1520, which can use the result in determining the allocation of resources to managed nodes.

Referring now to FIG. 16, an illustrative system 1600 for flexible protocol acceleration includes a computing device 1602 and a plurality of remote devices 1604 communicating over a network 1606. Each of the devices 1602, 1604 may be implemented as one or more bays 400 in a data center (e.g., a computing bay 800 and multiple storage bays 1200, or another configuration). In use, as described further below, the computing device 1602 issues I/O commands, such as NVM Express (NVMe) commands, Adaptive Virtual Function (AVF) commands, or other I/O commands, to the smart endpoint 1632. The computing device 1602 may utilize standard drivers and/or operating systems to issue the I/O commands. The smart endpoint 1632 includes programmable elements that parse and accelerate the processing of the I/O commands. The smart endpoint 1632 provides a smart context to the load transfer complex 1634, and the load transfer complex 1634 can complete the processing of the I/O commands. The protocol parsing and acceleration operations of the smart endpoint 1632 are programmable or otherwise configurable by the computing device 1602. Thus, the system 1600 can support flexible partitioning of I/O protocol parsing tasks between the load transfer complex 1634 and the smart endpoint 1632. By transferring tasks from the load transfer complex 1634 to the smart endpoint 1632, the system 1600 may also improve the available processing cycles of the load transfer complex 1634 and/or may allow the use of a less expensive load transfer complex 1634.

The computing device 1602 may be implemented as any type of device capable of performing the functions described herein. For example, the computing device 1602 may be implemented as, but is not limited to, a bay, a computing bay, an accelerator bay, a storage bay, a computer, a server, a distributed computing device, a disaggregated computing device, a laptop computer, a tablet computer, a notebook computer, a mobile computing device, a smart phone, a wearable computing device, a multi-processor system, a server, a workstation, and/or a consumer electronic device. As shown in FIG. 16, the illustrative computing device 1602 includes a processor 1620, an I/O subsystem 1622, a memory 1626, a data storage device 1628, and a communication subsystem 1630. Furthermore, in some embodiments, one or more of the illustrative components may be included in, or otherwise form a part of, another component.
For example, in some embodiments, the memory 1626 or portions thereof may be included in the processor 1620.

The processor 1620 may be implemented as any type of processor capable of performing the functions described herein. For example, the processor 1620 may be implemented as a single-core or multi-core processor(s), a digital signal processor, a microcontroller, or another processor or processing/control circuit. Similarly, the memory 1626 may be implemented as any type of volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, the memory 1626 may store various data and software used during the operation of the computing device 1602, such as operating systems, applications, programs, libraries, and drivers.

Illustratively, the memory 1626 is communicatively coupled to the processor 1620 via the I/O subsystem 1622, which may be implemented as circuitry and/or components to facilitate input/output operations with the processor 1620, the memory 1626, and other components of the computing device 1602. For example, the I/O subsystem 1622 may be implemented as, or otherwise include, a memory controller hub, an input/output control hub, a sensor hub, a host controller, a firmware device, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems used to facilitate input/output operations. As shown, the I/O subsystem 1622 illustratively includes a PCI Express (PCIe) root complex (RC) 1624. The RC 1624 may include one or more root ports, PCIe links, PCIe switches, and/or other components that can be used to transfer I/O data between a host system of the computing device 1602 (e.g., the processor 1620 and/or the memory 1626) and one or more I/O devices. In some embodiments, the memory 1626 may be directly coupled to the processor 1620, for example via an integrated memory controller hub or data port. Furthermore, in some embodiments, the I/O subsystem 1622 may form part of a system-on-chip (SoC) and be included on a single integrated circuit chip along with the processor 1620, the memory 1626, and other components of the computing device 1602.

The data storage device 1628 may be implemented as any type of device or devices configured for short-term or long-term storage of data, such as memory devices and circuits, memory cards, hard disk drives, solid-state drives, non-volatile flash memory, or other data storage devices. The computing device 1602 may also include a communication subsystem 1630, which may be implemented as any network interface controller (NIC), communication circuit, device, or collection thereof capable of enabling communications between the computing device 1602 and other remote devices over a computer network (not shown). The communication subsystem 1630 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, WiMAX, 3G, 4G LTE, etc.) to effect such communication.

As shown in the figure, the computing device 1602 also includes a smart endpoint 1632 and a load transfer complex 1634. As described further below, the smart endpoint 1632 is coupled to the RC 1624 and to the load transfer complex 1634, for example via one or more PCIe lanes. The smart endpoint 1632 receives I/O transactions from the RC 1624 and can process the I/O transactions and/or provide the I/O transactions to the load transfer complex 1634.
The load transfer complex 1634 performs further processing of the I/O transactions, for example by performing bare-metal virtualization (e.g., virtualizing multiple storage devices, network devices, or other devices). The load transfer complex 1634 is also coupled to the communication subsystem 1630 and thus can communicate with one or more remote devices 1604. One possible embodiment of the smart endpoint 1632 and the load transfer complex 1634 is described below in connection with FIG. 17.

Similarly, each remote device 1604 may be implemented as any type of device capable of performing the functions described herein. For example, each remote device 1604 may be implemented as, but is not limited to, a bay, a computing bay, an accelerator bay, a storage bay, a computer, a server, a distributed computing device, a disaggregated computing device, a laptop computer, a tablet computer, a notebook computer, a mobile computing device, a smart phone, a wearable computing device, a multi-processor system, a server, a workstation, and/or a consumer electronic device. As such, each remote device 1604 may include components and features similar to those of the computing device 1602, such as a processor, an I/O subsystem, a memory, a data storage device, a communication subsystem, or other components of a storage bay. As shown in the figure, each remote device 1604 may include a remote storage device 1640, which may be accessed, for example, by the load transfer complex 1634 to perform bare-metal virtualization.

As discussed in more detail below, the computing device 1602 and the remote devices 1604 may be configured to send and receive data over the network 1606 with each other and/or with other devices of the system 1600. The network 1606 may be implemented as any number of various wired and/or wireless networks. For example, the network 1606 may be implemented as, or otherwise include, a wired or wireless local area network (LAN) and/or a wired or wireless wide area network (WAN). As such, the network 1606 may include any number of additional devices, such as additional computers, routers, and switches, to facilitate communications between the devices of the system 1600.

Referring now to FIG. 17, a diagram 1700 illustrates one possible embodiment of the smart endpoint 1632 and the load transfer complex 1634. As shown in the figure, the smart endpoint 1632 includes an endpoint interface 1702, one or more soft cores 1704, a DMA engine 1706, a memory 1708, and an endpoint interface 1710. The endpoint interface 1702 is coupled to the root complex (RC) 1624 and may be implemented as any communication circuit or other component for communicating with the RC 1624 over a PCIe link. For example, the endpoint interface 1702 may be implemented as, or otherwise include, a PCIe physical layer, a PCIe data link layer, and a PCIe transaction layer.

Each soft core 1704 may be implemented as a programmable element, such as a state machine, a microcontroller, a microprocessor, or another computing resource. As described further below, the soft cores 1704 can be configured to emulate a PCIe endpoint hierarchy, process PCIe transactions, and perform other tasks as described further below. The DMA engine 1706 may be implemented as a DMA controller or other component that can perform DMA transactions (e.g., reads and/or writes) to transfer data between the memory 1626 and the smart endpoint 1632 and/or between the memory 1626 and the load transfer complex 1634.
The memory 1708 may be implemented as any volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, the memory 1708 can store various data and software used during the operation of the smart endpoint 1632, such as the firmware and the data processed by the soft cores 1704.

The endpoint interface 1710 is coupled to the load transfer complex 1634 and may be implemented as any communication circuit or other component used to communicate with the load transfer complex 1634. As shown, the endpoint interface 1710 is coupled to the root complex 1712 of the load transfer complex 1634. Accordingly, the endpoint interface 1710 can communicate with the load transfer complex 1634 over PCIe and thus may be implemented as, or otherwise include, a PCIe physical layer, a PCIe data link layer, and a PCIe transaction layer.

As shown in the figure, the load transfer complex 1634 includes a root complex 1712, a plurality of processor cores 1714, a memory 1716, and a root complex 1718. As described above, the root complex 1712 is coupled to the endpoint interface 1710 of the smart endpoint 1632. Similar to the RC 1624, the RC 1712 may include one or more root ports, PCIe links, PCIe switches, and/or other components that can be used to transfer I/O data between the load transfer complex 1634 and the smart endpoint 1632.

Each processor core 1714 may be implemented as any type of processor core capable of performing the functions described herein, such as a single-core or multi-core processor(s), a digital signal processor, a microcontroller, or another processor or processing/control circuit. The processor cores 1714 may execute instructions from the same instruction set architecture (ISA) as the processor 1620 or from a different ISA. For example, in some embodiments, the processor core 1714 may be implemented as a core. In other embodiments, the processor core 1714 may be implemented as an ARM core. The memory 1716 may be implemented as any type of volatile or non-volatile memory or data storage device capable of performing the functions described herein. In operation, the memory 1716 may store various data and software used during the operation of the load transfer complex 1634, such as operating systems, applications, programs, libraries, and drivers.

The root complex 1718 is coupled to the communication subsystem 1630 (e.g., to a network interface controller) and may be implemented as any communication circuit or other component used to communicate with the communication subsystem 1630. For example, the root complex 1718 may include one or more root ports, PCIe links, PCIe switches, and/or other components that can be used to transfer I/O data between the load transfer complex 1634 and the communication subsystem 1630.

Although shown as separate components in FIG. 17, it should be understood that in some embodiments the smart endpoint 1632 and the load transfer complex 1634 may be included in the same component and/or combined with other components. For example, in some embodiments, the smart endpoint 1632 and the load transfer complex 1634 may be implemented as separate dies included in the same computer chip. In such embodiments, the chip including the smart endpoint 1632 and the load transfer complex 1634 may be included in a multi-chip package along with a NIC (e.g., the communication subsystem 1630), an FPGA, or other components.

Referring now to FIG. 18, in an illustrative embodiment, the computing device 1602 establishes an environment 1800 during operation.
The illustrative environment 1800 includes an application 1802, a driver 1804, a transaction layer 1806, a protocol parser 1808, a protocol accelerator 1810, and a firmware manager 1812. The various components of the environment 1800 may be implemented as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more components of the environment 1800 may be implemented as a circuit or collection of electronic devices (e.g., an application circuit 1802, a driver circuit 1804, a transaction layer circuit 1806, a protocol parser circuit 1808, a protocol accelerator circuit 1810, and/or a firmware manager circuit 1812). It should be understood that, in such embodiments, one or more of the application circuit 1802, the driver circuit 1804, the transaction layer circuit 1806, the protocol parser circuit 1808, the protocol accelerator circuit 1810, and/or the firmware manager circuit 1812 may form a portion of the processor 1620, the I/O subsystem 1622, the smart endpoint 1632, and/or other components of the computing device 1602. Furthermore, in some embodiments, one or more of the illustrative components may form a part of another component, and/or one or more of the illustrative components may be independent of each other.

The application 1802 may be implemented as any user application, system application, or other application executed by the computing device 1602. Similarly, the driver 1804 may be implemented as any device driver, operating system, virtual machine monitor, and/or hypervisor that controls or otherwise communicates I/O data with the smart endpoint 1632. The driver 1804 can communicate with the smart endpoint 1632 using one or more standardized device protocols, such as NVM Express (NVMe), VirtIO, AVF, or other protocols. For example, the driver 1804 may be implemented as a storage driver, a network driver, or another device driver. The application 1802 can access the services provided by the smart endpoint 1632 and/or the load transfer complex 1634 via the driver 1804.

The firmware manager 1812 is configured to program the endpoint firmware of the smart endpoint 1632. For example, the endpoint firmware may be stored in the memory 1708 of the smart endpoint 1632 and executed by one or more of the soft cores 1704 of the smart endpoint 1632. The firmware manager 1812 may be implemented as any device driver, debugger, flash programmer, integrated development environment, deployment manager, dashboard, or other component capable of programming the endpoint firmware into the smart endpoint 1632.

The transaction layer 1806 is configured to receive I/O transactions originating from the root complex 1624. The I/O commands may be implemented as, for example, NVMe commands, VirtIO commands, or AVF commands.

The protocol parser 1808 is configured to parse the I/O transaction based on an I/O protocol in response to receiving the I/O transaction. The parsing may be performed according to one or more instructions or other data of the endpoint firmware. Parsing the I/O transaction may include determining whether the I/O transaction is a doorbell notification, for example by determining whether the I/O transaction is a tail pointer update. The protocol parser 1808 is further configured to identify an I/O command in response to parsing the I/O transaction.
Identifying the I/O command may include identifying the I/O command in the host memory 1626 in response to determining that the I/O transaction is a doorbell notification.

The protocol accelerator 1810 is configured to accelerate the I/O command. Accelerating the I/O command may include copying the I/O command to the memory 1708 of the smart endpoint 1632 or to the memory 1716 of the load transfer complex 1634. The I/O command may be read from the host memory 1626, for example, using the DMA engine 1706 of the smart endpoint 1632. In some embodiments, accelerating the I/O command may include identifying protocol data associated with the I/O command and copying the protocol data to the memory 1708 or the memory 1716. The protocol data can be read from the host memory 1626, for example, by parsing a scatter-gather list of the I/O command. The protocol accelerator 1810 is further configured to provide a smart context to the load transfer complex 1634 in response to accelerating the I/O command. The smart context may include the I/O command, a pointer to the location of the I/O command, the protocol data, and/or a pointer to the location of the protocol data.

Referring now to FIG. 19, in use, the computing device 1602 may perform a method 1900 for flexible protocol acceleration. It should be appreciated that, in some embodiments, the operations of the method 1900 may be performed by one or more components of the environment 1800 of the computing device 1602 as shown in FIG. 18. The method 1900 begins in block 1902, in which the host (e.g., the application 1802 and/or the driver 1804 executed by the processor 1620 of the computing device 1602) dispatches an I/O command to the smart endpoint 1632. The I/O command may be implemented as any descriptor, instruction, I/O transaction, or other command issued to the smart endpoint 1632. For example, the I/O command may be an NVMe command, a VirtIO command, or an AVF command. The host can write the command to a command queue or other data structure and can dispatch the command to the smart endpoint 1632 using one or more I/O transactions (e.g., one or more PCIe transaction layer packets (TLPs) or other transactions). A push model, a pull model, or a combination of those models can be used to dispatch I/O commands. The specific model used may depend on the particular I/O protocol and/or application in use.

In some embodiments, in block 1904, the computing device 1602 may write the command directly to an endpoint queue in the smart endpoint 1632 (e.g., a push model). For example, the endpoint queue may be implemented as a ring buffer or another range of the memory 1708 of the smart endpoint 1632. The host can write the command using one or more I/O transactions, DMA transfers, or other transactions. Pushing I/O commands can reduce latency compared with the pull model; however, the queues on the smart endpoint 1632 may be relatively small compared to the pull model.

In some embodiments, in block 1906, the computing device 1602 may write the command to a command queue in the memory 1626 and then send a doorbell notification to the smart endpoint 1632. For example, after writing the command to the memory 1626, the computing device 1602 may update a tail pointer register of the smart endpoint 1632, such as by writing to the register using one or more I/O transactions. As described further below, the smart endpoint 1632 can recognize the doorbell notification and read the I/O command from the memory 1626 (e.g., a pull model).
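The pull-model dispatch just described can be sketched in a few lines of code. This is a minimal software model under assumed names (HostQueue, SmartEndpoint, etc.); a real implementation would issue PCIe register writes and DMA reads rather than Python method calls, and none of these identifiers come from this disclosure.

    # Minimal sketch of pull-model command dispatch (hypothetical names).
    class HostQueue:
        """Command ring kept in host memory 1626."""
        def __init__(self, depth: int):
            self.slots = [None] * depth
            self.tail = 0  # next free slot, advanced by the host

        def submit(self, command: bytes, endpoint: "SmartEndpoint") -> None:
            self.slots[self.tail] = command           # 1. write command to host memory
            self.tail = (self.tail + 1) % len(self.slots)
            endpoint.write_tail_pointer(self.tail)    # 2. doorbell: tail-pointer update

    class SmartEndpoint:
        """Endpoint side: a tail-pointer write acts as the doorbell."""
        def __init__(self, queue: HostQueue):
            self.queue = queue
            self.head = 0  # next slot the endpoint will consume

        def write_tail_pointer(self, tail: int) -> None:
            # 3. on doorbell, pull each new command from host memory
            #    (a DMA read in hardware, a list access in this sketch)
            while self.head != tail:
                command = self.queue.slots[self.head]
                self.head = (self.head + 1) % len(self.queue.slots)
                self.process(command)

        def process(self, command: bytes) -> None:
            print("accelerating", command)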
By maintaining the queue in the memory 1626, the queue for the pull model can be larger than for the push model. The pull model can also reduce overhead on the processor 1620 compared with the push model; however, it may increase latency and may require increased complexity in the smart endpoint 1632.

In block 1908, the smart endpoint 1632 parses the one or more I/O transactions based on the I/O protocol. The smart endpoint 1632 can parse the I/O transactions to identify or otherwise process the I/O command dispatched by the host. For example, the smart endpoint 1632 can parse the I/O transactions based on the NVMe protocol, the VirtIO protocol, or the AVF protocol. The smart endpoint 1632 can parse the I/O transactions based on the endpoint firmware; thus, the smart endpoint 1632 can be flexibly programmed to handle different or updated I/O protocols.

In block 1910, the smart endpoint 1632 accelerates the I/O command for the load transfer complex 1634. Accelerating the I/O command may include any operation that transfers protocol processing from the load transfer complex 1634 to the smart endpoint 1632. The smart endpoint 1632 can accelerate I/O transactions based on the endpoint firmware; thus, the smart endpoint 1632 can be flexibly programmed to perform different acceleration operations or techniques. In some embodiments, in block 1912, the smart endpoint 1632 may copy the I/O command to the memory 1708 of the smart endpoint 1632 or to the memory 1716 of the load transfer complex 1634. The smart endpoint 1632 can copy the I/O command from a source such as the host memory 1626 or a command buffer in the memory 1708, using the DMA engine 1706 of the smart endpoint 1632. Similarly, in some embodiments, in block 1914, the smart endpoint 1632 may copy protocol data to the memory 1708 of the smart endpoint 1632 or to the memory 1716 of the load transfer complex 1634. The protocol data may be implemented as one or more memory pages, memory ranges, and/or memory addresses identified by the I/O command. For example, the I/O command may include a scatter-gather list, a linked list, or another data structure that identifies the protocol data. In some embodiments, the smart endpoint 1632 may parse multiple lists or other data structures to follow chained protocol data. The smart endpoint 1632 can copy the protocol data from a source such as the host memory 1626 or a buffer in the memory 1708, using the DMA engine 1706. In some embodiments, in block 1916, the smart endpoint 1632 may maintain context across multiple different operations for each accelerated command. For example, the smart endpoint 1632 can maintain context across multiple DMA transactions for each doorbell notification. Thus, by copying I/O commands and/or protocol data, the smart endpoint 1632 can offload one or more DMA transactions from the load transfer complex 1634, which can reduce the processing requirements of, and/or the latency experienced by, the load transfer complex 1634.

In block 1918, the smart endpoint 1632 provides smart context data to the load transfer complex 1634. The smart context may be implemented as any data that can be used by the load transfer complex 1634 to process the I/O command.
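One plausible shape for the smart context just described is sketched below. The field names are assumptions for illustration; the disclosure deliberately leaves the exact contents of the smart context open (commands or pointers to them, protocol data or pointers to it).

    # Hypothetical smart-context record handed from the smart endpoint 1632
    # to the load transfer complex 1634 (all field names are assumptions).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SmartContext:
        command: Optional[bytes] = None        # the I/O command itself, or ...
        command_addr: Optional[int] = None     # ... a pointer to it in memory 1708/1716
        protocol_data: Optional[bytes] = None  # pre-fetched protocol data, or ...
        protocol_addr: Optional[int] = None    # ... a pointer to its location
        ordered: bool = True                   # protocol-mandated ordering already applied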
The smart endpoint 1632 may use any suitable technique to provide the smart context to the load transfer complex 1634. In some embodiments, the smart endpoint 1632 can push the smart context to the load transfer complex 1634, for example using one or more writes or other I/O transactions. In some embodiments, the smart endpoint 1632 may notify the load transfer complex 1634 (e.g., using an interrupt or a doorbell notification), and the load transfer complex 1634 may pull the smart context from the smart endpoint 1632, for example using one or more reads or other I/O transactions. In some embodiments, in block 1920, the smart endpoint 1632 may provide a pointer to the location of the I/O command and/or the protocol data. The pointer may identify commands or data in the memory 1708 of the smart endpoint 1632 or, in some embodiments, in the memory 1716 of the load transfer complex 1634. In some embodiments, in block 1922, the smart endpoint 1632 may enforce the transaction ordering specified by the appropriate I/O protocol. For example, commands and protocol data may arrive at the smart endpoint 1632 in any order in response to DMA transactions, and the smart endpoint 1632 can enforce the protocol ordering before sending the smart context to the load transfer complex 1634.

In block 1924, the load transfer complex 1634 processes the I/O command using the smart context. For example, the load transfer complex 1634 can execute a storage stack to emulate one or more I/O devices for bare-metal virtualization. As part of processing the I/O command, the load transfer complex 1634 may access remote storage 1640 on one or more remote devices 1604 (e.g., storage bays or other storage devices). The load transfer complex 1634 can use the smart context to access I/O command data or protocol data that has already been retrieved or otherwise processed by the smart endpoint 1632, which can reduce the processing requirements on the load transfer complex 1634.

In block 1926, the computing device 1602 determines whether a response is expected for the I/O command. For example, certain I/O commands may be posted, for which no response is expected, or non-posted, for which a response is expected. If a response is not expected, the method 1900 loops back to block 1902 to continue processing I/O commands. If a response is expected, the method 1900 proceeds to block 1928.

In block 1928, the load transfer complex 1634 sends a response to the smart endpoint 1632. The response may be implemented as one or more I/O completions, I/O transactions, or other responses and may include status information, protocol data, or other data generated by the load transfer complex 1634. The load transfer complex 1634 can use any technique to deliver the response. For example, in some embodiments, the load transfer complex 1634 may push the response to the smart endpoint 1632, such as by using one or more writes or other I/O transactions to the memory 1708. As another example, the load transfer complex 1634 may notify the smart endpoint 1632 (e.g., using an interrupt or a doorbell notification), and the smart endpoint 1632 may pull the response from the load transfer complex 1634, such as by using one or more reads or other I/O transactions.

In block 1930, the smart endpoint 1632 forwards the response to the root complex 1624 of the host. The smart endpoint 1632 forwards the response using the technique specified by the corresponding I/O protocol.
For example, the smart endpoint 1632 may forward the response as one or more I/O completions, I/O transactions, interrupts, or other data transfers. In block 1932, the host (e.g., the application 1802 and/or the driver 1804 executed by the processor 1620 of the computing device 1602) processes the response. After processing the response, the method 1900 loops back to block 1902 to continue processing I/O commands.

Referring now to FIGS. 20-21, in use, the computing device 1602 may execute a method 2000 for flexible doorbell notification acceleration. It should be understood that, in some embodiments, the operations of the method 2000 may be performed by one or more components of the environment 1800 of the computing device 1602 as shown in FIG. 18, such as the smart endpoint 1632. The method 2000 begins in block 2002, in which the smart endpoint 1632 is programmed with endpoint firmware embodying a protocol parsing configuration. The endpoint firmware may be received from the host using any suitable technique. For example, the firmware may be configured by the processor 1620 via the RC 1624, or the firmware may be configured out-of-band. The endpoint firmware may be implemented as stored instructions or other data processed by the soft cores 1704 or other programmable elements of the smart endpoint 1632 and may be stored, for example, in the memory 1708 or another volatile or non-volatile storage device of the smart endpoint 1632. As described above, the endpoint firmware can flexibly define the protocol parsing and acceleration operations to be executed by the smart endpoint 1632.

In block 2004, the smart endpoint 1632 monitors for transactions from the host on the transaction layer of the I/O interconnect. These transactions may originate, for example, at the root complex 1624 of the computing device 1602 and may be implemented as, for example, PCI Express transaction layer packets (TLPs) sent from the host. In block 2006, the smart endpoint 1632 determines whether an I/O transaction has been received. If not, the method 2000 loops back to block 2004 to continue monitoring for I/O transactions. If an I/O transaction has been received, the method 2000 proceeds to block 2008.

In block 2008, the smart endpoint 1632 identifies whether the I/O transaction is a doorbell notification. The doorbell notification may be implemented as any interrupt, register write, or other data sent by the host indicating that an I/O command is ready in a command queue in the host memory 1626. In some embodiments, in block 2010, the smart endpoint 1632 may recognize a change in the tail pointer of the command queue. For example, the smart endpoint 1632 can recognize writes to a specific address associated with the tail pointer register. The address may be in the memory space, I/O space, configuration space, or any other address space of the smart endpoint 1632. In block 2012, the smart endpoint 1632 determines whether a doorbell notification was recognized. If so, the method 2000 branches to block 2016, described below. If a doorbell notification was not recognized, the method 2000 proceeds to block 2014.

In block 2014, the smart endpoint 1632 passes the transaction to the load transfer complex 1634 via the endpoint interface 1710, for example as a PCIe TLP. As mentioned above, the load transfer complex 1634 can process I/O transactions as they are received. For example, the load transfer complex 1634 can execute a storage stack to emulate one or more I/O devices for bare-metal virtualization and, as part of processing I/O commands, may access remote storage 1640 on one or more remote devices 1604 (e.g., storage bays or other storage devices). After passing the I/O transaction to the load transfer complex 1634, the method 2000 loops back to block 2004 to continue monitoring for I/O transactions.
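The doorbell check of blocks 2008-2014 amounts to matching incoming writes against the doorbell register address. The sketch below shows that branch under assumed names (TAIL_POINTER_ADDR, Tlp, and the two handler objects); the disclosure does not fix a particular address map or packet representation.

    # Hypothetical doorbell detection for incoming transaction-layer packets.
    from dataclasses import dataclass

    TAIL_POINTER_ADDR = 0x1000  # assumed address of the tail-pointer register

    @dataclass
    class Tlp:
        is_write: bool
        address: int
        value: int

    def is_doorbell(tlp: Tlp) -> bool:
        """Blocks 2008/2010: a write to the tail-pointer register is a doorbell."""
        return tlp.is_write and tlp.address == TAIL_POINTER_ADDR

    def handle(tlp: Tlp, endpoint, load_transfer_complex) -> None:
        if is_doorbell(tlp):                        # block 2012: doorbell recognized
            endpoint.pull_commands(tail=tlp.value)  # blocks 2016-2018: pull commands
        else:                                       # block 2014: pass the TLP through
            load_transfer_complex.process(tlp)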
Returning to block 2012, if a doorbell notification was recognized, the method 2000 branches to block 2016, in which the smart endpoint 1632 identifies one or more command locations based on the doorbell value. For example, the smart endpoint 1632 may identify one or more addresses of I/O commands in the host memory 1626 based on the value of the tail pointer register set by the I/O transaction. In block 2018, the smart endpoint 1632 reads the one or more commands from the host memory 1626. The smart endpoint 1632 can read the commands by executing one or more DMA transactions with the DMA engine 1706. The DMA transactions may transfer the I/O command data to a queue in the memory 1708 of the smart endpoint 1632 or, in some embodiments, to a queue in the memory 1716 of the load transfer complex 1634.

In block 2020, the smart endpoint 1632 identifies one or more blocks, pages, or other ranges of protocol data based on the I/O command. The protocol data may be described by one or more lists or other data structures included in, or otherwise associated with, the I/O command. The smart endpoint 1632 can dereference or otherwise follow multiple chained pointers, descriptors, or other data items to identify the protocol data. For example, in some embodiments, in block 2022, the smart endpoint 1632 may parse one or more scatter-gather lists to identify the protocol data. In block 2024, the smart endpoint 1632 reads the protocol data from the host memory 1626, again by executing one or more DMA transactions with the DMA engine 1706. The DMA transactions may transfer the protocol data to a buffer in the memory 1708 of the smart endpoint 1632 or, in some embodiments, to a buffer in the memory 1716 of the load transfer complex 1634.

In block 2026, the smart endpoint 1632 provides the I/O command data and/or the protocol data to the load transfer complex 1634. The smart endpoint 1632 may use a push model, a pull model, or a combination of those models to provide the data; the specific technique used may depend on the I/O protocol and/or the current application. In some embodiments, in block 2028, the smart endpoint 1632 may store the I/O command data and/or protocol data in the memory 1708 and notify the load transfer complex 1634, for example by sending an interrupt, a doorbell notification, or another indication. After the notification, the load transfer complex 1634 can pull the I/O command data and/or protocol data from the memory 1708. In some embodiments, in block 2030, the smart endpoint 1632 may store the I/O command data and/or protocol data in the memory 1716 of the load transfer complex 1634. The smart endpoint 1632 may, for example, write the data to a buffer in the memory 1716 and update one or more associated tail pointers, descriptors, or other data items. Compared to having the data pulled from the smart endpoint 1632, pushing the data to the load transfer complex 1634 may have lower latency and may allow a larger queue size; however, compared to the pull model, the push model may require increased complexity in the smart endpoint 1632.
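Blocks 2020-2024 walk a scatter-gather list to locate and fetch the protocol data. The sketch below shows one way such a walk could look, assuming a simple (address, length, next) entry layout and an injected DMA-read callable; the actual list format depends on the I/O protocol in use and is not specified here.

    # Hypothetical scatter-gather walk (the entry layout is an assumption).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SgEntry:
        address: int                 # host-memory address of a protocol-data range
        length: int                  # number of bytes in the range
        next: Optional["SgEntry"]    # chained entry, or None at the end of the list

    def gather_protocol_data(head: SgEntry, dma_read) -> bytes:
        """Blocks 2020-2024: follow the chained entries and DMA-read each range."""
        data = bytearray()
        entry: Optional[SgEntry] = head
        while entry is not None:                           # block 2022: parse the chain
            data += dma_read(entry.address, entry.length)  # block 2024: DMA transfer
            entry = entry.next
        return bytes(data)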
After providing the data to the load transfer complex 1634, the method 2000 proceeds to block 2032, shown in FIG. 21.

In block 2032, the smart endpoint 1632 determines whether a response is expected for the I/O command. For example, as described above, certain I/O commands may be posted, for which no response is expected, or non-posted, for which a response is expected. If a response is not expected, the method 2000 loops back to block 2004 shown in FIG. 20 to continue monitoring for I/O transactions. If a response is expected, the method 2000 proceeds to block 2034.

In block 2034, the smart endpoint 1632 monitors for a response from the load transfer complex 1634. As described above, the response may be implemented as one or more I/O completions, I/O transactions, or other responses and may include status information, protocol data, or other data generated by the load transfer complex 1634. The smart endpoint 1632 can use any technique to receive the response, including a push model or a pull model; the specific model used may depend on the particular I/O protocol and/or application in use. In some embodiments, in block 2036, the smart endpoint 1632 may identify a doorbell notification from the load transfer complex 1634. Similar to a doorbell notification from the host, a doorbell notification from the load transfer complex 1634 may be implemented as any interrupt, register write, or other data sent by the load transfer complex 1634 indicating that response data is ready in the memory 1716 of the load transfer complex 1634. The smart endpoint 1632 can similarly use the DMA engine 1706 to read the response from the memory 1716. In some embodiments, in block 2038, the smart endpoint 1632 may identify a response written by the load transfer complex 1634 directly to a queue or buffer in the memory 1708 of the smart endpoint 1632. In block 2040, the smart endpoint 1632 determines whether a response has been received. If not, the method 2000 loops back to block 2034 to continue monitoring for the response. If a response has been received, the method 2000 proceeds to block 2042.

In block 2042, the smart endpoint 1632 forwards the response to the root complex 1624 of the host. The smart endpoint 1632 forwards the response using the technique specified by the corresponding I/O protocol and/or application. For example, the smart endpoint 1632 may forward the response as one or more I/O completions, I/O transactions, interrupts, or other data transfers. The response data itself may be forwarded from the smart endpoint 1632 or from the load transfer complex 1634. In some embodiments, in block 2044, the smart endpoint 1632 may retrieve the response from the load transfer complex 1634, for example by executing one or more DMA transactions with the DMA engine 1706 to transfer the response from the load transfer complex 1634 to the host. In some embodiments, in block 2046, the smart endpoint 1632 may forward the response from a queue or other buffer of the smart endpoint 1632, for example from a buffer in the memory 1708. After forwarding the response, the method 2000 loops back to block 2004 shown in FIG. 20 to continue monitoring for I/O transactions.

Examples

Illustrative examples of the technology disclosed herein are provided below.
An embodiment of the technology may include any one or more, and any combination, of the examples described below.

Example 1 includes a smart endpoint for I/O protocol acceleration, the smart endpoint comprising: a transaction layer to receive an I/O transaction originating from a root port of a computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is further coupled to a load transfer complex of the computing device; a protocol parser to (i) parse, in response to receipt of the I/O transaction, the I/O transaction based on an I/O protocol, and (ii) identify an I/O command in response to parsing of the I/O transaction; and a protocol accelerator to (i) accelerate the I/O command, and (ii) provide, in response to acceleration of the I/O command, a smart context to the load transfer complex.

Example 2 includes the subject matter of Example 1, and further comprising a firmware manager to program endpoint firmware of the smart endpoint, wherein parsing the I/O transaction includes parsing the I/O transaction based on the endpoint firmware.

Example 3 includes the subject matter of any of Examples 1 and 2, and wherein accelerating the I/O command includes copying the I/O command to a memory of the smart endpoint.

Example 4 includes the subject matter of any of Examples 1-3, and wherein accelerating the I/O command includes copying the I/O command to a memory of the load transfer complex.

Example 5 includes the subject matter of any of Examples 1-4, and wherein accelerating the I/O command includes: identifying protocol data associated with the I/O command; and copying the protocol data to a memory of the smart endpoint.

Example 6 includes the subject matter of any of Examples 1-5, and wherein accelerating the I/O command includes: identifying protocol data associated with the I/O command; and copying the protocol data to a memory of the load transfer complex.

Example 7 includes the subject matter of any of Examples 1-6, and wherein providing the smart context includes providing, to the load transfer complex, a pointer to the location of the I/O command.

Example 8 includes the subject matter of any of Examples 1-7, and wherein accelerating the I/O command includes identifying protocol data associated with the I/O command; and providing the smart context includes providing, to the load transfer complex, a pointer to the location of the protocol data.

Example 9 includes the subject matter of any of Examples 1-8, and wherein parsing the I/O transaction includes determining whether the I/O transaction is a doorbell notification; identifying the I/O command includes identifying the I/O command in a host memory in response to determining that the I/O transaction is a doorbell notification; and accelerating the I/O command includes reading the I/O command from the host memory.

Example 10 includes the subject matter of any of Examples 1-9, and wherein determining whether the I/O transaction is a doorbell notification includes determining whether the I/O transaction includes a tail pointer update.

Example 11 includes the subject matter of any of Examples 1-10, and wherein providing the smart context to the load transfer complex includes providing the I/O command to the load transfer complex.

Example 12 includes the subject matter of any of Examples 1-11, and wherein accelerating the I/O command further includes: identifying protocol data in the host memory based on the I/O command; and reading the protocol data from the host memory.
Example 13 includes the subject matter of any of Examples 1-12, and wherein identifying the protocol data includes parsing a scatter-gather list of the I/O command.

Example 14 includes the subject matter of any of Examples 1-13, and wherein providing the smart context to the load transfer complex includes providing the protocol data to the load transfer complex.

Example 15 includes the subject matter of any of Examples 1-14, and wherein the I/O command includes an NVMe command, a VirtIO command, or an AVF command.

Example 16 includes the subject matter of any of Examples 1-15, and wherein the protocol accelerator is further to receive, from the load transfer complex, a response to the I/O command in response to providing the smart context to the load transfer complex; and the transaction layer is further to forward the response to the root complex in response to receiving the response.

Example 17 includes the subject matter of any of Examples 1-16, and wherein receiving the response includes: receiving a doorbell notification from the load transfer complex; and reading the response from the load transfer complex in response to receiving the doorbell notification.

Example 18 includes a method for I/O protocol acceleration, the method comprising: receiving, by a smart endpoint of a computing device, an I/O transaction originating from a root port of the computing device, wherein the smart endpoint is coupled to the root port, and wherein the smart endpoint is further coupled to a load transfer complex of the computing device; parsing, by the smart endpoint, the I/O transaction based on an I/O protocol in response to receiving the I/O transaction; identifying, by the smart endpoint, an I/O command in response to parsing the I/O transaction; accelerating, by the smart endpoint, the I/O command; and providing, by the smart endpoint, a smart context to the load transfer complex in response to accelerating the I/O command.

Example 19 includes the subject matter of Example 18, and further comprising programming, by the computing device, endpoint firmware of the smart endpoint, wherein parsing the I/O transaction includes parsing the I/O transaction based on the endpoint firmware.

Example 20 includes the subject matter of any of Examples 18 and 19, and wherein accelerating the I/O command includes copying the I/O command to a memory of the smart endpoint.

Example 21 includes the subject matter of any of Examples 18-20, and wherein accelerating the I/O command includes copying the I/O command to a memory of the load transfer complex.

Example 22 includes the subject matter of any of Examples 18-21, and wherein accelerating the I/O command includes: identifying protocol data associated with the I/O command; and copying the protocol data to a memory of the smart endpoint.

Example 23 includes the subject matter of any of Examples 18-22, and wherein accelerating the I/O command includes: identifying protocol data associated with the I/O command; and copying the protocol data to a memory of the load transfer complex.

Example 24 includes the subject matter of any of Examples 18-23, and wherein providing the smart context includes providing, to the load transfer complex, a pointer to the location of the I/O command.

Example 25 includes the subject matter of any of Examples 18-24, and wherein accelerating the I/O command includes identifying protocol data associated with the I/O command; and providing the smart context includes providing, to the load transfer complex, a pointer to the location of the protocol data.
Example 26 includes the subject matter of any of Examples 18-25, and wherein parsing the I/O transaction includes determining whether the I/O transaction is a doorbell notification; identifying the I/O command includes identifying the I/O command in a host memory in response to determining that the I/O transaction is a doorbell notification; and accelerating the I/O command includes reading the I/O command from the host memory.

Example 27 includes the subject matter of any of Examples 18-26, and wherein determining whether the I/O transaction is a doorbell notification includes determining whether the I/O transaction includes a tail pointer update.

Example 28 includes the subject matter of any of Examples 18-27, and wherein providing the smart context to the load transfer complex includes providing the I/O command to the load transfer complex.

Example 29 includes the subject matter of any of Examples 18-28, and wherein accelerating the I/O command further includes: identifying protocol data in the host memory based on the I/O command; and reading the protocol data from the host memory.

Example 30 includes the subject matter of any of Examples 18-29, and wherein identifying the protocol data includes parsing a scatter-gather list of the I/O command.

Example 31 includes the subject matter of any of Examples 18-30, and wherein providing the smart context to the load transfer complex includes providing the protocol data to the load transfer complex.

Example 32 includes the subject matter of any of Examples 18-31, and wherein the I/O command includes an NVMe command, a VirtIO command, or an AVF command.

Example 33 includes the subject matter of any of Examples 18-32, and further comprising: receiving, by the smart endpoint, a response to the I/O command from the load transfer complex in response to providing the smart context to the load transfer complex; and forwarding, by the smart endpoint, the response to the root complex in response to receiving the response.

Example 34 includes the subject matter of any of Examples 18-33, and wherein receiving the response includes: receiving a doorbell notification from the load transfer complex; and reading the response from the load transfer complex in response to receiving the doorbell notification.

Example 35 includes a computing device comprising: a processor; and a memory having stored therein a plurality of instructions that, when executed by the processor, cause the computing device to perform the method of any of Examples 18-34.

Example 36 includes one or more non-transitory computer-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 18-34.

Example 37 includes a computing device comprising means for performing the method of any of Examples 18-34.
Embodiments contained in the disclosure provide a method and apparatus for device-specific thermal mitigation. The thermal and power behavior of the device is characterized. A thermal threshold is then determined for the device. The thermal data and a thermal ramp factor for each device are determined and stored in a cross-reference matrix. Correlation factors are determined for temperature and frequency. These correlation factors determine a device mitigation temperature. The device mitigation temperature may be stored in a fuse table on the device, with a fuse blown on the device to permanently store the device mitigation temperature. The apparatus includes an electronic device, a memory within the electronic device, and a set of fuses within the electronic device. The apparatus also includes means for determining whether a static or dynamic power component is high, and means for mitigating the voltage and frequency used by the device based on that determination.
1. A method for device-specific thermal mitigation, comprising:
characterizing the thermal behavior of a device;
characterizing the power behavior of the device; and
determining a thermal threshold tolerance for the device.

2. The method of claim 1, further comprising:
storing the thermal threshold data in a cross-reference matrix.

3. The method of claim 1, further comprising:
determining a thermal ramp factor for each device;
determining a correlation factor for the device based on temperature and frequency;
storing the temperature and voltage correlation factors in a cross-reference matrix;
storing the temperature and frequency correlation factors in the cross-reference matrix; and
determining, based on the correlation factors, a device mitigation temperature.

4. The method of claim 3, further comprising:
storing the device mitigation temperature in a fuse table on the device; and
blowing a fuse on the device to permanently store the device mitigation temperature.

5. The method of claim 4, further comprising:
operating the device based on the device mitigation temperature.

6. The method of claim 4, wherein a device mitigation power factor is also permanently stored in the device.

7. The method of claim 4, wherein dynamic power ratios based on voltage and frequency are encoded within the device.

8. The method of claim 4, wherein a static power ratio based on voltage and frequency is encoded within the device.

9. The method of claim 7, further comprising:
determining whether the dynamic component of net power is high; and
mitigating the frequency used by the device if the dynamic component of the net power is high.

10. The method of claim 8, further comprising:
determining whether the static power component is high; and
mitigating the frequency and voltage used by the device if the static component of the net power is high.

11. An apparatus for device-specific thermal mitigation, comprising:
an electronic device;
a memory within the electronic device; and
a set of fuses within the electronic device.

12. The apparatus of claim 11, wherein at least one of the fuses in the set of fuses has been blown to permanently store a device mitigation temperature.

13. An apparatus for device-specific thermal mitigation, comprising:
means for characterizing the thermal behavior of a device;
means for characterizing the power behavior of the device; and
means for determining a thermal threshold tolerance for the device.

14. The apparatus of claim 13, further comprising:
means for storing the thermal threshold data in a cross-reference matrix.

15. The apparatus of claim 13, further comprising:
means for determining a thermal ramp factor for each device;
means for determining a correlation factor for the device based on temperature and frequency;
means for storing the temperature and voltage correlation factors in a cross-reference matrix;
means for storing the temperature and frequency correlation factors in the cross-reference matrix; and
means for determining a device mitigation temperature based on the correlation factors.

16. The apparatus of claim 15, further comprising:
means for storing the device mitigation temperature in a fuse table on the device; and
means for blowing a fuse on the device to permanently store the device mitigation temperature.

17. The apparatus of claim 16, further comprising:
means for encoding a dynamic component of net power within the device.

18. The apparatus of claim 16, further comprising:
means for encoding a static component of net power within the device.

19. The apparatus of claim 17, further comprising:
means for determining whether the dynamic component of the net power is high; and
means for mitigating the frequency used by the device if the dynamic component of the net power is high.
comprising: means for determining whether the dynamic component of the net power is high; and means for slowing the frequency used by the device if the dynamic component of the net power is high. 20. The apparatus of claim 18, further comprising: means for determining whether the static component of the net power is high; and means for slowing the frequency and voltage used by the device if the static component of the net power is high.
Device-Specific Thermal Mitigation

Cross-Reference to Related Applications
This application claims the benefit of and priority to Non-Provisional Application No. 14/696,182, filed on April 24, 2015, in the U.S. Patent and Trademark Office, the entire contents of which are hereby incorporated by reference.

Technical Field
The present disclosure generally relates to thermal mitigation strategies for integrated circuits, and more specifically to device-specific thermal mitigation that avoids over-current, high power, and uncontrolled thermal behavior while optimizing performance.

Background
Integrated circuits (ICs) are used in most electronic devices, including desktop computers, laptop computers, tablet computers, mobile phones, smart phones, and other personal devices. The range of applications for these devices continues to grow, and usage increases as more applications become available. Integrated circuits have become integral components of their devices. Integrated circuits have also become significantly more complex, with multiple cores that provide a wide variety of processing tools. A typical example is the system-on-chip (SoC) found in many smart phones. Many electronic devices use multiple complex integrated circuits or processors to perform tasks directed by a wide variety of applications.
The increased use of processors results in heat generated by the operation of the on-chip circuitry. This heat can build up and lead to unsatisfactory device performance, data loss, or failure. Failures within the device can be limited to a single highly utilized core, or can be more widely distributed so that multiple cores are affected. Even when failures do not occur, performance can degrade. In smart phones, SoCs may have difficulty tolerating temperatures near their high-temperature limits. Near those limits, SoC performance can suffer as the operating frequency bounces between high and low values. Each integrated circuit is unique and varies in how strongly it is affected by high temperatures and how quickly it cools down. Tests can be used to determine the high-temperature behavior of an IC and to set performance limits.
ICs are frequently tested in large lots because many devices may need to be delivered to electronics manufacturers to sustain production. In this case, the test determines the IC device specification for the entire lot. Although each IC is unique, it is not feasible to individually determine and specify operating characteristics when the lot size is large. In practice, this means that the behavior of the worst-testing device in the lot determines the thermal benchmark for the entire device population. Using the worst-case device as a benchmark can save time, but can lead to underestimating an IC's capability and result in suboptimal performance. There is a need in the art for device-specific thermal mitigation that avoids over-current, high power, or uncontrolled thermal behavior.

Summary of the Invention
The embodiments included in the present disclosure provide a method for device-specific thermal mitigation. The thermal behavior of a device such as an SoC is characterized, as is its power behavior. A thermal threshold is then determined for the device based on the thermal and power behavior. Thermal data and thermal ramp factors for each device are stored in a cross-reference matrix. Correlation factors are determined for temperature versus voltage and for temperature versus frequency.
These correlation factors are used to determine the device mitigation temperature for a particular device. The device mitigation temperature can be stored in an EEPROM or in a fuse table on the device, where a fuse is blown on the device to permanently store the device mitigation temperature. The individual device can then be controlled by software in accordance with the device mitigation temperature.
Another embodiment provides an apparatus for device-specific thermal mitigation. The apparatus includes an electronic device, a memory within the electronic device, and a set of fuses within the electronic device. At least one of the fuses may be blown to permanently store the device mitigation temperature.
Yet another embodiment provides a device for device-specific thermal mitigation. The device includes means for characterizing the thermal behavior of the device; means for characterizing the power behavior of the device; and means for determining a thermal threshold tolerance for the device. The device also includes means for determining whether the static or dynamic power is high, and means for reducing the voltage and frequency used by the device based on that determination.

Brief Description of the Drawings
FIG. 1 shows a rapid thermal gradient for a plurality of active cores according to embodiments described herein.
FIG. 2 provides an overview of a method of determining mitigation temperature, voltage sensitivity, and frequency sensitivity according to embodiments described herein.
FIG. 3 is a flowchart of a method of encoding power and temperature behavior in each device according to embodiments described herein.
FIG. 4 is a flowchart of a method of encoding frequency and temperature behavior in each device according to embodiments described herein.
FIG. 5 is a flow diagram of a method of encoding thermal ramp information in each device according to embodiments described herein.
FIG. 6 is a flow diagram of a method for device-specific thermal mitigation according to embodiments described herein.

Detailed Description
The detailed description set forth below in connection with the appended drawings is intended as a description of exemplary embodiments of the present invention and is not intended to represent the only embodiments in which the present invention may be practiced. The term "exemplary" used throughout this description means "serving as an example, instance, or illustration," and is not necessarily to be construed as preferred or advantageous over other exemplary embodiments. The detailed description includes specific details for the purpose of providing a thorough understanding of the exemplary embodiments of the invention. It will be apparent to those skilled in the art that the exemplary embodiments of the invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form in order to avoid obscuring the novelty of the exemplary embodiments presented herein.
As used in this application, the terms "component," "module," "system," and the like are intended to refer to a computer-related entity: hardware, firmware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to, a process running on a processor, an integrated circuit, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computing device and the computing device itself can be a component.
One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer-readable media having various data structures stored thereon. Components can communicate by way of local and/or remote processes, such as in accordance with a signal having one or more data packets (e.g., data from one component interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet with other systems by way of the signal).
In addition, various features or aspects described herein may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer-readable media may include, but are not limited to, magnetic storage devices (e.g., hard disks, floppy disks, magnetic tapes), optical disks (e.g., compact disc (CD), digital versatile disc (DVD)), smart cards, flash memory devices (e.g., cards, sticks, key drives), and integrated circuits such as read-only memory, programmable read-only memory, and electrically erasable programmable read-only memory.
Various aspects will be presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. Combinations of these approaches may also be used.
Other aspects of the invention, as well as features and advantages of various aspects, will be apparent to those skilled in the art from consideration of the specification, the drawings, and the appended claims.
ICs and SoCs are evaluated by subjecting them to thermal tests, a process that may also be known as thermal benchmarking. Thermal benchmarks establish the behavior of the device and determine the operating parameters of the device. Tests such as multi-core Dhrystone tests can be used for thermal benchmarking. These values are used to determine temperature limits and core design constraints when the device is included in end products such as smart phones, tablet computers, and other electronic devices.
In operation, electronic devices generate heat. This heat is generated in the active cores of the electronic device's IC or SoC. The heat generated by the active cores increases the temperature of the die containing those cores. As the die temperature increases, the temperature ramp is expected to be proportional to the power dissipated by the cores.
Existing mitigation algorithms and temperature limits are generic: the worst-case performer in the group determines the performance limits for the whole device group. As a result, performance can be sacrificed to achieve thermal stability. The worst-case device can have a faster thermal ramp than the general device population, and for these worst-case devices, tighter temperature mitigation is needed to ensure stability. When the limits are set so that the lowest-performing devices can operate, the remainder of the device population is penalized and its performance is unnecessarily degraded.
The embodiments described herein provide mitigation only for the devices that require it and avoid penalizing the device population as a whole.
Thermal control can be performed by adjusting frequency or voltage. Components with higher dynamic power are more affected by frequency reduction, while devices with higher static power are more affected by voltage reduction. Using the embodiments described herein, frequency or voltage can be managed aggressively on a per-component basis.
FIG. 1 shows the behavior of a temperature sensor on an SoC device during a system-level test. As shown in FIG. 1, there is a sudden temperature rise due to core activity. This sudden heating can lead to a noticeable overshoot of the die temperature limit; such an event is known as a fast thermal gradient (FTG) and can cause a potential system or device failure or crash.
System-level testing can use software to test the die's tolerance for increased temperature. Dies with low thermal tolerance can be slowed in frequency and can bounce or oscillate between lower and higher frequencies. During the test, the temperature can be held between 80 and 90 degrees Celsius while the device behavior is observed. Each IC is unique: devices operate at high temperature for different periods of time and cool down at different rates. In addition, some devices can have high static power and do not cool down well. For most system-level tests, the worst-case device determines the thermal reference, and the thermal reference must be strict enough that the worst-case device can operate. If the test method instead identifies the worst-performing devices and manages their individual temperature profiles, overall device performance is improved.
The embodiments described herein provide optimized voltage tables for devices such as application and graphics processors, modems, and SoCs that maximize performance and minimize power. More specifically, the embodiments described herein provide thermal mitigation set points programmed individually into each device. These set points are stored in a fuse table, and the fuse table is read to determine the mitigation temperature for the device. As a result, over-current and over-temperature events that degrade the performance of smart phones, tablets, or PCs are avoided. In addition, each component's customized mitigation solution maximizes performance while minimizing risk. Above-average devices are not penalized by the behavior of a limited sample that requires aggressive mitigation.
FIG. 2 provides a schematic of the method of determining the mitigation temperature, voltage sensitivity, and frequency sensitivity. Method 200 provides for performing thermal and power characterization for each device in step 202. This characterization is performed in the test form factor. In step 204, the corresponding behavior of the commercial form factor is determined in parallel. These values are used to determine the thermal threshold margin for each part or device under test. In step 208, these values are placed in the matrix. In step 206, thermal ramp information, temperature-to-voltage and temperature-to-frequency correlations, and process data are stored separately, possibly in the cloud, in EEPROM or fuses on the device, or in device software. In step 210, the mitigation temperature recommendation for each device is stored in the fuses of that device. The value can be read back by software when the method is executed. In step 214, tables for mitigation temperature, voltage sensitivity, frequency sensitivity, and sampling rate are determined. These tables are built from the fuses inside the device and the device form factors.
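By way of illustration only, the following sketch shows one way the per-device characterization data and the cross-reference matrix of method 200 might be represented in software. The disclosure does not specify concrete data structures or formulas, so every field name, the guard-band rule, and the 90-degree die limit below are hypothetical assumptions rather than part of the described method.

```python
# Illustrative sketch only: all names and the derivation rule are hypothetical.
from dataclasses import dataclass

@dataclass
class DeviceCharacterization:
    """Per-device data gathered in steps 202-206 (assumed fields)."""
    device_id: str
    thermal_ramp_c_per_s: float   # thermal ramp factor (step 206)
    temp_voltage_corr: float      # temperature-to-voltage correlation
    temp_freq_corr: float         # temperature-to-frequency correlation
    threshold_margin_c: float     # thermal threshold margin (steps 202-204)

# Cross-reference matrix keyed by device id (step 208).
cross_reference_matrix: dict[str, DeviceCharacterization] = {}

def derive_mitigation_temperature(dev: DeviceCharacterization,
                                  die_limit_c: float = 90.0) -> float:
    """Derive a per-device mitigation temperature (step 210).

    Hypothetical rule: a steeper thermal ramp earns a set point further
    below the die limit, so mitigation starts earlier on "hot" parts.
    """
    guard_band = dev.thermal_ramp_c_per_s * 2.0  # assumed scaling
    return die_limit_c - dev.threshold_margin_c - guard_band

dev = DeviceCharacterization("SOC-0001", 1.8, 0.6, 0.7, 3.0)
cross_reference_matrix[dev.device_id] = dev
print(derive_mitigation_temperature(dev))  # 83.4, later burned into fuses
```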
FIG. 3 illustrates a method of encoding power and temperature behavior within each device. Method 300 begins at step 302, where each device is tested for power and temperature behavior. As part of this determination, in step 304, the power value is determined. In step 306, the value is encoded within the device. This step is performed for each device in the lot. In step 308, fuses in the device are blown to permanently store the power value. Next, the temperature value is determined for each device in step 310. In step 312, the value is encoded in each device. In step 314, a fuse is blown in each device to permanently store the temperature value.
The values for power and temperature behavior are thus encoded by blowing fuses in each device, and the encoded values are specific to that device. A single mitigation temperature can be stored and used to customize the response to the thermal ramp rate of each device. The stored table defines a single mitigation temperature, thereby avoiding over-current and other power issues.
FIG. 4 is a flowchart of a method of encoding frequency and temperature behavior within each device. Method 400 begins at step 402, where static and dynamic frequency power ratios are encoded within each device. In step 404, static and dynamic power relative to voltage and frequency are encoded in each device. In step 406, the operating voltage is measured. This measurement can be performed by the processor using a software table. In decision block 408, it is determined whether the power measured at the operating voltage, frequency, and temperature has a high dynamic component or a high static component. If the dynamic component is high, frequency slowing is selected in step 410 for the device under consideration. If the measured operating power has a high static component, aggressive frequency and voltage reduction is required. This aggressive slowdown is limited to the devices that exhibit high static values; the requirement does not characterize the entire population of devices.
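The choice in decision block 408 can be expressed compactly in software. The sketch below is a minimal, hypothetical rendering of that choice; the patent does not define what counts as a "high" component, so the 50% dominance test and the function and parameter names are assumptions for illustration.

```python
# Hypothetical sketch of decision block 408; threshold and names are assumed.
def select_mitigation(dynamic_power_w: float, static_power_w: float) -> str:
    """Choose a mitigation action based on which power component dominates.

    Per FIG. 4: a high dynamic component calls for frequency slowing only
    (step 410); a high static component calls for more aggressive combined
    frequency and voltage reduction.
    """
    net_power = dynamic_power_w + static_power_w
    if net_power == 0.0:
        return "none"
    if dynamic_power_w / net_power > 0.5:  # assumed "high dynamic" test
        return "reduce_frequency"
    return "reduce_frequency_and_voltage"  # high static component

# Example: a static-dominated part gets the aggressive treatment.
print(select_mitigation(dynamic_power_w=0.8, static_power_w=1.4))
```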
FIG. 5 is a flowchart of a method of encoding thermal ramp information in each device. Using blown fuses as described above, the thermal ramp is encoded in each device. Method 500 begins at step 502, where each device is thermally tested. Next, a thermal ramp is determined for each device in step 504. The thermal ramp is encoded in each device using blown fuses in step 506. A look-up table can then be used to determine the mitigation temperature at which each device initiates mitigation measures to avoid over-temperature issues.
FIG. 6 is a flow diagram of a method that provides device-specific thermal mitigation to avoid over-current, high power, and uncontrolled thermal behavior. Method 600 begins at step 602, where each device is characterized for thermal and power behavior. Next, in step 604, a thermal threshold is determined for each device based on that characterization. Then, in step 606, the thermal threshold information is loaded or stored in the thermal threshold tolerance cross-reference matrix. In step 608, thermal ramp parameters for each device are determined based on the above information. Then, in step 610, the correlation between temperature and voltage is determined. Similarly, the correlation between temperature and frequency is determined in step 612. In step 614, these correlation factors are also stored in the cross-reference matrix. Based on the correlation information, the device mitigation temperature is determined in step 616. The device mitigation temperature may then be stored, in step 618, in a fuse on the device and in the fuse table. The control logic of the ASIC or SoC then uses the associated data set in the matrix to limit the maximum voltage of those SoC devices that are sensitive to voltage/temperature conditions, and to limit the maximum frequency of those SoC devices that are sensitive to frequency/temperature conditions, when thermal mitigation is needed. Additionally, in some cases, the associated data set is used to determine the switching frequency on some of the SoC devices based on the frequency/temperature profile to keep the device below the maximum temperature in step 620.
Fuse information can be stored as an automatic test equipment (ATE) fuse table. This table contains fuse information for each device tested on the ATE. Additional embodiments provide for changing the sampling rate, which allows polling at a higher rate for at-risk devices. Each line of the fuse matrix can correspond to a different form factor. A look-up or scaling table can be provided and accessed using software. The software can contain detailed device threshold tables for temperature, voltage, and frequency. When executed, the software reads back the fuse information and form factor and uses the threshold tables to customize each device. Performance can then be optimized using algorithms tuned to each component.
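For illustration, the following sketch shows one hypothetical way software might read such an ATE fuse table and decode per-device mitigation settings. The field layout, bit widths, and scaling values are invented for this example; the disclosure only states that fuse values are read back and interpreted through look-up or scaling tables.

```python
# Hypothetical read-back sketch: the packed-word layout and tables are assumed.
ATE_FUSE_TABLE = {
    # device_id: packed fuse word (assumed 8-bit mitigation temperature,
    # 4-bit sensitivity code, 4-bit sampling-rate code)
    "SOC-0001": 0x5A23,
}

SAMPLING_RATE_HZ = {0x0: 1, 0x1: 2, 0x2: 5, 0x3: 10}  # assumed scaling table

def read_mitigation_config(device_id: str) -> dict:
    word = ATE_FUSE_TABLE[device_id]
    return {
        "mitigation_temp_c": (word >> 8) & 0xFF,       # upper byte
        "sensitivity_code": (word >> 4) & 0xF,         # voltage/freq profile
        "poll_rate_hz": SAMPLING_RATE_HZ[word & 0x3],  # higher for risky parts
    }

print(read_mitigation_config("SOC-0001"))
# -> {'mitigation_temp_c': 90, 'sensitivity_code': 2, 'poll_rate_hz': 10}
```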
Those skilled in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the exemplary embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments of the invention.
The various illustrative logical blocks, modules, and circuits described in connection with the exemplary embodiments disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
In one or more exemplary embodiments, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
The previous description of the disclosed exemplary embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these exemplary embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the exemplary embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Integrated fuses in self-aligned gate endcaps for FinFET architectures, and methods of fabrication, are described. A device structure in some embodiments includes a first gate on a first fin and a second gate on a second fin, wherein the second gate is spaced apart from the first gate by a distance. A fuse spans the distance and is in contact with the first gate and the second gate. A first dielectric is between the first fin and the second fin, wherein the first dielectric is in contact with, and below, the fuse, and a second dielectric is between the first gate and the second gate, wherein the second dielectric is on the fuse.
1. A device structure comprising: a first gate on a first fin; a second gate on a second fin, wherein the second gate is spaced a distance from the first gate; a fuse spanning the distance and in contact with the first gate and the second gate; a first dielectric between the first fin and the second fin, wherein the first dielectric is in contact with and below the fuse; and a second dielectric between the first gate and the second gate, wherein the second dielectric is on the fuse. 2. The device structure of claim 1, wherein the first gate and the second gate each comprise a gate dielectric and a gate metal, wherein the gate metal comprises one or more of tantalum, titanium, nitrogen, or tungsten. 3. The device structure of claim 1, wherein the fuse includes the gate metal and has a minimum thickness of 2 nm. 4. The device structure of claim 3, wherein the gate dielectric is adjacent to sidewalls of the first dielectric and sidewalls of the second dielectric. 5. The device structure of claim 4, wherein the fuse is between the gate dielectric adjacent to the first dielectric and the gate dielectric adjacent to the second dielectric. 6. The device structure of any one of claims 1-5, wherein the first dielectric comprises silicon and one or more of oxygen, nitrogen, and carbon, and wherein the second dielectric comprises one or more of Hf, Zr, La, and oxygen. 7. The device structure of any of claims 1-5, wherein the distance is between 15 nm and 25 nm. 8. The device structure of any one of claims 1-5, wherein the distance is a first distance, and the device structure further comprises: a third gate on a third fin, wherein the third gate is spaced apart from the second gate by a second distance; a third dielectric between the first fin and the third fin; and a fourth dielectric on the third dielectric, with no fuse between the third dielectric and the fourth dielectric. 9. The device structure of claim 8, wherein the second distance is between 15 nm and 25 nm. 10. The device structure of any one of claims 1-5, wherein the distance is a first distance, and the device structure further comprises: a fourth gate on a fourth fin spaced a third distance from the second gate; a fifth dielectric between the fourth fin and the second fin; a sixth dielectric within the fifth dielectric; a seventh dielectric over the fifth dielectric and the sixth dielectric; and a liner layer in contact with and between the sixth and seventh dielectrics, without a fuse between the sixth and seventh dielectrics. 11. The device structure of claim 10, wherein the third distance is at least 50 nm. 12. The device structure of claim 10, wherein the liner layer is at least 3 nm thick, and wherein the liner layer comprises silicon and at least one of oxygen, nitrogen, or carbon. 13. The device structure of any of claims 1-5, wherein the fuse is discontinuous between the first gate and the second gate. 14.
The device structure of any of claims 1-5, wherein the interface between the first dielectric and the second dielectric is non-planar and the fuse is non-planar. 15. A device structure comprising: a first gate on a first fin; a second gate on a second fin, wherein the second gate is spaced apart from the first gate by a first distance; a fuse spanning the first distance and in contact with the first gate and the second gate; a first dielectric between the first fin and the second fin, wherein the first dielectric is in contact with and below the fuse; a second dielectric between the first gate and the second gate, wherein the second dielectric is on the fuse; a third gate on a third fin, wherein the third gate is spaced apart from the second gate by a second distance; a third dielectric between the first fin and the third fin; and a fourth dielectric on the third dielectric, with no fuse between the third dielectric and the fourth dielectric. 16. The device structure of claim 15, wherein the first gate and the second gate each comprise a gate dielectric and a gate metal, and wherein the gate metal comprises one or more of tantalum, titanium, nitrogen, or tungsten. 17. The device structure of claim 16, wherein the fuse comprises the gate metal, and wherein the fuse has a minimum thickness of 2 nm. 18. The device structure of any of claims 15-17, wherein the fuse is between the gate dielectric adjacent to the first dielectric and the gate dielectric adjacent to the second dielectric, and wherein the first distance is between 15 nm and 25 nm and the second distance is at least 50 nm. 19. A method of manufacturing a fuse, the method comprising: forming a first fin and a second fin spaced apart by a distance; forming a dielectric liner adjacent to the first fin and the second fin; depositing a dielectric layer on the dielectric liner between the first fin and the second fin; recessing the dielectric layer below the first fin and the second fin; forming a liner layer over the dielectric liner and on the dielectric layer; forming a mask dielectric on the liner layer; removing the liner layer from above the first fin and the second fin; removing the liner layer from the region between the dielectric layer and the mask dielectric to form a void; and forming a gate on the first fin and the second fin, wherein forming the gate includes depositing metal in the void to form a fuse connected to the gate. 20. The method of claim 19, wherein the distance is a first distance, and the method further comprises: forming a third fin adjacent to the second fin, wherein the third fin is spaced apart from the second fin by a second distance; forming the dielectric layer between the second fin and the third fin; forming the liner layer on the dielectric layer; forming the mask dielectric on the liner layer; partially removing the liner layer from the region between the dielectric layer and the mask dielectric; and forming a gate on the third fin. 21. The method of any of claims 19-20, further comprising: recessing the dielectric liner below the uppermost surfaces of the first and second fins; forming a dummy gate structure on the first fin, on the second fin, and on the dielectric liner; forming a dielectric barrier adjacent to the dummy gate structure; and removing the dummy gate structure to expose the dielectric liner and the dielectric layer between the first and second fins.
Integrated Fuse in a Self-Aligned Gate Endcap for FinFET Architectures and Methods of Fabrication

Background
Scaling of features in integrated circuits has been a driving force behind the growing semiconductor industry over the past few decades. Scaling to smaller and smaller features makes it possible to increase the density of functional units on the limited chip area of a semiconductor chip. For example, shrinking transistor size allows an increased number of devices to be incorporated on a chip, thereby facilitating the manufacture of products with increased functionality. For SoC applications, two or more transistors can be connected through one-time programmable fuses. Although fuses have been used in interconnect structures, integrating fuses within the transistor gate level provides many device and processing advantages.

Brief Description of the Drawings
The materials described herein are illustrated by way of example and not by way of limitation in the accompanying drawings. For simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Furthermore, various physical features may be represented in their simplified "ideal" forms and geometries for clarity of discussion, it being understood that actual implementations may only approximate the ideals shown. For example, smooth surfaces and square intersections may be drawn without regard to the finite roughness, rounded corners, and imperfect angular intersections characteristic of structures formed by nanofabrication techniques. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.
FIG. 1A is a plan view illustration of a device structure including a plurality of transistors in accordance with an embodiment of the present disclosure.
FIG. 1B is a cross-sectional illustration of the device structure through line A-A'.
FIG. 1C is an enhanced cross-sectional illustration of a portion of the gate on the first fin.
FIG. 1D is an enhanced cross-sectional illustration of the fuse.
FIG. 1E is an enhanced cross-sectional illustration of a fuse with voids.
FIG. 1F is an enhanced cross-sectional illustration of a fuse having an irregular shape.
FIG. 1G is an enhanced cross-sectional illustration of a portion of the gate end cap structure depicted in FIG. 1B.
FIG. 2A is a cross-sectional illustration of a device structure in which first and second transistors are separated by a gate end cap structure.
FIG. 2B is a cross-sectional illustration of a device structure in which each transistor includes two fins.
FIG. 3 illustrates a method of fabricating a device structure such as that described in connection with FIG. 1B.
FIG. 4A is a cross-sectional illustration of a plurality of fin structures patterned over a substrate in accordance with an embodiment of the present disclosure.
FIG. 4B is a cross-sectional illustration of the structure in FIG. 4A after forming the first dielectric.
FIG. 4C shows the structure of FIG. 4B after forming a second dielectric on the first dielectric.
FIG. 4D shows the structure of FIG. 4C after forming a third dielectric on the second dielectric.
FIG. 4E shows the structure of FIG. 4D after the process of recessing the first and second dielectrics and selectively recessing the third dielectric.
FIG. 4F shows the structure of FIG.
4E after the formation of the liner layer.
FIG. 4G shows the structure of FIG. 4F after forming a fourth dielectric on the liner layer and after a planarization process.
FIG. 5A shows the structure of FIG. 4G after the process of recessing the first dielectric.
FIG. 5B is a plan view cross-section of the illustration in FIG. 5A.
FIG. 6A shows the structure of FIG. 5B after forming dummy gate structures in the plurality of openings.
FIG. 6B shows the structure of FIG. 6A after forming a fifth dielectric in the plurality of openings.
FIG. 6C shows the structure of FIG. 6B after removal of the dummy gate structure.
FIG. 7A shows a cross-sectional view along line A-A' of the structure of FIG. 6B after the process of removing the liner layer.
FIG. 7B is a plan view illustration of the structure of FIG. 7A.
FIG. 8A shows the structure of FIG. 7A after forming fuses and multiple transistor gates.
FIG. 8B is an enhanced cross-sectional illustration of a fuse formed between two dielectric layers.
FIG. 9A is a cross-sectional illustration of a plurality of fin structures that have undergone the processing operations described in conjunction with FIGS. 4A-4F.
FIG. 9B shows the structure of FIG. 9A after a process of etching and removing the liner layer not covered by the mask layer.
FIG. 9C shows the structure of FIG. 9B after removing the mask layer, depositing an eighth dielectric in the openings between the first and second fins and between the third and fourth fins, and then performing a planarization process.
FIG. 10 illustrates a computing device according to an embodiment of the present disclosure.
FIG. 11 illustrates an integrated circuit (IC) structure including one or more embodiments of the present disclosure.
FIG. 12A is a plan view illustration of a memory cell coupled with a transistor adjacent to a gate end cap in accordance with an embodiment of the present disclosure.
FIG. 12B is a cross-sectional illustration of a memory cell in accordance with an embodiment of the present disclosure.
FIG. 12C is a cross-sectional illustration of a magnetic tunnel junction device according to an embodiment of the present disclosure.
FIG. 12D is a cross-sectional illustration of a resistive random access memory device according to an embodiment of the present disclosure.

Detailed Description
Integrated fuses in self-aligned gate end caps for FinFET architectures, and fabrication methods, are described. In the following description, numerous specific details are set forth, such as structural schemes and detailed fabrication methods, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as operations associated with FinFET transistors, are not described in detail so as not to unnecessarily obscure embodiments of the present disclosure. Furthermore, it is to be understood that the various embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale.
In some instances, in the following description, well-known methods and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present disclosure.
Reference throughout the specification to "an embodiment" or "one embodiment" or "some embodiments" means that a particular feature, structure, function, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases "in an embodiment" or "in one embodiment" or "some embodiments" in various places throughout the specification are not necessarily referring to the same embodiment of the present disclosure. Furthermore, the particular features, structures, functions, or characteristics may be combined in any suitable manner in one or more embodiments. For example, a first embodiment may be combined with a second embodiment so long as the particular features, structures, functions, or characteristics associated with the two embodiments are not mutually exclusive.
As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The terms "coupled" and "connected," along with their derivatives, may be used herein to describe functional or structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical, optical, or electrical contact with each other. "Coupled" may be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) physical, electrical, or magnetic contact with each other, and/or that two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship).
The terms "over," "under," "between," and "on" as used herein refer to a relative position of one component or material with respect to other components or materials where such physical relationships are noteworthy. For example, in the context of materials, one material or layer disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies. As used throughout this specification and in the claims, a list of items joined by the terms "at least one of" or "one or more of" can mean any combination of the listed terms.
The term "adjacent" here generally refers to a position of a thing being next to (e.g., immediately adjacent or close to, with one or more things between them) or adjoining another thing (e.g., abutting it).
The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" includes plural references. The meaning of "in" includes "in" and "on."
The term "device" may generally refer to an apparatus according to the context of the usage of that term.
For example, a device may refer to a stack of layers or structures, a single structure or layer, a connection of various structures or layers having active and/or passive elements, and so on. Generally, a device is a three-dimensional structure with a plane along the x-y direction of an x-y-z Cartesian coordinate system and a height along the z direction. The plane of the device may also be the plane of an apparatus that comprises the device.
Unless otherwise specified in the explicit context of their use, the terms "substantially equal," "about equal," and "approximately equal" mean that there is no more than incidental variation between the two things so described. In the art, such variation is typically no more than +/- 10% of a predetermined target value.
The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims are used for descriptive purposes and not necessarily for describing permanent relative positions. For example, the terms "over," "under," "front side," "back side," "top," "bottom," and "on" as used herein refer to a relative position of one component, structure, or material with respect to other referenced components, structures, or materials within a device, where such physical relationships are noteworthy. These terms are employed herein for descriptive purposes only and predominantly within the context of a device z-axis, and therefore may be relative to an orientation of the device. Hence, a first material that is "over" a second material in the context of a figure provided herein may also be "under" the second material if the device is oriented upside-down relative to the context of the figure provided. In the context of materials, one material disposed over or under another may be directly in contact or may have one or more intervening materials. Moreover, one material disposed between two materials may be directly in contact with the two layers or may have one or more intervening layers. In contrast, a first material "on" a second material is in direct contact with that second material. Similar distinctions are to be made in the context of component assemblies.
The term "between" may be employed in the context of the z-axis, x-axis, or y-axis of a device. A material that is between two other materials may be in contact with one or both of those materials, or it may be separated from both of the other two materials by one or more intervening materials. A material "between" two other materials may therefore be in contact with either of the other two materials, or it may be coupled to the other two materials through an intervening material. A device that is between two other devices may be directly connected to one or both of those devices, or it may be separated from both of the other two devices by one or more intervening devices.
One-time programmable (OTP) fuse elements are used in a variety of SoC applications in combination with large arrays of logic transistors. SoC applications utilize all manner of CMOS-compatible functional blocks. As a consequence of the manufacturing and testing process, some components may need to be repaired, tuned, or calibrated, while others may need to be personalized or programmed. Fuse elements can be effectively used to isolate device components such as transistors in logic circuits.
Fuse elements can also be used for electrostatic discharge protection in integrated circuits.
In a CMOS circuit, the fuse element is made from a thick metal layer and is usually located above the active transistors at the interconnect level in the back end, between M1 and M2 or between M2 and M3. However, such a fuse element may require an integrated charge pump for programming, implemented in the logic transistors below the interconnect levels. Such a charge pump requires additional area on the chip. A typical plan dimension of a fuse element on an IC chip is between 1 and 2 microns, and the fuse element requires a programming voltage that can be quite high, for example up to 4 V. High programming voltages entail high current requirements in order to break down thick metal structures. This problem can be mitigated by inserting fuses at the transistor level rather than at the interconnect level.
The inventors have found that OTP fuse elements can be successfully integrated with CMOS transistors. More specifically, OTP fuse elements can be integrated at the gate level in FinFET architectures. Gate end cap structures can be implemented in a CMOS FinFET architecture to isolate two or more transistors from each other. For example, the metal gate of a first transistor may be isolated from the metal gate of a second transistor by a gate end cap structure located between the two devices.
The fuse element can more specifically be integrated in the gate end cap structure between two immediately adjacent transistors. Such a fuse element occupies the area between two adjacent gate structures, covering an area of approximately 30 nm x 50 nm. The gate end cap integrated fuse structure is advantageous because it can utilize low programming voltages (less than or equal to 1 V), since the fuse is a relatively thin element with a thickness of a few nanometers or less. The low voltage requirement also eliminates the need for a charge pump and additional circuitry. The fuse element has a thickness that allows breakdown at low currents and may not require additional charge pump circuitry. In some embodiments, the fuse element can be as thin as 1 nm.
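As a rough, back-of-the-envelope illustration of why a thin gate-level fuse relaxes the programming requirements, the following sketch applies Ohm's law to two hypothetical fuse geometries. The resistivity and all dimensions are illustrative round numbers chosen for this example; none of them come from the disclosure.

```python
# Back-of-the-envelope sketch (not from the patent): compare the current a
# given programming voltage drives through a thin gate-level fuse versus a
# thick back-end metal fuse. All values are illustrative assumptions.

RHO = 2e-7  # ohm*m, assumed resistivity for a generic thin-film metal

def fuse_current(v_prog, length_m, width_m, thickness_m):
    """Programming current from Ohm's law: I = V / R, with R = rho*L/(W*T)."""
    resistance = RHO * length_m / (width_m * thickness_m)
    return v_prog / resistance

# Gate-level fuse: ~20 nm long, ~30 nm wide, 2 nm thick, 1 V program voltage.
i_gate = fuse_current(1.0, 20e-9, 30e-9, 2e-9)
# Interconnect-level fuse: ~1 um long, ~100 nm wide, 100 nm thick, 4 V.
i_backend = fuse_current(4.0, 1e-6, 100e-9, 100e-9)

print(f"gate-level fuse current: {i_gate * 1e3:.1f} mA")   # ~15 mA
print(f"back-end fuse current:   {i_backend * 1e3:.1f} mA")  # ~200 mA
```

Under these assumed numbers, the thin fuse's small cross-section gives it a much higher resistance, so a modest 1 V supply drives far less current through it than a thick back-end fuse needs at 4 V, consistent with the low-voltage, no-charge-pump advantage described above.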
The inventors have found that a fuse element can be integrated within the gate end cap between two transistors for operational advantages and ease of manufacture. For example, the fuse elements can be fabricated simultaneously with the transistor gates. An additional advantage provided by the gate end cap structure is that the structure is self-aligned with the gate. In an embodiment, the gate end cap structure includes a top portion formed of an etch-resistant material, such as an oxide of a metal such as hafnium, zirconium, or lanthanum, and a bottom portion that includes a material such as silicon nitride, silicon carbide, or silicon oxynitride. Beneficially, the top portion of the gate end cap structure includes an etch-resistant material to minimize damage caused by the multiple etching processes in the transistor fabrication sequence. While etch-resistant materials such as hafnium oxide may be desirable, for example, to gain manufacturing advantages over the entire gate end cap structure, hafnium oxide can also increase the overlap capacitance between transistor gates. Thus, the top portion of the gate end cap structure has a thickness that balances etch selectivity against overlap capacitance requirements, thereby maintaining transistor performance.
An additional advantage provided by gate end cap integrated fuse structures is that fuses can be integrated in the gate end caps between selected transistors and omitted between others. In one embodiment, gate end caps where fuse elements are not required are masked during the fabrication process so that no metal is deposited to form a fuse. In other embodiments, a fuse may be present between two transistors separated by a first distance (e.g., 15 nm-25 nm), but absent between two transistors separated by a distance at least twice the first distance. The separation between the two transistors defines the width of the gate end cap.
According to an embodiment of the present disclosure, a device structure includes a first gate on a first fin of a first transistor and a second gate on a second fin of a second transistor, wherein the second gate is spaced apart from the first gate by a first distance. The device structure includes a fuse spanning the distance and in contact with the first gate and the second gate. A first dielectric is between the first fin and the second fin, wherein the first dielectric is in contact with and below the fuse. A second dielectric is between the first gate and the second gate, wherein the second dielectric is on the fuse.
In some embodiments, a third gate of a third transistor is separated from the first or second transistor by a second distance. In some such embodiments, a third dielectric is between the third fin and the first or second fin, a fourth dielectric is directly on the third dielectric, and there is no fuse between the third and fourth dielectrics. In some embodiments, the second distance is the same as the first distance. In other embodiments, the second distance is at least twice the first distance. Where the second distance is greater than the first distance, two or more different dielectric materials may be present between the third dielectric and the fourth dielectric.
FIG. 1A is a plan view illustration of a device structure 100 over a substrate 101 (not directly visible), where the device structure 100 includes a plurality of transistors 102, 104, and 106. Transistors 102, 104, and 106 include gates 108, 110, and 112, respectively, as shown. Each gate is separated from adjacent gates by a gate end cap structure. The gate end cap structure may include one or more dielectric layers. Typically, gate end cap structures provide electrical isolation between any two transistors; in this disclosure, however, fuse elements are embedded in selected gate end cap structures, as will be discussed further below. Transistors 102 and 104 are separated by dielectric 114 of gate end cap structure 116, as shown in the plan view illustration. In the illustrative embodiment, dielectric 114 extends laterally beyond (along the Z-axis) gate sidewalls 108A and 108B and gate sidewalls 110A and 110B. In the illustrative embodiment, gates 110 and 112 are separated by dielectric 118 of gate end cap structure 120, which extends laterally beyond (along the Z-axis) gate sidewalls 110A and 110B and gate sidewalls 112A and 112B.
Each transistor 102, 104, or 106 may include one or more fins. In the illustrative embodiment, individual transistors 102, 104, and 106 each include a single fin.
For example, transistor 102 includes fin 122, transistor 104 includes fin 124, and transistor 106 includes fin 126.
FIG. 1B is a cross-sectional illustration of device structure 100 through line A-A'. The device structure includes gate 108 on fin 122 and gate 110 on fin 124, wherein gate 108 is separated from gate 110 by a distance S1. Device structure 100 also includes fuse 128 spanning distance S1. Fuse 128 is in contact with gate 108 and gate 110. The gate end cap structure 116 also includes a dielectric 130 between the fins 122 and 124, where the dielectric 130 is in contact with and below the fuse 128. As shown, the dielectric 114 is between the gate 108 and the gate 110, wherein the dielectric 114 is on the fuse 128.
In an exemplary embodiment, each gate includes a gate dielectric and a gate metal that sets the work function of the gate. FIG. 1C is an enhanced cross-sectional illustration of portion 129A of gate 108 on fin 122. As shown, gate 108 includes gate dielectric layer 132 and gate metal 134 on gate dielectric layer 132. In an embodiment, the gate dielectric layer 132 includes a material having a high dielectric constant, or high-K material. Examples of gate dielectric layer 132 include oxygen and one or more of elements such as hafnium, silicon, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, or zinc. Examples of high-K materials that may be used in gate dielectric layer 132 include, but are not limited to, hafnium oxide, hafnium silicon oxide, lanthanum oxide, lanthanum aluminum oxide, zirconium oxide, zirconium silicon oxide, tantalum oxide, titanium oxide, barium strontium titanium oxide, barium titanium oxide, strontium titanium oxide, yttrium oxide, aluminum oxide, lead scandium tantalum oxide, and lead zinc niobate.
Depending on whether the transistor is to be a PMOS or an NMOS transistor, the gate metal 134 may include at least one of a P-type work function metal or an N-type work function metal. Examples of N-type materials include hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, or aluminum carbide. Examples of P-type materials include ruthenium, palladium, platinum, cobalt, nickel, or conductive metal oxides such as ruthenium oxide.
An enhanced cross-sectional illustration of fuse 128 and a portion 129B of gate end cap structure 116 is shown in FIG. 1D. As shown in 129B, fuse 128 occupies the space between dielectric 114 and dielectric 130 within gate end cap structure 116. Dielectric 114 and dielectric 130 have a vertical separation S2. A portion of the space between dielectric 114 and dielectric 130 within gate end cap structure 116 also includes gate dielectric layer 132. As shown, fuse 128 is between the gate dielectric layer 132 adjacent to dielectric 114 and the gate dielectric layer 132 adjacent to dielectric 130. Fuse 128 has a vertical thickness TF less than distance S2 due to the presence of gate dielectric layer 132 adjacent to dielectric 114 and adjacent to dielectric 130. In an embodiment, gate dielectric layer 132 has a thickness less than or equal to half the thickness of vertical separation S2.
As shown, gate dielectric layer 132 has a thickness that is less than half the thickness of vertical separation S2.
In the illustrative embodiment, dielectric 130 has an uppermost surface 130A and dielectric 114 has a lowermost surface 114A, where the uppermost surface 130A and the lowermost surface 114A are substantially flat. In some such embodiments, S2 is substantially uniform. In the illustrative embodiment, fuse 128 has a lateral width WF, where WF depends on spacing S1 and on the thickness of gate dielectric layer 132. In an embodiment, the spacing S1 is between 15 nm and 25 nm. As shown, WF is larger than S1 by twice the thickness TD of gate dielectric layer 132. WF and TF can be advantageously selected to obtain a fuse size capable of carrying a maximum current prior to rupture. In an embodiment, fuse 128 has a thickness TF of at least 2 nm.
Fuse 128 includes the material of gate metal 134. In embodiments where gate metal 134 includes multiple layers, fuse 128 may include one or more of the multiple layers, depending on the thickness of each layer. For example, a fuse may include one or more metal layers whose total thickness is limited to thickness TF. In other embodiments, if the lowermost layer of the plurality of layers has a thickness greater than TF, the fuse includes the material of the lowermost layer directly adjacent to gate dielectric layer 132. In some embodiments where gate metal 134 includes a work function layer and a fill metal, fuse 128 may include the work function layer and no fill metal.
In an embodiment, dielectric 130 and dielectric 114 have lateral widths (in the X direction) that are not equal to each other. For example, as shown, the lateral width of dielectric 114 is smaller than the lateral width of dielectric 130. In the illustrative embodiment, in which the sidewalls of dielectrics 130 and 114 are substantially perpendicular to surfaces 130A and 114A, dielectric 130 has a lateral width WD that is greater than the lateral width S1 of dielectric 114. In some such embodiments, WD is larger than S1 by twice the separation S2. In other embodiments, the dielectric 114 may have a lateral width S1 that varies with height from the lowermost surface 114A. In some embodiments, S1 may have a maximum value substantially equal to or greater than WD.
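Collecting the planar-case dimensions described above into explicit relations, with TD denoting the thickness of gate dielectric layer 132 (a restatement of the text for clarity, not additional disclosure):

TF = S2 - 2*TD (vertical fuse thickness), WF = S1 + 2*TD (lateral fuse width), and WD = S1 + 2*S2 (lateral width of dielectric 130),

subject, in the embodiments described, to TF being at least 2 nm and S1 being between 15 nm and 25 nm.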
The other of gate 108 or gate 110, not connected to the transistor, may be at ground potential. In other examples, fuse 128 has an irregular shape, wherein the fuse substantially matches the shape of dielectric 114 and dielectric 130. For example, as shown in FIG. 1F, dielectric 130 has a non-planar uppermost surface 130A. The non-planar surface 130A may be curved in cross-section, as shown. In some such embodiments, dielectric 114 has a curved shape, wherein portions of the curved shape substantially match the curved portions of dielectric surface 130A. In embodiments where surfaces 114A and 130A are not planar, separation S2 may not be uniform. As shown, the spacing between dielectric 130 and dielectric 114 has a minimum separation S2. In an embodiment, the fuse has a thickness TF of at least 2 nm. As shown, fuse 128 is continuous between gate 108 and gate 110. Referring again to FIG. 1B, in an embodiment, the dielectric 130 includes silicon and one or more of oxygen, nitrogen, and carbon. In an embodiment, the dielectric 114 includes one or more of hafnium, titanium, tantalum, lanthanum, aluminum, zirconium, barium, strontium, yttrium, lead, scandium, niobium, or zinc, and oxygen. The dielectric 114 is appropriately selected to enable fabrication of the gate end cap structure 116 and to provide sufficient vertical thickness to isolate the gate 110 from the gate 108. As shown, transistor 106 is electrically isolated from transistor 104 by gate end cap structure 120. As shown, gate 110 and gate 112 are spaced apart by a distance S3. In an embodiment, S3 is at least 50 nm. In the illustrative embodiment, gate end cap structure 120 includes dielectric 138 between fins 124 and 126, dielectric 140 within dielectric 138, dielectric 118 over dielectric 140, and a liner layer 142 between and in contact with dielectric 118 and dielectric 140. Dielectric 140 has an uppermost surface 140A and dielectric 138 has an uppermost surface 138A, wherein surfaces 140A and 138A may be coplanar or at different levels. In the illustrative embodiment, surface 140A extends above surface 138A to obtain structural advantages as described below. Dielectric 138 may have a lateral thickness (in the X direction) that is greater than the lateral thickness of dielectric 118 and is indicative of the process operations used to fabricate gate end cap structure 120. FIG. 1G is an enhanced cross-sectional illustration of a portion of gate end cap structure 120 inside block 143. The liner layer 142 may extend over a portion of the dielectric surface 140A or over the entire surface 140A, as shown. The liner layer 142 has a thickness TV. When surface 140A extends above surface 138A, liner layer 142 may also be adjacent to dielectric sidewalls 140B, as shown. In examples where the liner layer 142 is adjacent to the dielectric sidewalls 140B, the dielectric 118 may be adjacent to the portion of the liner layer 142 that is directly adjacent to the sidewalls 140B, as shown. Liner layer 142 has a lowermost surface 142A, which may be at a level substantially equal to dielectric surface 118A, or at a level higher or lower than dielectric surface 118A. In the illustrative embodiment, liner surface 142A is substantially aligned with lowermost dielectric surface 118A. In an embodiment, dielectric surface 118A is above dielectric surface 138A by a distance substantially equal to the thickness of liner layer 142, as shown.
In some such embodiments, portions of gate 110 are in the space between dielectric surface 138A and dielectric surface 118A. In the illustrative embodiment, dielectric sidewalls 138B extend beyond sidewalls 118B. In some embodiments where dielectric 118 is adjacent to a portion of liner layer 142 that is directly adjacent to sidewall 140B, dielectric sidewall 138B extends beyond sidewall 118B by a distance substantially equal to the thickness of liner layer 142, as shown, for example. Referring again to FIG. 1B, the liner layer 142 may be substantially symmetrical about the dielectric 140, as shown. In some embodiments, liner layer 142 may be asymmetric with respect to dielectric 140 relative to dielectric surface 138A. For example, portions of liner layer 142 may not be present on sidewall 140C but rather on sidewall 140B. In other examples, liner layer 142 may be on sidewalls 140B and 140C, but at different heights relative to dielectric surface 138A. As shown, device structure 100 also includes a dielectric 144 adjacent to fins 122, 124, and 126. The height of the uppermost surfaces of fins 122 and 124 relative to dielectric surface 144A can be determined by selecting an appropriate vertical thickness of dielectric 144. As shown, the device structure also includes a dielectric 146 below the dielectric 144 and adjacent to the fins 122, 124 and 126. Dielectric 146 is also directly below dielectrics 130 and 138. In an embodiment, dielectric 140 includes the same or substantially the same material as dielectric 130, and dielectric 118 includes the same or substantially the same material as dielectric 114. In an embodiment, dielectric 144 includes the same or substantially the same material as dielectric 140. In an embodiment, the dielectric 140 includes silicon and one or more of nitrogen, oxygen, or carbon. In an embodiment, the dielectric 146 includes silicon and one or more of nitrogen, oxygen, or carbon. Dielectric 146 may be different in composition from dielectric 144; for example, dielectric 144 may have a greater density than dielectric 146. In various embodiments, the liner layer 142 includes silicon and oxygen. In an embodiment, the substrate 101 includes materials such as monocrystalline silicon, polycrystalline silicon, and silicon-on-insulator (SOI), as well as substrates formed from other semiconductor materials such as III-V materials. In some embodiments, substrate 101 may also include semiconductor dopants, depending on the desired MOS characteristics. In an embodiment, fins 122, 124 and 126 comprise the same material as substrate 101, such as single crystal silicon or one or more layers of III-V materials. In an exemplary embodiment, gates 110 and 112 each comprise the same material as gate 108. In some examples, a first pair of transistors may be separated by a first gate end cap structure that includes a fuse, and an adjacent second pair of transistors may be separated by a second gate end cap structure that does not include a fuse, wherein the lateral widths of the first and second gate end cap structures are substantially equal. FIG. 2A is a cross-sectional illustration of a device structure 200 in which transistors 104 and 202 are separated by a gate end cap structure 204. As shown, transistor 202 includes gate 206 over fin 208. Gate 206 is spaced from gate 110 by separation S4, where S4 is equal to the width of gate end cap structure 204.
In the illustrative embodiment, gate end cap structure 204 includes a dielectric 210 between fins 124 and 208 and a dielectric 212 on dielectric 210, and there is no fuse between dielectrics 210 and 212. In an embodiment, S4 is between 15 nm and 25 nm. In the illustrative embodiment, S4 is substantially equal to WD. In other embodiments, S4 is greater or less than WD. In an embodiment, gate 206 includes the same material as gate 110 or gate 108. In an embodiment, dielectric 210 includes the material of dielectric 130 and dielectric 212 includes the material of dielectric 118. In an embodiment, fin 208 comprises the same material as fin 122. In the examples shown in FIGS. 1A and 2A, the gates 108, 110 and 206 are over a single fin. In other examples, transistors 102, 104, 106, and 202 include a plurality of fins, wherein the plurality includes 2-10 fins. FIG. 2B is a cross-sectional illustration of a device structure 250 in which each transistor includes two fins. Device structure 250 has one or more characteristics of device structure 100. In the illustrative embodiment, device structure 250 includes transistors 102, 104, and 106, where transistors 102, 104, and 106 include gates 108, 110, and 112, respectively. As shown, gate 108 is over fins 122A and 122B, gate 110 is over fins 124A and 124B, and gate 112 is over fins 126A and 126B. Gate end cap structure 116 is between fins 122B and 124A, and gate end cap structure 120 is between fins 124B and 126A. In the illustrative embodiment, there is no gate end cap structure between fins 122A and 122B, between fins 124A and 124B, or between fins 126A and 126B. In an embodiment, the spacing between fins 122A and 122B, between fins 124A and 124B, and between fins 126A and 126B is between 5 nm and 25 nm. FIG. 3 illustrates a method 300 of fabricating a device structure, such as the device structure 100 described in connection with FIG. 1B, in accordance with an embodiment of the present disclosure. Method 300 begins at operation 310, where a dielectric liner is formed adjacent to the first and second fins. The method 300 continues at operation 320, where a dielectric layer is deposited on the dielectric liner between the first fin and the second fin. The method 300 continues at operation 330, in which the dielectric layer is recessed below the first and second fin structures. The method 300 continues at operation 340, where a liner layer is formed on the dielectric liner and on the dielectric layer. The method 300 continues at operation 350, where a mask dielectric is formed on the liner layer. The method 300 continues at operation 360, where the liner layer is removed from over the first and second fins. The method 300 continues at operation 370, in which the dielectric liner is recessed below the uppermost surfaces of the first and second fins. The method 300 continues at operation 380 with forming a dummy gate structure on the first fin, on the second fin, and on the dielectric liner, and forming a dielectric barrier adjacent to the dummy gate structure. The method 300 continues at operation 390, wherein the dummy gate structure is removed and the dielectric liner and dielectric layer between the first fin and the second fin are exposed. Method 300 continues at operation 395, where the liner layer is removed from the region between the dielectric layer and the mask dielectric to form voids.
The method ends at operation 399 by forming a gate on the first fin and on the second fin, wherein forming the gate includes depositing metal in the void to form a fuse structure. FIG. 4A is a cross-sectional illustration of a plurality of fin structures patterned over substrate 101 in accordance with an embodiment of the present disclosure. As shown, fin structure 400 includes sacrificial pillar 402 over fin 122, fin structure 404 includes sacrificial pillar 406 over fin 124, and fin structure 408 includes sacrificial pillar 409 over fin 126. In an embodiment, a blanket layer of amorphous silicon is deposited over substrate 101. Portions of substrate 101 and the amorphous silicon are patterned and etched to form fin structures 400, 404 and 408. In an embodiment, the substrate 101 includes materials such as monocrystalline silicon, polycrystalline silicon, and silicon-on-insulator (SOI), as well as substrates formed from other semiconductor materials such as III-V materials. In some embodiments, substrate 101 may also include semiconductor dopants, depending on the desired MOS characteristics. FIG. 4B is a cross-sectional illustration of the structure in FIG. 4A after dielectric 410 is formed. In an embodiment, dielectric 410 includes the same or substantially the same material as dielectric 146. In an embodiment, dielectric 410 is blanket deposited on fin structures 400, 404 and 408 and on substrate 101. Dielectric 410 forms a substantially conformal layer around fin structures 400, 404 and 408. The deposition process may include a plasma enhanced chemical vapor deposition (PECVD), physical vapor deposition (PVD), or chemical vapor deposition (CVD) process. In an embodiment, the dielectric 410 includes silicon and nitrogen and/or carbon. In embodiments where the fin structures 400 and 404 are close to each other, a gap 411 is formed between the dielectric 410 formed on each of the fin structures 400 and 404. In the illustrative embodiment, a larger gap 413 is formed between fin structures 404 and 408. FIG. 4C shows the structure of FIG. 4B after forming dielectric 414 on dielectric 410. In an embodiment, the dielectric 414 comprises the same or substantially the same material as the dielectric 130. In an exemplary embodiment, dielectric 414 includes silicon and one or more of oxygen, nitrogen, or carbon. In an embodiment, dielectric 414 is blanket deposited on dielectric 410. The deposition process may include a PECVD, PVD, or CVD process. In an embodiment, the dielectric 414 includes silicon and nitrogen and/or carbon. In the illustrative embodiment, dielectric 414 fills gap 411 and forms a liner on dielectric 410 in gap 413. FIG. 4D shows the structure of FIG. 4C after forming dielectric 416 on dielectric 414. In an embodiment, dielectric 416 is blanket deposited into gap 413. The deposition process may include a PECVD, PVD, or CVD process. In an embodiment, the dielectric 416 includes silicon and at least one of oxygen, nitrogen, and carbon, and has a flowable composition. Dielectric 416 is suitable for filling small and large openings. After the deposition process, the dielectric 416 is annealed at a temperature above 400°C. In an embodiment, the dielectric 416 is planarized.
In an embodiment, the dielectric 416 is planarized using a chemical mechanical polishing (CMP) process, which forms an uppermost surface 416A that is substantially coplanar with the uppermost surface 414A of the dielectric 414. FIG. 4E shows the structure of FIG. 4D after a process of selectively recessing dielectrics 414 and 416 relative to dielectric 410. In an embodiment, a combination of wet chemical etching and plasma etching is used to recess the dielectrics 414 and 416. In an embodiment, a wet chemical process is used to selectively recess dielectric 414 relative to dielectric 416. In an embodiment, the dielectric 414 is recessed below the level of the uppermost surface of the fins 122 or 124. In an embodiment, the dielectric 416 is recessed below the level of the uppermost surface of the fins 122 or 124, but remains above the level of the dielectric 414. In other embodiments, uppermost dielectric surface 414A and dielectric surface 416A are at the same level. Recessing dielectrics 414 and 416 reopens gaps 411 and 413. FIG. 4F shows the structure of FIG. 4E after liner layer 418 is formed. In an embodiment, the liner layer 418 comprises the same or substantially the same material as the liner layer 142 described above. In an embodiment, the liner layer 418 is blanket deposited on the upper and sidewall surfaces of the dielectric 410, on the uppermost dielectric surface 414A, and on the uppermost dielectric surface 416A. The deposition process may include a PECVD, PVD, or CVD process. In an embodiment, the liner layer 418 is deposited to a thickness that will determine the thickness of the fuse fabricated in a downstream operation. The liner layer 418 is deposited to a thickness chosen to accommodate the desired thicknesses of the gate dielectric layer and the gate material used to form the fuse. For example, the portion of liner layer 418 deposited over dielectric 414 between fins 122 and 124 will determine the maximum thickness of the fuse formed between fins 122 and 124. In an embodiment, the liner layer 418 is deposited to a thickness of at least 3 nm. In some embodiments, the deposition thickness of the liner layer 418 depends on the lateral width WD of the dielectric 414 between the fins 122 and 124. The process used to remove liner layer 418 will depend on the thickness of liner layer 418 and its lateral extent over a dielectric such as dielectric 414 or dielectric 416. As shown, liner layer 418 reduces the lateral width (in the X direction) of gap 411 and gap 413. In an embodiment, the lateral widths of gaps 411 and 413 determine the width of the dielectric that will be deposited in subsequent operations. FIG. 4G shows the structure of FIG. 4F after the formation of the dielectric 420 on the liner layer 418 and after a planarization process. In an embodiment, the dielectric 420 comprises the same or substantially the same material as the dielectric 114 or 118 described above. In an embodiment, dielectric 420 is blanket deposited onto liner layer 418. The deposition process may include a PECVD, PVD, CVD, or atomic layer deposition (ALD) process. In an embodiment, the dielectric 420 is planarized.
In an embodiment, a chemical mechanical polishing (CMP) process is used to planarize dielectric 420, which forms uppermost surface 420A that is substantially coplanar with uppermost surface 410A of dielectric 410. The upper portion of the liner layer 418 above the dielectric 410 is also removed during the CMP process. The planarization process does not expose sacrificial pillars 402, 406 or 409. FIG. 5A shows the structure of FIG. 4G after the process of recessing the dielectric 410. In an embodiment, dielectric 410 is selectively recessed relative to dielectric 420, liner layer 418, and fins 122, 124, and 126 by a wet chemical etch process. In an embodiment, the dielectric 410 is recessed by utilizing a chemistry that is selective with respect to the liner layer 418. In the illustrative embodiment, the dielectric 410 is recessed to the level of the lowermost surface of the dielectric 414 to prevent undercutting of the dielectric 410 below the dielectric 414. Dielectric 420 formed over dielectric 414 between fins 122 and 124 constitutes a self-aligned gate end cap structure 426A, and dielectric 420 formed over dielectric 414 between fins 124 and 126 constitutes a self-aligned gate end cap structure 426B. As shown, sacrificial pillars 402, 406 and 409 are also removed. In an embodiment, the dielectric 410 is recessed to expose the uppermost surfaces of the sacrificial pillars 402, 406 and 409. The sacrificial pillars 402, 406 and 409 may be etched by plasma etching or wet chemical processes. The process used to etch sacrificial pillars 402, 406 and 409 does not affect fins 122, 124 or 126. In an embodiment, the dielectric 410 is recessed after removal of sacrificial pillars 402, 406 and 409. As shown, the process of removing sacrificial pillars 402, 406 and 409 and recessing dielectric 410 creates openings 421, 423 and 425. In an embodiment, dielectric 422 is blanket deposited into openings 421, 423 and 425 after the process of recessing dielectric 410 and etching sacrificial pillars 402, 406 and 409. In an embodiment, dielectric 422 includes the same or substantially the same material as dielectric 416. Dielectric 422 may be blanket deposited over exposed fins 122, 124, and 126, over exposed portions of liner layer 418, and over dielectric 410. The deposition process may include a PECVD, PVD, or CVD process. In an embodiment, the dielectric 422 includes silicon and at least one of oxygen, nitrogen, and carbon, and has a flowable composition. Dielectric 422 is suitable for filling small and large openings, such as the area between dielectric 414 and fins 122, 124 and 126. After the deposition process, the dielectric 422 may be annealed at a temperature above 400°C. The annealing process makes the dielectric 422 more resistant to the wet etch process that will be used to remove the liner layer 418 in downstream operations. In an embodiment, the dielectric 422 is planarized by a CMP process after deposition. The dielectric 422 is then recessed below the uppermost surfaces of the fins 122, 124 and 126, selectively relative to the liner layer 418, the dielectric 420 and the fins 122, 124 and 126. The recess of the dielectric 422 defines the fin height of the non-planar transistors to be formed. FIG. 5B is a plan-view cross-sectional illustration of the structure in FIG. 5A. In the illustrative embodiment, openings 421, 423 and 425 expose fins 122, 124 and 126.
The lateral extent (in the Z direction) of the dielectric 420 is shown in the plan-view illustration. FIG. 6A shows the structure of FIG. 5B after forming portions of the dummy gate structures in openings 421, 423, and 425. In an embodiment, a dummy gate dielectric layer is blanket deposited into openings 421, 423 and 425, on liner layer 418, and on dielectric 420. The dummy gate material is then deposited on the dummy gate dielectric layer. In an embodiment, the dummy gate material and dummy gate dielectric layer are patterned to form dummy gate structure 427, as shown. The dummy gate structure 427 defines the lateral extent of the gate that will be formed in downstream operations. After the dummy gate structures are formed, dielectric spacers 430 may be formed adjacent to the dummy gate structures 427. In an embodiment, a dielectric spacer layer is blanket deposited in openings 421, 423 and 425 and then patterned. In an embodiment, after the formation of dielectric spacers 430, epitaxially doped source or drain structures 431, 433 and 435 are formed on fins 122, 124 and 126, respectively. FIG. 6B shows the structure of FIG. 6A after dielectric 437 is formed in openings 421, 423 and 425. Dielectric 437 may be blanket deposited in openings 421, 423 and 425 on epitaxially doped source or drain structures 431, 433 and 435, adjacent to dielectric spacers 430. In the illustrative embodiment, dielectric 437 is also deposited on dielectric 420, liner layer 418, and dielectric 422 (not shown). The deposition process may include a PECVD, PVD, or CVD process. In an embodiment, the dielectric 437 includes silicon and at least one of oxygen, nitrogen, and carbon. In an embodiment, the dielectric 437 is planarized after deposition. FIG. 6C shows the structure of FIG. 6B after the dummy gate structure 427 has been removed. In an embodiment, the dummy gate structures 427 are removed by a combination of plasma etching and wet chemical etching processes. After the dummy gate structure 427 is removed, the dummy gate dielectric layer below the dummy gate structure 427 is also removed. The removal of the dummy gate dielectric layer under dummy gate structure 427 is performed selectively with respect to fins 122, 124 and 126, dielectric 420, dielectric 437 and liner layer 418. FIG. 7A shows a cross-sectional view of the structure of FIG. 6C along line A-A' after the process of removing the liner layer 418. In an embodiment, the liner layer 418 is removed by a wet chemical process prior to forming the gate. As shown, liner layer 418 is removed in the region between dielectric 420 and dielectric 414, between openings 421 and 423. In an embodiment, the wet chemical process includes immersing the structure of FIG. 6C in an acid. In the illustrative embodiment, capillary action of the wet etchant enables penetration into the confined region between dielectric 420 and dielectric 414. In an embodiment, the wet etch removes liner layer 418 between dielectric 420 and dielectric 414 from openings 421 and 423. Removal of liner layer 418 between dielectric 420 and dielectric 414 and between fins 122 and 124 forms gate end cap structure 116 with gap 438. Gap 438 connects opening 421 with opening 423. The liner layer on the sidewalls of the dielectric 420 is also removed. In the area between fins 124 and 126, liner layer 418 adjacent to portions of dielectric 416 is removed.
In the illustrative embodiment, liner layer 418 is removed from the area between dielectric 420 and dielectric 414 that is directly adjacent to dielectric 416. As shown, liner layer 418 is not removed from sidewalls 416A and 416B or from top surface 416C. Dielectric 420 is anchored to dielectric 437 (not shown in the cross-sectional illustration). FIG. 7B is a plan view of the structure of FIG. 7A. In the illustrative embodiment, after removing the liner layer from the sidewalls of dielectric 420, the uppermost surface of dielectric 414 is exposed. In an illustrative embodiment, the Z-direction portion of liner layer 418 (within dashed box 439) may be removed from below dielectric 437. FIG. 8A shows the structure of FIG. 7A after forming fuse 128 and multiple transistor gates. In an embodiment, gate material 440 is deposited in openings 421, 423 and 425. In an embodiment, depositing gate material 440 includes depositing a gate dielectric layer in openings 421, 423, and 425 and depositing a gate electrode material on the gate dielectric layer. FIG. 8B is an enhanced cross-sectional illustration of block 441 in FIG. 8A. As shown, gate dielectric layer 132 is deposited to a thickness TD that is less than thickness S2 of gap 438. Gate dielectric layer 132 and gate electrode material 134 may be deposited by an atomic layer deposition process to ensure that gap 438 is filled. Gate dielectric layer 132 is deposited in gap 438 and adheres to lowermost surface 420A of dielectric 420 and uppermost surface 414A of dielectric 414. As shown, gate dielectric layer 132 is deposited in gap 438 between lowermost surface 420A and uppermost surface 414A. The gate dielectric layer 132 deposition process leaves a gap portion 438A having a vertical thickness TF. Gate electrode material 134 is deposited to fill gap portion 438A and form fuse 128. In some embodiments, gate electrode material 134 includes one or more layers, wherein the lowermost layer is thicker than TF. In some such embodiments, any layers deposited after the lowermost layer of the one or more layers are not present in gap 438. Referring again to FIG. 8A, a gate dielectric layer is also deposited on fins 122 and 124. In an embodiment, the gate electrode material is deposited on all exposed surfaces of the gate dielectric layer. Gate material 440 extends continuously between openings 421 and 423. The portion of the gate material within the gap 438 is the fuse 128 described in conjunction with FIGS. 1B, 1D and 1E. Gate material 440 is also deposited in gaps 443 and 445 adjacent to dielectric sidewalls 416A and 416B. Gate electrode material is deposited on the gate dielectric layer formed adjacent to the dielectric sidewalls 416A and 416B. However, since liner layer 418 remains between the dielectrics 416 and 420, between the openings 423 and 425, there is no gate material 440 between the dielectrics 416 and 420. In the illustrative embodiment, gate material 440 in opening 425 is not connected to gate material 440 in opening 423. In some such embodiments, no fuse is formed. After depositing the gate material 440, a planarization process is performed. The planarization process may include a CMP process. As shown, after planarization, gate material 440 may be recessed below uppermost surface 420A. In applications where there is no fuse connecting the two gates, the process of recessing the gate provides electrical isolation between the gate of one transistor and the gate of the other transistor.
The recess process completes the formation of transistors 102, 104 and 106 in openings 421, 423 and 425, respectively. Additional bridge conductors can be formed between two or more transistors. In the illustrative embodiment, bridge conductor 442 is formed to connect gate material 440 of transistor 104 with gate material 440 of transistor 106. In an embodiment, a metal layer is deposited on gate material 440 and on dielectric 420. In an embodiment, the metal layer is patterned to form bridge conductor 442, which bridges gate material 440 of transistor 104 with gate material 440 of transistor 106. As shown, in some regions, the metal layer may be patterned to form an individual conductor 444 over the gate material 440 of the transistor 102. The individual conductor 444 is not connected to another transistor. In some embodiments, a potential difference is applied between the individual conductor 444 and the bridge conductor 442 to "blow" the fuse 128. The applied potential drives a current from one conductor, through fuse 128 between gates 108 and 110, to the other conductor until the fuse disintegrates. In some such embodiments, gate material 440, and in particular the gate electrode material, may become discontinuous at least within portions of gap 438. In some applications, transistor pairs that are not separated by wide gate end cap structures (e.g., wider than 50 nm) may require electrical isolation. In applications where the spacing between gates is between 15 nm and 25 nm, method 300 can be modified to form gate end cap structures without fuses. FIG. 9A is a cross-sectional illustration of a plurality of fin structures after the processing operations described in conjunction with FIGS. 4A-4F. In the illustrative embodiment, device structure 450 includes fin structures 122, 124, 452, and 454. As shown, the fin structures 122, 124, 452 and 454 are spaced apart from each other by substantially the same distance. As shown, dielectric 410 encapsulates fin structures 122, 124, 452, and 454 in a substantially uniform manner, and liner layer 418 is blanket deposited on dielectric 410. In the illustrative embodiment, mask layer 456 is formed, wherein mask layer 456 extends over fin structures 122 and 124 and partially over fin structure 452. In other embodiments, the mask layer may extend laterally beyond fin structure 452 in a direction toward fin structure 454, but not over the dielectric 414 between fin structures 452 and 454. The mask layer may be formed by photolithographic techniques and may include a photosensitive material. FIG. 9B shows the structure of FIG. 9A after a process of etching and removing the liner layer 418 not covered by mask layer 456. In an embodiment, a plasma etch process is performed to selectively etch liner layer 418 with respect to dielectric 410 and dielectric 414. The etching may include plasma etching, wet chemical etching, or a combination thereof. As shown, liner layer 418 is removed from over dielectric 414 between fin structures 452 and 454. FIG. 9C shows the structure of FIG. 9B after removal of mask layer 456, deposition of dielectric 420 in the openings between fin structures 400 and 404, between fin structures 404 and 452, and between fin structures 452 and 454, and a subsequent planarization process. In an embodiment, the removal of the mask material exposes the liner layer 418 in the openings 458 and 460. In an embodiment, the process for depositing the dielectric 420 and the method for planarizing the dielectric 420 are the same or substantially the same as those described above. In the illustrative embodiment, dielectric 420 is deposited on liner layer 418 in openings 458 and 460.
In some such embodiments, the dielectric 420 is deposited directly on the dielectric 414 in the opening 462. The absence of liner layer 418 between dielectrics 414 and 420 in opening 462 will prevent a fuse from forming between fin structures 452 and 454. In an embodiment, the dielectric 420 is planarized. In an embodiment, the planarization process includes a CMP process. As shown, the CMP process removes the liner layer 418 from over the dielectric 410. In an embodiment, the operations described in conjunction with FIGS. 5A-8B may be used to process the structure in FIG. 9C to obtain a first pair of transistors and a second pair of transistors, where the first pair of transistors is connected by a fuse and the second pair of transistors has no fuse connecting them. FIG. 10 illustrates a computing device 1000 in accordance with an embodiment of the present disclosure. As shown, computing device 1000 houses motherboard 1002. Motherboard 1002 may include a number of components including, but not limited to, processor 1001 and at least one communication chip 1004 or 1005. Processor 1001 is physically and electrically coupled to motherboard 1002. In some embodiments, the communication chip 1005 is also physically and electrically coupled to the motherboard 1002. In other embodiments, the communication chip 1005 is part of the processor 1001. Depending on its application, computing device 1000 may include other components that may or may not be physically and electrically coupled to motherboard 1002. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, graphics processors, digital signal processors, cryptographic processors, chipset 1006, antennas, displays, touchscreen displays, touchscreen controllers, batteries, audio codecs, video codecs, power amplifiers, global positioning system (GPS) devices, compasses, accelerometers, gyroscopes, speakers, cameras, and mass storage devices (such as hard drives, compact discs (CDs), digital versatile discs (DVDs), and so forth). The communication chip 1005 enables wireless communication for transferring data to and from the computing device 1000. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communication channels, etc., that can communicate data through a non-solid medium using modulated electromagnetic radiation. The term does not imply that the associated devices do not contain any wires, although in some embodiments they may not. The communication chip 1005 may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), Long Term Evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth and derivatives thereof, as well as any other wireless protocols designated as 3G, 4G, 5G and beyond. Computing device 1000 may include a plurality of communication chips 1004 and 1005. For example, a first communication chip 1005 may be dedicated to shorter-range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 1004 may be dedicated to longer-range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO and others. The processor 1001 of the computing device 1000 includes an integrated circuit die packaged within the processor 1001.
In some embodiments, the integrated circuit die of processor 1001 includes one or more interconnect structures, non-volatile memory devices, and fuse-coupled transistors, such as device structure 100 depicted in FIG. 1B. Referring again to FIG. 10, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to convert the electronic data into other electronic data that may be stored in registers and/or memory. The communication chip 1005 also includes an integrated circuit die packaged within the communication chip 1005. In another embodiment, the integrated circuit die of the communication chips 1004, 1005 includes one or more interconnect structures, non-volatile memory devices, capacitors, and fuse-coupled transistors, such as device structure 100 depicted in FIG. 1B. Referring again to FIG. 10, computing device 1000 may include other components that may or may not be physically and electrically coupled to motherboard 1002, depending on its application. These other components may include, but are not limited to, volatile memory (e.g., DRAM) 1007, 1008, non-volatile memory (e.g., ROM) 1010, graphics CPU 1012, flash memory, global positioning system (GPS) device 1013, compass 1014, chipset 1006, antenna 1016, power amplifier 1009, touchscreen controller 1011, touchscreen display 1017, speaker 1015, camera 1003 and battery 1018 (as shown), and other components such as digital signal processors, cryptographic processors, audio codecs, video codecs, accelerometers, gyroscopes, and mass storage devices (e.g., hard drives, solid state drives (SSDs), compact discs (CDs), digital versatile discs (DVDs), and so forth). In other embodiments, any of the components housed within computing device 1000 and discussed above may comprise a separate integrated circuit memory die that includes one or more arrays of NVM devices. In various embodiments, computing device 1000 may be a laptop, netbook, notebook, ultrabook, smartphone, tablet, personal digital assistant (PDA), ultra-mobile PC, mobile phone, desktop computer, server, printer, scanner, monitor, set-top box, entertainment control unit, digital camera, portable music player or digital video recorder. In other embodiments, computing device 1000 may be any other electronic device that processes data. FIG. 11 illustrates an integrated circuit (IC) structure 1100 that includes one or more embodiments of the present disclosure. The integrated circuit (IC) structure 1100 is an intervening substrate for bridging a first substrate 1102 to a second substrate 1104. For example, the first substrate 1102 may be an integrated circuit die. For example, the second substrate 1104 may be a memory module, a computer motherboard, or another integrated circuit die. Typically, the purpose of an integrated circuit (IC) structure 1100 is to expand connections to a wider pitch or to reroute connections to different connections. For example, the integrated circuit (IC) structure 1100 can couple an integrated circuit die to a ball grid array (BGA) 1107, which in turn can be coupled to the second substrate 1104. In some embodiments, the first substrate 1102 and the second substrate 1104 are attached to opposite sides of the integrated circuit (IC) structure 1100. In other embodiments, the first substrate 1102 and the second substrate 1104 are attached to the same side of the integrated circuit (IC) structure 1100.
In still other embodiments, three or more substrates are interconnected by way of the integrated circuit (IC) structure 1100. The integrated circuit (IC) structure 1100 may be formed of an epoxy resin, a glass fiber reinforced epoxy resin, ceramic materials, or polymeric materials such as polyimide. In other embodiments, the integrated circuit (IC) structure may be formed of alternating rigid or flexible materials, which may include the same materials described above for use in semiconductor substrates, such as silicon, germanium, and other group III-V and group IV materials. The integrated circuit (IC) structure may include metal interconnects 1108 and vias 1110, including but not limited to through-silicon vias (TSVs) 1112. The integrated circuit (IC) structure 1100 may also include embedded devices 1114, including both passive and active devices. Such embedded devices 1114 include transistors, resistors, inductors, and fuses coupled to transistors, such as in the device structure 100 described in FIGS. 1B-2B. Referring again to FIG. 11, the integrated circuit (IC) structure 1100 may also include embedded devices 1114 such as one or more resistive random access devices, sensors, and electrostatic discharge (ESD) devices. More complex devices such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices may also be formed on the integrated circuit (IC) structure 1100. FIG. 12A shows a cross-sectional illustration of a memory cell 1200 including the transistor 104 described in connection with FIG. 1A and a memory structure 1201 coupled to a contact of the transistor 104. In the illustrative embodiment, memory structure 1201 is coupled to the drain contact of transistor 104. In other embodiments, memory cell 1200 also includes a memory structure, such as memory structure 1201, coupled separately to transistors 102 and 106. FIG. 12B is a cross-sectional illustration of the memory structure 1201 through the line A-A' in the structure of FIG. 12A, in accordance with an embodiment of the present disclosure. As shown, memory structure 1201 includes non-volatile memory element 1202 between drain contact 1203 and interconnect 1204. In other embodiments, there are one or more interconnect levels between the drain contact 1203 and the non-volatile memory element 1202. In the illustrative embodiment, drain contact 1203 is in contact with epitaxial drain structure 433 over fin 124. Non-volatile memory element 1202 may include a magnetic tunnel junction (MTJ) device, a conductive bridge random access memory (CBRAM) device, or a resistive random access memory (RRAM) device. Non-volatile memory elements such as MTJ devices require a nominal critical switching current, which depends on the area of the MTJ device, for magnetization switching. As the size of the MTJ shrinks, the critical switching current required to switch the memory state of the MTJ device also shrinks in proportion to the device area, but shrinking the MTJ presents a number of challenges. Feature scaling of the MTJ device can be relaxed if the transistor connected to the MTJ device can supply an amount of current that exceeds the critical switching current requirement of the MTJ device.
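To make the scaling relationship concrete, the critical switching current is the product of a material-dependent critical current density and the junction area, I_C = J_C × A. As a purely illustrative calculation (neither J_C nor the device dimensions below are taken from this disclosure): assuming a critical current density J_C of 1×10^6 A/cm2, a representative order of magnitude for spin-transfer-torque switching, a 20 nm × 20 nm MTJ has an area A = 4×10^-12 cm2 and therefore requires a critical switching current I_C of roughly 4 µA. Halving each lateral dimension quarters the area, and hence quarters I_C, but the drive transistor must still source a current comfortably above this value for reliable switching.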
In an embodiment, a non-planar transistor, such as transistor 104, that can provide an additional current boost (by increasing drive current) can be advantageously coupled to a non-volatile memory element 1202, such as an MTJ device, to overcome the critical switching current requirement. FIG. 12C shows a cross-sectional illustration of an exemplary non-volatile memory element 1202 including a magnetic tunnel junction (MTJ) material device. In the illustrated embodiment, the MTJ device includes a bottom electrode 1205, a fixed magnet 1206 over the bottom electrode 1205, a tunnel barrier 1208 over the fixed magnet 1206, a free magnet 1210 over the tunnel barrier 1208, and a top electrode 1212 over the free magnet 1210. In an embodiment, dielectric spacers (not shown) laterally surround non-volatile memory element 1202. In an embodiment, the fixed magnet 1206 comprises a material and has a thickness sufficient to maintain a fixed magnetization. For example, the fixed magnet 1206 may include alloys such as CoFe and CoFeB. In an embodiment, the fixed magnet 1206 comprises Co100-x-yFexBy, where X and Y each represent atomic percentages, such that X is between 50 and 80 and Y is between 10 and 40, and the sum of X and Y is less than 100. In an embodiment, X is 60 and Y is 20. In an embodiment, the fixed magnet 1206 is FeB, wherein the concentration of boron is between 10 and 40 atomic percent of the total composition of the FeB alloy. In an embodiment, the fixed magnet 1206 has a thickness between 1 nm and 2.5 nm. In an embodiment, the tunnel barrier 1208 is composed of a material suitable for allowing electrons with majority spins to flow through the tunnel barrier 1208, while at least to some extent preventing electrons with minority spins from passing through the tunnel barrier 1208. Thus, the tunnel barrier 1208 (or spin filter layer) may also be referred to as a tunneling layer for electron currents of a specific spin orientation. In an embodiment, the tunnel barrier 1208 includes a material such as, but not limited to, magnesium oxide (MgO) or aluminum oxide (Al2O3). In an embodiment, the tunnel barrier 1208 comprising MgO has a crystal orientation of (001) and is lattice matched to the fixed magnet 1206 below the tunnel barrier 1208 and the free magnet 1210 above the tunnel barrier 1208. In an embodiment, the tunnel barrier 1208 is MgO and has a thickness between 1 nm and 2 nm. In an embodiment, the free magnet 1210 includes a magnetic material such as Co, Ni, Fe, or an alloy of these materials. In an embodiment, the free magnet 1210 includes magnetic materials such as FeB, CoFe, and CoFeB. In an embodiment, the free magnet 1210 comprises Co100-x-yFexBy, where X and Y each represent atomic percentages, such that X is between 50 and 80 and Y is between 10 and 40, and the sum of X and Y is less than 100. In an embodiment, X is 60 and Y is 20. In an embodiment, the free magnet 1210 is FeB with a concentration of boron between 10 and 40 atomic percent of the total composition of the FeB alloy. In an embodiment, the free magnet 1210 has a thickness between 1 nm and 2.0 nm. In an embodiment, the bottom electrode 1205 includes an amorphous conductive layer. In an embodiment, the bottom electrode 1205 is a topographically smooth electrode. In an embodiment, the bottom electrode 1205 includes a material such as W, Ta, TaN, or TiN. In an embodiment, the bottom electrode 1205 consists of Ru layers interleaved with Ta layers.
In an embodiment, the bottom electrode 1205 has a thickness between 20 nm and 50 nm. In an embodiment, the top electrode 1212 includes a material such as W, Ta, TaN, or TiN. In an embodiment, the top electrode 1212 has a thickness between 30 nm and 70 nm. In an embodiment, the bottom electrode 1205 and the top electrode 1212 are the same metal, such as Ta or TiN. In an embodiment, the MTJ device has a combined total thickness of the individual layers between 60 nm and 100 nm and a width between 10 nm and 50 nm. Referring again to FIG. 12B, in an embodiment, the non-volatile memory element 1202 is a resistive random access memory (RRAM) device operating according to the principle of filamentary conduction. When an RRAM device undergoes an initial voltage breakdown, a filament forms in a layer known as the switching layer. The size of the filament depends on the magnitude of the breakdown voltage, and higher currents can greatly enhance reliable switching between different resistance states in filamentary RRAM devices. In an embodiment, transistor 104, which can provide an additional current boost (by increasing the drive current), can advantageously be coupled to the RRAM device to provide reliable switching operation. FIG. 12D shows a cross-sectional illustration of an exemplary non-volatile memory element 1202 including a resistive random access memory (RRAM) device. In the illustrated embodiment, the RRAM material stack includes a bottom electrode 1214, a switching layer 1216 over the bottom electrode 1214, an oxygen exchange layer 1218 over the switching layer 1216, and a top electrode 1220 over the oxygen exchange layer 1218. In an embodiment, the bottom electrode 1214 includes an amorphous conductive layer. In an embodiment, the bottom electrode 1214 is a topographically smooth electrode. In an embodiment, the bottom electrode 1214 includes a material such as W, Ta, TaN, or TiN. In an embodiment, the bottom electrode 1214 consists of Ru layers interleaved with Ta layers. In an embodiment, the bottom electrode 1214 has a thickness between 20 nm and 50 nm. In an embodiment, the top electrode 1220 includes a material such as W, Ta, TaN, or TiN. In an embodiment, the top electrode 1220 has a thickness between 30 nm and 70 nm. In an embodiment, the bottom electrode 1214 and the top electrode 1220 are the same metal, such as Ta or TiN. Switching layer 1216 may be a metal oxide, for example, including atoms of oxygen and one or more metals such as, but not limited to, Hf, Zr, Ti, Ta, or W. In the case of titanium or hafnium with oxidation state +4, switching layer 1216 has a chemical composition MOX, where O is oxygen and X is, or is substantially close to, 2 (e.g., HfO2 or TiO2). In the case of tantalum with oxidation state +5, switching layer 1216 has a chemical composition M2OX, where O is oxygen and X is, or is substantially close to, 5 (e.g., Ta2O5). In an embodiment, the switching layer 1216 has a thickness between 1 nm and 5 nm. The oxygen exchange layer 1218 acts as a source of oxygen vacancies or as a sink for O2-. In an embodiment, the oxygen exchange layer 1218 is composed of a metal such as, but not limited to, hafnium, tantalum, or titanium. In an embodiment, the oxygen exchange layer 1218 has a thickness between 5 nm and 20 nm. In an embodiment, the thickness of the oxygen exchange layer 1218 is at least twice the thickness of the switching layer 1216.
In an embodiment, the RRAM device has a combined total thickness of the individual layers between 60 nm and 100 nm and a width between 10 nm and 50 nm. Referring again to FIG. 12B, drain contact 1203 is embedded in dielectric 437. Dielectric 1224 is on dielectric 437. In the illustrative embodiment, drain interconnect 1204 and non-volatile memory element 1202 are embedded in dielectric 1224. In an embodiment, drain contact 1203 and drain interconnect 1204 include a liner layer including ruthenium or tantalum and a fill metal such as copper or tungsten. In an embodiment, the dielectric 1224 includes silicon and one or more of nitrogen, oxygen, and carbon, such as silicon nitride, silicon dioxide, carbon-doped silicon nitride, silicon oxynitride, or silicon carbide. Accordingly, one or more embodiments of the present disclosure generally relate to self-aligned gate end caps and fabrication methods for FinFET architectures, and may be used in embedded non-volatile memory and SOC applications. In a first example, a device structure includes a first gate on a first fin and a second gate on a second fin, wherein the second gate is spaced a distance from the first gate. A fuse spans the distance and is in contact with the first gate and the second gate. A first dielectric is between the first and second fins, wherein the first dielectric is in contact with and below the fuse, and a second dielectric is between the first and second gates, wherein the second dielectric is on the fuse. In a second example, for any of the first examples, the first gate and the second gate include a gate dielectric and a gate metal, wherein the gate metal includes one or more of tantalum, titanium, nitrogen, or tungsten. In a third example, for any of the first to second examples, the fuse includes the gate metal, wherein the fuse has a minimum thickness of 2 nm. In a fourth example, for any of the first to third examples, the fuse is between a gate dielectric adjacent to the first dielectric and a gate dielectric adjacent to the second dielectric. In a fifth example, for any of the first to fourth examples, the gate dielectric layer is adjacent to sidewalls of the first dielectric and sidewalls of the second dielectric. In a sixth example, for any of the first to fifth examples, the first dielectric includes silicon and one or more of oxygen, nitrogen, and carbon, and the second dielectric includes one or more of Hf, Zr, W, or La, and oxygen. In a seventh example, for any of the first to sixth examples, the distance is between 15 nm and 25 nm. In an eighth example, for any of the first to seventh examples, the distance is a first distance, and the device structure further includes a third gate on a third fin, wherein the third gate is spaced a second distance from the second gate.
A third dielectric is between the first fin and the third fin, and a fourth dielectric is on the third dielectric, and there is no fuse between the fourth dielectric and the third dielectric. In a ninth example, for any of the first to eighth examples, the second distance is between 15 nm and 25 nm. In a tenth example, for any of the first to ninth examples, the distance is a first distance, and the device structure further includes a fourth gate on a fourth fin, the fourth gate spaced a third distance from the second gate, a fifth dielectric between the fourth and second fins, a sixth dielectric within the fifth dielectric, a seventh dielectric above the fifth and sixth dielectrics, and a liner layer between and in contact with the sixth and seventh dielectrics, wherein there is no fuse between the sixth and seventh dielectrics. In an eleventh example, for any of the first to tenth examples, the third distance is at least 50 nm. In a twelfth example, for any of the first to eleventh examples, the liner layer is at least 3 nm thick, and the liner layer includes silicon and at least one of oxygen, nitrogen, or carbon. In a thirteenth example, for any of the first to twelfth examples, the interface between the first dielectric and the second dielectric is non-planar. In a fourteenth example, for any of the first to thirteenth examples, a portion of the fuse is discontinuous between the first gate and the second gate, and the discontinuous portion includes a void. In a fifteenth example, a device structure includes a first gate on a first fin and a second gate on a second fin, wherein the second gate is spaced a distance from the first gate. A fuse spans the distance and is in contact with the first gate and the second gate. A first dielectric is between the first and second fins, wherein the first dielectric is in contact with and below the fuse, and a second dielectric is between the first and second gates, wherein the second dielectric is on the fuse. The device structure also includes: a third gate on a third fin, wherein the third gate is spaced apart from the second gate by a second distance; a third dielectric between the first and third fins; and a fourth dielectric on the third dielectric, without a fuse between the third dielectric and the fourth dielectric. In a sixteenth example, for any of the fifteenth examples, the first gate and the second gate comprise a gate dielectric and a gate metal, wherein the gate metal comprises one or more of tantalum, titanium, nitrogen, or tungsten. In a seventeenth example, for any of the fifteenth to sixteenth examples, the fuse includes the gate metal, wherein the fuse has a minimum thickness of 2 nm. In an eighteenth example, for any of the fifteenth to seventeenth examples, the fuse is between a gate dielectric adjacent to the first dielectric and a gate dielectric adjacent to the second dielectric. In a nineteenth example, a method of fabricating a fuse includes forming first and second fins spaced apart by a distance, and forming a dielectric liner adjacent to the first and second fins. The method also includes depositing a dielectric layer on the dielectric liner between the first and second fins, and recessing the dielectric layer below the first and second fin structures. The method also includes forming a liner layer on the dielectric liner and the dielectric layer, and forming a mask dielectric on the liner layer.
The method also includes removing the liner layer from over the first and second fins, and recessing the dielectric liner below the uppermost surfaces of the first and second fins. The method also includes: forming a dummy gate structure on the first fin, the second fin and the dielectric liner; forming a dielectric barrier adjacent to the dummy gate structure; and removing the dummy gate structure, exposing the dielectric liner and the dielectric layer between the first and second fins. The method also includes: removing the liner layer from a region between the dielectric layer and the mask dielectric to form a void; and forming a gate on the first fin and on the second fin, wherein forming the gate includes depositing metal in the void to form a fuse structure. In a twentieth example, for any of the nineteenth examples, the distance is a first distance, and the method further comprises: forming a third fin adjacent to the second fin, wherein the third fin is separated from the second fin by a second distance; and forming a dielectric layer between the second fin and the third fin. The method also includes forming a liner layer on the dielectric layer and forming a mask dielectric on the liner layer. The method also includes partially removing the liner layer from the region between the dielectric layer and the mask dielectric, and forming a gate on the third fin.
A processor includes a memory configured to store data in a plurality of pages, a TLB, and a TLB controller. The TLB is configured to search, when accessed by an instruction having a virtual address, for address translation information that allows the virtual address to be translated into a physical address of one of the plurality of pages, and to provide the address translation information if the address translation information is found within the TLB. The TLB controller is configured to determine whether a current instruction and a subsequent instruction seek access to a same page within the plurality of pages, and if so, to prevent TLB access by the subsequent instruction, and to utilize the results of the TLB access of a previous instruction for the current instruction.
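The same-page reuse mechanism can be modeled in software. The following C sketch is a minimal illustration of the check performed by the TLB controller, not an implementation taken from this disclosure: it assumes 4 KiB pages and a single-entry reuse register, and the names (tlb_reuse_reg_t, translate, tlb_lookup) are hypothetical. In hardware, the comparison would be made prior to the TLB access point of the subsequent instruction, so that the TLB lookup can be suppressed entirely.

#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12  /* assumed 4 KiB pages; the claims do not fix a page size */

/* Cached result of the previous TLB lookup (hypothetical reuse register). */
typedef struct {
    bool     valid;
    uint64_t vpn;  /* virtual page number of the last translated address */
    uint64_t pfn;  /* physical frame number returned by the TLB */
} tlb_reuse_reg_t;

/* Translates 'vaddr' to a physical address. A real TLB lookup (modeled here
 * by the 'tlb_lookup' callback) is performed only when 'vaddr' falls on a
 * different page than the previous access; otherwise the TLB access is
 * suppressed and the previous translation is reused. */
uint64_t translate(uint64_t vaddr, tlb_reuse_reg_t *last,
                   uint64_t (*tlb_lookup)(uint64_t vpn))
{
    uint64_t vpn = vaddr >> PAGE_SHIFT;
    if (!last->valid || last->vpn != vpn) {
        last->pfn   = tlb_lookup(vpn);  /* normal TLB access */
        last->vpn   = vpn;
        last->valid = true;
    }
    /* Same page as the previous instruction: reuse the cached translation. */
    return (last->pfn << PAGE_SHIFT) | (vaddr & ((1ULL << PAGE_SHIFT) - 1));
}

For two consecutive memory instructions whose virtual addresses share a virtual page number, only the first call invokes tlb_lookup; the second reuses the cached physical frame number, which corresponds to preventing the TLB access by the subsequent instruction as recited in the claims below.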
CLAIMS
1. A processor comprising: a memory configured to store data in a plurality of pages; a translation lookaside buffer (TLB) configured to search, when accessed by an instruction having a virtual address, for address translation information that allows the virtual address to be translated into a physical address of one of the plurality of pages, and to provide the address translation information if the address translation information is found within the TLB; and a TLB controller configured to determine whether a current instruction and a subsequent instruction seek access to a same page within the plurality of pages, and if so, to prevent TLB access by the subsequent instruction. 2. The processor of claim 1, wherein the current instruction includes information about the subsequent instruction, and wherein the TLB controller is further configured to use the information included in the current instruction in order to determine whether the current instruction and the subsequent instruction seek access to a same page within the plurality of pages. 3. The processor of claim 1, wherein the TLB controller is further configured to compare a virtual address generated for the current instruction with a virtual address generated for the subsequent instruction, in order to determine whether the current instruction and the subsequent instruction seek access to a same page within the plurality of pages. 4. The processor of claim 3, wherein the TLB controller is further configured to determine whether the virtual address generated for the current instruction and the virtual address generated for the subsequent instruction translate into physical addresses of a same page within the plurality of pages. 5. The processor of claim 2, wherein the TLB controller is further configured to use for the subsequent instruction address translation information that was already provided by the TLB for the current instruction, if the TLB controller determines that the current instruction and the subsequent instruction seek data access from the same page within the plurality of pages. 6. The processor of claim 1, wherein the current instruction comprises an instruction for an iterative operation. 7. The processor of claim 1, wherein the current instruction and the subsequent instruction comprise consecutive pieces of a single compound instruction. 8. The processor of claim 1, wherein the TLB is configured to store a plurality of TLB entries, each one of the plurality of TLB entries including a virtual address, a physical address of one of the plurality of pages in the memory, and address translation information for translating the virtual address into the physical address, and wherein the TLB is further configured to search within the plurality of TLB entries for the address translation information, when accessed by the instruction containing the virtual address. 9. The processor of claim 1, wherein the TLB controller is further configured to determine, prior to a TLB access point of the subsequent instruction, whether the current instruction and the subsequent instruction seek access to a same page within the plurality of pages. 10. The processor of claim 1, wherein the current instruction and the subsequent instruction comprise consecutive instructions that seek sequential accesses to the memory. 11. The processor of claim 1, wherein the processor comprises a multi-stage pipelined processor. 12.
The processor of claim 11, wherein the multi-stage pipelined processor comprises at least a fetch stage, a decode stage, an execute stage, a memory stage, and a write-back stage. 13. The processor of claim 12, further comprising: at least one fetch unit configured to fetch one or more instructions from an instruction register; at least one decode unit configured to decode the one or more instructions fetched by the fetch unit; and at least one execute unit configured to execute the one or more instructions decoded by the decode unit. 14. A processor comprising: a memory configured to store data in a plurality of pages; a TLB configured to search, when accessed by an instruction having a virtual address, for address translation information within the TLB that allows the virtual address to be translated into a physical address, and to provide the address translation information if the address translation information is found within the TLB; and a TLB controller configured to determine whether a current instruction and a plurality of subsequent instructions seek access to a same page within the plurality of pages, and if so, to prevent TLB access by one or more of the plurality of subsequent instructions. 15. A processor comprising: a memory configured to store data in a plurality of pages; a TLB configured to search, when accessed by an instruction containing a virtual address, for address translation information that allows the virtual address to be translated into a physical address, and to provide the address translation information if the address translation information is found within the TLB; means for determining whether a current instruction and a subsequent instruction seek data access from a same page within the plurality of pages in the memory; and means for preventing TLB access by the subsequent instruction, if the current instruction and the subsequent instruction seek data access from a same page within the plurality of pages in the memory. 16. A method of controlling access to a TLB in a processor, the method comprising: receiving a current instruction and a subsequent instruction; determining that the current instruction and the subsequent instruction seek access to a same page within a plurality of pages in a memory; and preventing access to the TLB by the subsequent instruction. 17. The method of claim 16, wherein the current instruction includes information about the subsequent instruction, and further comprising using the information included in the current instruction to determine that the current instruction and the subsequent instruction seek access to the same page within the plurality of pages. 18. The method of claim 16, wherein the act of determining that the current instruction and the subsequent instruction seek access to a same page in a memory comprises generating a first virtual address for the current instruction and a second virtual address for the subsequent instruction, and comparing the first virtual address with the second virtual address. 19. The method of claim 18, wherein the act of comparing the first virtual address with the second virtual address comprises determining whether the first virtual address and the second virtual address translate into physical addresses that indicate a same page within the plurality of pages. 20.
The method of claim 16, further comprising using, for the subsequent instruction, address translation information that was already provided by the TLB for the current instruction, after determining that the current instruction and the subsequent instruction seek data access from a same page within the plurality of pages. 21. The processor of claim 1, further comprising a memory configured to store instructions in a plurality of pages. 22. The processor of claim 1, wherein the TLB controller is further configured to utilize a result of the TLB access of a previous instruction for the current instruction. 23. The processor of claim 1, wherein the processor comprises a plurality of levels of TLB.
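Claims 6, 10, and 16-20 cover iterative and sequential access patterns that stay on one page. As a hedged illustration only (the helper name, the stride-based pattern, and the 4 KB page size are assumptions, not the claimed design), the number of strided accesses guaranteed to remain on the current page can be computed up front, letting a controller suppress exactly that many TLB lookups:

    #include <stdint.h>

    #define PAGE_SIZE 4096u  /* 4 KB pages, as in the description that follows */

    /* Hypothetical helper: for an iterative access pattern starting at
     * 'vaddr' with a fixed byte 'stride', return how many accesses after
     * the current one are guaranteed to fall on the same page, so that
     * their TLB lookups can be suppressed. */
    static uint32_t same_page_accesses(uint64_t vaddr, uint32_t stride)
    {
        uint64_t bytes_left = PAGE_SIZE - (vaddr & (PAGE_SIZE - 1));
        return stride ? (uint32_t)((bytes_left - 1) / stride) : 0;
    }

For example, an access at page offset 0xFF8 with a 4-byte stride yields one more same-page access (at offset 0xFFC) before a page boundary is crossed and a fresh TLB lookup becomes necessary.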
PREVENTING MULTIPLE TRANSLATION LOOKASIDE BUFFER ACCESSES FOR A SAME PAGE IN MEMORY
FIELD
[0001] The present invention relates to translation look-aside buffers.
BACKGROUND
[0002] In a processor that supports paged virtual memory, data may be specified using virtual (or "logical") addresses that occupy a virtual address space of the processor. The virtual address space may typically be larger than the amount of actual physical memory in the system. The operating system in these processors may manage the physical memory in fixed-size blocks called pages.
[0003] To translate virtual page addresses into physical page addresses, the processor may search page tables stored in the system memory, which may contain the necessary address translation information. Since these searches (or "page table walks") may involve memory accesses, unless the page table data is in a data cache, these searches may be time-consuming.
[0004] The processor may therefore perform address translation using one or more TLBs (translation lookaside buffers). A TLB is an address translation cache, i.e. a small cache that stores recent mappings from virtual addresses to physical addresses. The processor may cache the physical address in the TLB, after performing the page table search and the address translation. A TLB may typically contain the most commonly referenced virtual page addresses, as well as the physical page addresses associated therewith. There may be separate TLBs for instruction addresses (instruction-TLB or I-TLB) and for data addresses (data-TLB or D-TLB).
[0005] A TLB may be accessed to determine the physical address of an instruction, or the physical address of one or more pieces of an instruction. A virtual address may typically have been generated for the instruction, or the piece of an instruction. The TLB may search its entries to see if the address translation information for the virtual address is contained in any of its entries.
[0006] In order to obtain the address translation information for multiple subsequent instructions, or for multiple pieces of an instruction, the TLB may be accessed for each individual instruction, or for each of the multiple pieces of an instruction. This process may entail a considerable power cost, however, since each TLB access requires some consumption of power.
SUMMARY
[0007] In one embodiment of the invention, a processor may include a memory, a TLB, and a TLB controller. The memory may be configured to store data in a plurality of pages. The TLB may be configured to search, when accessed by an instruction having a virtual address, for address translation information that allows the virtual address to be translated into a physical address of one of the plurality of pages, and to provide the address translation information if the address translation information is found within the TLB. The TLB controller may be configured to determine whether a current instruction and a subsequent instruction seek access to a same page within the plurality of pages, and if so, to prevent TLB access by the subsequent instruction. The TLB controller may also be configured to utilize the results of the TLB access of the current instruction for the subsequent instruction.
[0008] In another embodiment of the invention, a processor may include a memory, a TLB, and a TLB controller. The memory may be configured to store data in a plurality of pages.
The TLB may be configured to search, when accessed by an instruction having a virtual address, for address translation information within the TLB that allows the virtual address to be translated into a physical address, and to provide the address translation information if the address translation information is found within the TLB. The TLB controller may be configured to determine whether a current instruction and a plurality of subsequent instructions seek access to a same page within the plurality of pages, and if so, to prevent TLB access by one or more of the plurality of subsequent instructions. The TLB controller may also be configured to utilize the results of the TLB access of the current instruction for one or more of the plurality of subsequent instructions.
[0009] In another embodiment of the invention, a processor may include a memory, a TLB, and a TLB controller. The memory may be configured to store data in a plurality of pages. The TLB may be configured to search, when accessed by an instruction containing a virtual address, for address translation information that allows the virtual address to be translated into a physical address, and to provide the address translation information if the address translation information is found within the TLB. The processor may further include means for determining whether a current instruction and a subsequent instruction seek data access from a same page within the plurality of pages in the memory. The processor may further include means for preventing TLB access by the subsequent instruction, if the current instruction and the subsequent instruction seek data access from a same page within the plurality of pages in the memory. The processor may further include means for utilizing the results of the TLB access of the current instruction for the subsequent instruction.
[0010] In yet another embodiment of the invention, a method of controlling access to a TLB in a processor may include receiving a current instruction and a subsequent instruction. The method may include determining that the current instruction and the subsequent instruction seek access to a same page within a plurality of pages in a memory. The method may include preventing access to the TLB by the subsequent instruction. The method may include utilizing the results of the TLB access of the current instruction for the subsequent instruction.
[0011] In another embodiment of the invention, a processor may include a memory, a TLB, and a TLB controller. The memory may be configured to store data in a plurality of pages. The TLB may be configured to search, when accessed by an instruction having a virtual address, for address translation information within the TLB that allows the virtual address to be translated into a physical address, and to provide the address translation information if the address translation information is found within the TLB. The TLB controller may be configured to determine whether a current compound instruction and any number of subsequent pieces of that compound instruction seek access to a same page within the plurality of pages, and if so, to prevent TLB access by one or more of the plurality of subsequent pieces of the compound instruction. The TLB controller may be configured to utilize the results of the TLB access for the first piece of the compound instruction for the plurality of subsequent pieces of that instruction.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] FIG.
1 schematically illustrates a translation lookaside buffer (TLB), known in the art, that provides address translation information for virtual addresses.
[0013] FIG. 2 is a diagram of a multistage pipelined processor having a TLB controller configured to prevent multiple TLB accesses to a same page in memory.
DETAILED DESCRIPTION
[0014] The detailed description set forth below in connection with the appended drawings is intended to describe various embodiments of the present invention, but is not intended to represent the only embodiments in which the present invention may be practiced. The detailed description includes specific details, in order to permit a thorough understanding of the present invention. It should be appreciated by those skilled in the art, however, that the present invention may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form, in order to more clearly illustrate the concepts of the present invention.
[0015] FIG. 1 schematically illustrates a conventional TLB that operates in a virtual memory system. As known in the art, in virtual memory systems mappings (or translations) may typically be performed between a virtual (or "linear") address space and a physical address space. A virtual address space typically refers to the set of all virtual addresses 22 generated by a processor. A physical address space typically refers to the set of all physical addresses for the data residing in the physical memory 30, i.e. the addresses that are provided on a memory bus to write to or read from a particular location in the physical memory 30.
[0016] In a paged virtual memory system, it may be assumed that the data is composed of fixed-length units 31 commonly referred to as pages. The virtual address space and the physical address space may be divided into blocks of contiguous page addresses. Each virtual page address may provide a virtual page number, and each physical page address may indicate the location within the memory 30 of a particular page 31 of data. A typical page size may be about 4 kilobytes, for example, although different page sizes may also be used. The page table 20 in the physical memory 30 may contain the physical page addresses corresponding to all of the virtual page addresses of the virtual memory system, i.e. may contain the mappings between virtual page addresses and the corresponding physical page addresses for all the virtual page addresses in the virtual address space. Typically, the page table 20 may contain a plurality of page table entries (PTEs) 21, each PTE 21 pointing to a page 31 in the physical memory 30 that corresponds to a particular virtual address.
[0017] Accessing the PTEs 21 stored in the page table 20 in the physical memory 30 may generally require memory bus transactions, which may be costly in terms of processor cycle time and power consumption. The number of memory bus transactions may be reduced by accessing the TLB 10, rather than the physical memory 30. As explained earlier, the TLB 10 is an address translation cache that stores recent mappings between virtual and physical addresses. The TLB 10 typically contains a subset of the virtual-to-physical address mappings that are stored in the page table 20. A TLB 10 may typically contain a plurality of TLB entries 12. Each TLB entry 12 may have a tag field 14 and a data field 16. The tag field 14 may include some of the high-order bits of the virtual page addresses as a tag.
The data field 16 may indicate the physical page address corresponding to the tagged virtual page address.
[0018] When an instruction has a virtual address 22 that needs to be translated into a corresponding physical address, during execution of a program, the TLB 10 may be accessed in order to look up the virtual address 22 among the TLB entries 12 stored in the TLB 10. The virtual address 22 typically includes a virtual page number, which may be used in the TLB 10 to look up the corresponding physical page address.
[0019] If the TLB 10 contains, among its TLB entries, the particular physical page address corresponding to the virtual page number contained in the virtual address 22 presented to the TLB, a TLB "hit" occurs, and the physical page address can be retrieved from the TLB 10. If the TLB 10 does not contain the particular physical page address corresponding to the virtual page number in the virtual address 22 presented to the TLB, a TLB "miss" occurs, and a lookup of the page table 20 in the physical memory 30 may have to be performed. Once the physical page address is determined from the page table 20, the physical page address corresponding to the virtual page address may be loaded into the TLB 10, and the TLB 10 may be accessed once again with the virtual page address 22. Because the desired physical page address has now been loaded in the TLB 10, the TLB access results in a TLB "hit" this time, and the recently loaded physical page address may be generated at an output of the TLB 10.
[0020] A paged virtual memory system, as described above, may be used in a pipelined processor having a multistage pipeline. As known in the art, pipelining can increase the performance of a processor, by arranging the hardware so that more than one operation can be performed concurrently. In this way, the number of operations performed per unit time may be increased, even though the amount of time needed to complete any given operation may remain the same. In a pipelined processor, the sequence of operations within the processor may be divided into multiple segments or stages, each stage carrying out a different part of an instruction or an operation, in parallel. The multiple stages may be viewed as being connected to form a pipe. Typically, each stage in a pipeline may be expected to complete its operation in one clock cycle. An intermediate storage buffer may commonly be used to hold the information that is being passed from one stage to the next. By way of example, a three-stage pipelined processor may include the following stages: instruction fetch, decode, and execute; a four-stage pipeline may include an additional write-back stage.
[0021] Pipelining may typically exploit parallelism among instructions in a sequential instruction stream. As a sequential stream of instructions, or a sequential stream of multiple pieces of a single compound instruction, moves through the stages of a pipeline, the instructions may access the TLB at a TLB access point in the pipeline. Each instruction may access the TLB in turn, in order to look up the virtual-to-physical address translation needed to carry out the memory data accesses requested by the instructions. In order to determine whether the virtual addresses of a sequential instruction stream (or of a sequential stream of multiple pieces of an instruction) are included among the TLB entries in a TLB, a common practice may be to access the TLB for each instruction in the stream, in turn, or for each piece of an instruction, in turn.
This may entail a considerable power penalty, however, since each TLB access consumes power.
[0022] In one embodiment of an address translation system, the crossing of a page boundary by multiple subsequent instructions, or by multiple pieces of an instruction, may be determined prior to a TLB access point in the pipeline. If it is determined that no page boundary has been crossed, the multiple subsequent instructions (or pieces of an instruction) may be prevented from carrying out TLB accesses, thereby saving power and increasing efficiency.
[0023] FIG. 2 is a functional diagram illustrating an address translation system 100 used in a pipelined processor having a multistage pipeline. In overview, the address translation system 100 includes a TLB 120, and a TLB controller 140 that controls the operation of the TLB 120, including the accesses to the TLB 120. In the illustrated embodiment, the TLB 120 may be a data-TLB (DTLB). The TLB controller 140 is configured to prevent subsequent accesses to the TLB 120, if it is determined that subsequent accesses to the TLB 120 seek data from a same page in memory. The TLB controller 140 may be part of a central processing unit (CPU) in the processor. Alternatively, the TLB controller 140 may be located within a core of a processor, and/or near the CPU of the processor.
[0024] The address translation system 100 may be connected to a physical memory 130, which includes a page table 120 that stores the physical page addresses corresponding to the virtual page addresses that may be generated by the processor. A data cache 117 that provides high-speed access to a subset of the data stored in the main memory 110 may also be provided. One or more instruction registers may be provided to store one or more instructions.
[0025] An exemplary sequence 200 of pipeline stages is illustrated in FIG. 2. The sequence 200 of stages illustrated in FIG. 2 includes: a fetch stage 210; a decode stage 220; an execute stage 230; a memory access stage 240; and a write-back stage 250. The exemplary sequence in FIG. 2 is shown for illustrative purposes, and many other alternative sequences, having a smaller or a larger number of pipeline stages, are possible. The hardware may include at least one fetch unit 211 configured to fetch one or more instructions from the instruction memory; at least one decode unit 221 configured to decode the one or more instructions fetched by the fetch unit 211; at least one execute unit 231 configured to execute the one or more instructions decoded by the decode unit 221; at least one memory access unit 241 configured to access the memory 130; and at least one write-back unit 251 configured to write back the data retrieved from the memory 130. The pipeline may include a TLB access point 241, at which one or more instructions may access the TLB 120 to search for address translation information.
[0026] FIG. 2 illustrates a current instruction 112 and a subsequent instruction 114 being received at appropriate stages of the pipeline. The current instruction 112 and the subsequent instruction 114 may be data access instructions. The address translation system 100 may include an address generator (not shown) that generates a virtual address for instruction 112 and a virtual address for instruction 114. Instruction 112 and instruction 114 may be consecutive instructions that seek sequential locations in the TLB 120, or locations which reside within the same page.
Alternatively, instructions 112 and 114 may be multiple pieces of a single compound instruction.
[0027] If it is determined that one or more subsequent instructions, or subsequent pieces of an instruction, seek data access from a same page in the memory 130, TLB access by the subsequent instructions (or pieces of an instruction) may be prevented by the TLB controller 140. As explained earlier, this approach may save power and increase efficiency, compared to carrying out a TLB access to the TLB 120 for each and every instruction in order to determine whether the requisite address translation information can be found in the TLB 120.
[0028] In the illustrated embodiment, the TLB controller 140 is configured to determine whether the current instruction 112 and the subsequent instruction 114 seek access to data from a same page in the memory 130. For example, information regarding subsequent data accesses sought by one or more subsequent instructions (e.g. instruction 114 in FIG. 2) may be obtained by the TLB controller 140 from a current instruction (e.g. instruction 112 in FIG. 2). In one embodiment, the TLB controller 140 may be configured to determine what the subsequent data accesses will be for one or more subsequent instructions following a current instruction, just by examining the current instruction itself, and extracting therefrom information regarding the data accesses sought by the subsequent instructions following the current instruction 112.
[0029] The information regarding subsequent data accesses may be provided by the type of the current instruction 112. By way of example, the instruction type of the current instruction 112 may be one of the following types: "load", "store", or "cache manipulation". Some types of instruction may define whether the CPU needs to go to the data cache 117 or to the main memory 130. In one embodiment, the current instruction 112 may be an instruction for an iterative operation whose data accesses have not yet reached the end of a page in the physical memory 130.
[0030] In one embodiment, the TLB controller 140 may be configured to determine the virtual address of the subsequent instruction 114 (that follows instruction 112), at a time point along the pipeline that is above the TLB access point 119. The TLB controller 140 may be configured to compare the virtual address of instruction 114 with the virtual address of instruction 112, in order to determine whether instruction 114 would seek access to the same page as that sought by the virtual address of instruction 112. In other words, the TLB controller 140 may compare the virtual addresses, in order to determine whether the page in memory to which access is sought by instruction 112 has the same physical page address as the page in memory to which access is sought by instruction 114.
[0031] The TLB controller 140 may be configured to determine the virtual addresses of a plurality of subsequent instructions following instruction 112 at a point in the pipeline above the TLB access point 241. The TLB controller 140 may also be configured to compare the virtual addresses of the plurality of subsequent instructions with the virtual address of instruction 112, in order to determine whether the virtual addresses of the plurality of subsequent instructions would all seek access to the same page (i.e. the page in memory having the same physical page address) as that sought by the virtual address of instruction 112.
[0032] If the TLB controller 140 determines that the current instruction 112 and one or more subsequent instructions seek access to data from a same page in the memory 130, the TLB controller 140 may prevent a TLB access by the one or more subsequent instructions, because the TLB controller 140 has obtained advance knowledge that the next several TLB accesses would all hit the same page in the memory 130. In other words, the TLB controller 140 determines, prior to the TLB access point 241, whether a crossing of a page boundary occurs for the subsequent instructions (or the subsequent pieces of an instruction), and prevents TLB accesses from occurring if no page boundary is crossed. Considerable power may be saved by determining, before the TLB access point 241, that these TLB accesses would all hit the same page in the physical memory 130 and thus merely return the same information, and by preventing such repetitive and redundant accesses.
[0033] The TLB controller 140 may be configured to use, for one or more subsequent instructions following the current instruction 112, the address translation information that was previously provided by the TLB 120 for the current instruction 112, if the TLB controller 140 determines that the subsequent instructions and the current instruction 112 seek data access from the same page in the memory 130.
[0034] In one embodiment, the TLB controller 140 may be configured to determine the relation between the virtual address of instruction 112 and the virtual addresses of each of a plurality of subsequent instructions that follow instruction 112, by recognizing the type of instruction, and how that particular type of instruction works. As one example, the TLB controller 140 may be able to determine, based on the instruction type of a current instruction, that each one of the plurality of subsequent instructions will be sequentially ordered, e.g. will be seeking addresses that advance by a predetermined increment (e.g. 4 bytes).
[0035] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein, but is to be accorded the full scope consistent with the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." All structural and functional equivalents to the elements of the various embodiments described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference, and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C.
§ 112, sixth paragraph, unless the element is expressly recited using the phrase "means for" or, in the case of a method claim, the element is recited using the phrase "step for."
WHAT IS CLAIMED IS:
A method for fabricating fine line and space routing is described. The method includes providing a substrate having a dielectric layer and a seed layer disposed thereon. An anti-reflective coating layer and a photo-resist layer are then formed above the seed layer. The photo-resist layer and the anti-reflective coating layer are patterned to form a patterned photo-resist layer and a patterned anti-reflective coating layer, to expose a first portion of the seed layer, and to leave covered a second portion of the seed layer. A metal layer is then formed on the first portion of the seed layer, between features of the patterned photo-resist layer and the patterned anti-reflective coating layer. The patterned photo-resist layer and the patterned anti-reflective coating layer are subsequently removed. Then, the second portion of the seed layer is removed to provide a series of metal lines above the dielectric layer.
1. A method for manufacturing fine line and space wiring, comprising: forming a seed layer on a substrate having a dielectric layer; forming an anti-reflective coating layer on the seed layer; forming a photoresist layer on the anti-reflective coating layer; patterning the photoresist layer and the anti-reflective coating layer to form a patterned photoresist layer and a patterned anti-reflective coating layer, to expose a first portion of the seed layer and leave a second portion of the seed layer covered; forming a metal layer on the first portion of the seed layer, between features of the patterned photoresist layer and the patterned anti-reflective coating layer; removing the patterned photoresist layer and the patterned anti-reflective coating layer; and removing the second portion of the seed layer, thereby providing a series of metal lines above the dielectric layer. 2. The method of claim 1, wherein forming the anti-reflective coating layer comprises spray coating or roll coating the anti-reflective coating layer on the seed layer. 3. The method of claim 2, wherein the photoresist layer is a liquid photoresist layer, and wherein forming the photoresist layer comprises spray coating or roll coating the photoresist layer on the anti-reflective coating layer. 4. The method of claim 2, wherein forming the anti-reflective coating layer comprises using an organic compound and a dye. 5. The method of claim 1, wherein forming both the seed layer and the metal layer comprises using copper. 6. The method of claim 1, wherein removing the second portion of the seed layer to provide the series of metal lines comprises forming each line of the series of metal lines to have a width of less than 5 microns, with a spacing between lines of less than 5 microns. 7. A method for manufacturing fine line and space wiring, comprising: forming a seed layer on a substrate having a dielectric layer; forming an anti-reflective coating layer on the seed layer; forming a photoresist layer on the anti-reflective coating layer; performing mask lithography and development on the photoresist layer to form a patterned photoresist layer; etching the anti-reflective coating layer to form a patterned anti-reflective coating layer, thereby exposing a first portion of the seed layer and leaving a second portion of the seed layer covered; forming a metal layer on the first portion of the seed layer, between features of the patterned photoresist layer and the patterned anti-reflective coating layer; removing the patterned photoresist layer and the patterned anti-reflective coating layer; and removing the second portion of the seed layer, thereby providing a series of metal lines above the dielectric layer. 8. The method of claim 7, wherein the removal of the patterned photoresist layer and the patterned anti-reflective coating layer is performed in the same processing step. 9. The method of claim 7, wherein forming the anti-reflective coating layer comprises spray coating or roll coating the anti-reflective coating layer on the seed layer. 10. The method of claim 9, wherein the photoresist layer is a liquid photoresist layer, and wherein forming the photoresist layer comprises spray coating or roll coating the photoresist layer on the anti-reflective coating layer. 11. The method of claim 9, wherein forming the anti-reflective coating layer comprises using an organic compound and a dye. 12. The method of claim 7, wherein forming both the seed layer and the metal layer comprises using copper. 13. The method of claim 7, wherein removing the second portion of the seed layer to provide the series of metal lines comprises forming each line of the series of metal lines to have a width of less than 5 microns, with a spacing between lines of less than 5 microns. 14. A method for manufacturing fine line and space wiring, comprising: forming a seed layer on a substrate having a dielectric layer; forming an anti-reflective coating layer on the seed layer; forming a photoresist layer on the anti-reflective coating layer; performing mask lithography on the photoresist layer and the anti-reflective coating layer; developing the photoresist layer and the anti-reflective coating layer in the same processing operation to form a patterned photoresist layer and a patterned anti-reflective coating layer, to expose a first portion of the seed layer and leave a second portion of the seed layer covered; forming a metal layer on the first portion of the seed layer, between features of the patterned photoresist layer and the patterned anti-reflective coating layer; removing the patterned photoresist layer and the patterned anti-reflective coating layer; and removing the second portion of the seed layer, thereby providing a series of metal lines above the dielectric layer. 15. The method of claim 14, wherein the removal of the patterned photoresist layer and the patterned anti-reflective coating layer is performed in the same processing step. 16. The method of claim 14, wherein forming the anti-reflective coating layer comprises spray coating or roll coating the anti-reflective coating layer on the seed layer. 17. The method of claim 16, wherein the photoresist layer is a liquid photoresist layer, and wherein forming the photoresist layer comprises spray coating or roll coating the photoresist layer on the anti-reflective coating layer. 18. The method of claim 16, wherein forming the anti-reflective coating layer comprises using an organic compound and a dye. 19. The method of claim 14, wherein forming both the seed layer and the metal layer comprises using copper. 20. The method of claim 14, wherein removing the second portion of the seed layer to provide the series of metal lines comprises forming each line of the series of metal lines to have a width of less than 5 microns, with a spacing between lines of less than 5 microns.
Method for manufacturing line/space wiring between C4 pads
TECHNICAL FIELD
Embodiments of the present invention relate to the field of semiconductor structures and, more specifically, to methods for manufacturing fine line and space (FLS) wiring suitable for high-density interconnect (HDI) substrates.
BACKGROUND
Flip chip, or controlled collapse chip connection (C4), is a mounting technique used for semiconductor devices or components, such as integrated circuit (IC) chips and MEMS, that uses solder bumps instead of bond wires. Solder bumps are deposited on the C4 pads on the top side of the substrate package. To mount the semiconductor device on the substrate, the device is flipped over, i.e. placed with its active side down on the mounting area. The solder bumps directly connect the semiconductor device to the substrate. C4 solder ball connections have been used for many years to provide flip chip interconnection between semiconductor devices and substrates. Hemispherical C4 solder bumps are formed on the insulating layer and on the exposed surface of each connector pad (also called a bump pad) exposed through one or more through-holes in the insulating layer. Next, the solder bump is heated above its melting point until it reflows and forms a connection with the Cu terminal bump of the die. Many different processing techniques can be used to make the actual C4 solder bumps, including evaporation, screen printing, and electroplating. Fabrication by electroplating requires a series of basic operations, usually including deposition of a metal seed layer, coating and imaging of a photoresist (in accordance with the pattern of the C4 solder bumps), solder electrodeposition, stripping of the photoresist, and sub-etching of the metal seed layer to isolate the C4 bumps. As semiconductor structures become more advanced, the need for higher I/O density has resulted in tighter C4 bump pitch. This, in turn, places strict requirements on the manufacture and size of the lines and spaces.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart describing the operations of a method for manufacturing fine line and space wiring in an organic substrate package, according to an embodiment of the present invention. FIGS. 2A-2H show cross-sectional views depicting operations of a method for manufacturing fine line and space wiring in an organic substrate package, according to an embodiment of the present invention.
DETAILED DESCRIPTION
A method for manufacturing fine line and space wiring in an organic substrate package is described. In the following description, many specific details are set forth, such as integration schemes and material systems, to provide a thorough understanding of embodiments of the present invention. It will be apparent to those skilled in the art that embodiments of the present invention can be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail in order to avoid unnecessarily obscuring embodiments of the present invention. In addition, it should be understood that the various embodiments shown in the figures are merely exemplary representations and are not necessarily drawn to scale. Disclosed herein is a method for manufacturing fine line and space wiring. A substrate having a dielectric layer and a seed layer disposed thereon may be provided. In an embodiment, an anti-reflective coating layer and a photoresist layer are formed on the seed layer.
Thereafter, the photoresist layer and the anti-reflective coating layer are patterned to form a patterned photoresist layer and a patterned anti-reflective coating layer, thereby exposing a first portion of the seed layer and leaving a second portion of the seed layer covered. On the first portion of the seed layer, a metal layer is formed between features of the patterned photoresist layer and the patterned anti-reflective coating layer. Next, the patterned photoresist layer and the patterned anti-reflective coating layer are removed. In one embodiment, the second portion of the seed layer is then removed to provide a series of metal lines above the dielectric layer.
According to an embodiment of the present invention, the anti-reflective coating layer controls the amount of reflection from the seed layer by absorbing reflected light during the photolithography process. By absorbing the reflected light, the exposure of the regions of the photoresist layer subjected to the patterning process can be better controlled. For example, in one embodiment, applying an anti-reflective coating layer between the seed layer and the photoresist layer substantially reduces undesired exposure of regions of the photoresist layer, even if such exposure cannot be eliminated entirely. Accordingly, the line width variation between features in the patterned photoresist layer can be mitigated, compared to the line width variation caused by scattering in the absence of the anti-reflective coating layer. In one embodiment, by introducing an anti-reflective coating layer in an integrated solution for manufacturing fine lines and spaces, the density of such wiring can be increased while the line width of each line is reduced, so that the wiring scales with ever-increasing I/O density. According to an embodiment of the present invention, the use of an anti-reflective coating layer between the seed layer and the photoresist layer reduces the line edge roughness (e.g. reflective notching) common to processes that do not use an anti-reflective coating layer.
According to an embodiment of the present invention, an anti-reflective coating layer is used in a method for manufacturing fine line and space wiring. FIG. 1 is a flowchart 100 describing the operations of a method for manufacturing fine line and space wiring in an organic substrate package according to an embodiment of the present invention. FIGS. 2A-2H show cross-sectional views depicting operations of a method for manufacturing fine line and space wiring in an organic substrate package according to an embodiment of the present invention.
Referring to operation 102 of the flowchart 100 and the corresponding FIG. 2A, a buildup layer 202 having a dielectric layer 204 disposed thereon is provided. According to an embodiment of the present invention, the buildup layer 202 and the dielectric layer 204 form a laminate 200 included in an organic substrate package. For example, in one embodiment, the laminate 200 may include any buildup layer requiring fine line and space wiring. In an embodiment, the dielectric layer 204 has a rough surface 206, formed, for example, by subjecting the dielectric layer 204 to a surface contamination removal treatment, as shown in FIG. 2A.
The dielectric layer 204 may be a layer suitable for isolating the devices and interconnections on the surface of the buildup layer 202 from the fine line/space wiring provided next to or above the dielectric layer 204. In an embodiment, the dielectric layer 204 is composed of an epoxy-based material with silica filler. In one embodiment, the dielectric layer 204 has a rough surface 206 whose average surface roughness is approximately in the range of 0.5-0.6 microns, i.e. the average depth of the V-grooves in the rough surface 206 is approximately in the range of 0.5-0.6 microns. In an embodiment, the dielectric layer 204 is roughened to have a rough surface 206 for better adhesion of the metal layer deposited next; for example, the metal layer may be the electrolessly deposited metal layer described below. In an embodiment, the rough surface 206 of the dielectric layer 204 is formed by laser drilling and a subsequent surface contamination removal treatment. In one embodiment, the surface of the dielectric layer 204 is not roughened.
The buildup layer 202 may be composed of a material suitable for semi-additive process (SAP) manufacturing. In one embodiment, the buildup layer 202 is an epoxy-based dielectric material with silica filler. In another embodiment, the buildup layer 202 includes a copper plane.
Referring again to operation 102 of the flowchart 100 and the corresponding FIG. 2B, a seed layer 208 is provided on the dielectric layer 204. According to an embodiment of the present invention, the seed layer 208 is formed conformally on the dielectric layer 204. For example, the seed layer 208 has the same or a similar surface morphology as the rough surface 206, as shown in FIG. 2B. In an embodiment, the seed layer 208 partially or completely fills any top surface roughness of the dielectric layer 204, thereby providing the seed layer 208 with a sufficiently flat top surface. In an embodiment, the seed layer 208 has a thickness approximately in the range of 0.5-1 micron. In an embodiment, the seed layer 208 has a thickness of about 0.7 microns. The seed layer 208 may be a layer suitable for the subsequent electrolytic plating of a metal film on its surface. In an embodiment, the seed layer 208 is composed of a metal or a metal-containing alloy, such as, but not limited to, copper, silver, nickel, or aluminum. In an embodiment, the seed layer 208 is formed on the dielectric layer 204 by an electroless deposition process. Metal sputtering is an alternative metal deposition process that can be used.
Referring to operation 104 of the flowchart 100 and the corresponding FIG. 2C, an anti-reflective coating layer 210 is formed over the seed layer 208. According to an embodiment of the present invention, the anti-reflective coating layer 210 is formed on the seed layer 208 to absorb, during the subsequent photolithography process, light reflected from the metal surface and the rough surface topography of the seed layer 208, as shown in FIG. 2C. Moreover, in one embodiment, as shown in FIG. 2C, the anti-reflective coating layer 210 fills the surface roughness of the seed layer 208, thereby providing a flat surface on which a photoresist layer is subsequently deposited. In an embodiment, the anti-reflective coating layer 210 has a thickness, measured from the top surface of the seed layer 208, that is approximately in the range of 1-2 microns.
In an embodiment, the anti-reflective coating layer 210 has a thickness of approximately 1.5 microns, measured from the top surface of the seed layer 208. The anti-reflective coating layer 210 may be composed of a material that sufficiently absorbs scattered light generated during the photolithography process. According to an embodiment of the present invention, the anti-reflective coating layer 210 is composed of an organic compound and a dye. In one embodiment, the anti-reflective coating layer 210 is composed of a material such as, but not limited to, the water-soluble polymer Aquazol or an organosiloxane-based film. In an embodiment, the composition of the anti-reflective coating layer 210 is selected to be chemically compatible with the photoresist layer that is subsequently formed on the surface of the anti-reflective coating layer 210.
The anti-reflective coating layer 210 may be formed on the seed layer 208 by a technique suitable for uniformly covering the seed layer 208 and providing a flat surface on which the photoresist layer is next deposited. In one embodiment, the anti-reflective coating layer 210 may be formed by a process such as, but not limited to, spray coating or roll coating. In another embodiment, the anti-reflective coating layer 210 is formed by a spin-on process. In an embodiment, a solvent is used to assist in applying the anti-reflective coating layer 210 onto the surface of the seed layer 208. After the anti-reflective coating layer 210 is formed, the solvent is removed by a subsequent treatment such as, but not limited to, baking the anti-reflective coating layer 210 at a temperature of 150 degrees Celsius.
Referring again to operation 104 of the flowchart 100 and the corresponding FIG. 2D, a photoresist layer 212 is formed over the anti-reflective coating layer 210. The photoresist layer 212 may be composed of a material suitable for undergoing a photolithography process. According to an embodiment of the present invention, the photoresist layer 212 is composed of a dry film resist or a liquid resist. In an embodiment, the photoresist layer 212 is composed of a negative-tone liquid photoresist. In one embodiment, the photoresist layer 212 is composed of a two-component DQN resist including a photosensitive diazoquinone ester (DQ) and a novolac resin (N). The photoresist layer 212 may be formed on the anti-reflective coating layer 210 by a technique suitable for uniformly covering the anti-reflective coating layer 210 and providing a flat top surface to which a photolithography process is applied. In one embodiment, the photoresist layer 212 is a liquid photoresist layer formed by a process such as, but not limited to, spray coating or roll coating onto the surface of the anti-reflective coating layer 210. In another embodiment, the photoresist layer 212 is a dry film photoresist layer formed by a lamination process. In one embodiment, the dry film photoresist layer is based on a cyclized poly(cis-isoprene) resin. In an embodiment, the photoresist layer 212 has a thickness approximately in the range of 10-15 microns. In an embodiment, the photoresist layer 212 is a negative or positive photoresist layer. In an embodiment, the composition of the photoresist layer 212 is selected to be chemically compatible with the anti-reflective coating layer 210.
Referring to operation 106 of the flowchart 100 and the corresponding FIG. 2E, the photoresist layer 212 and the anti-reflective coating layer 210 are patterned to form a patterned photoresist layer 214 and a patterned anti-reflective coating layer 216, respectively, exposing a first portion of the seed layer 208 and leaving a second portion of the seed layer 208 covered. According to an embodiment of the present invention, the photoresist layer 212 and the anti-reflective coating layer 210 are patterned through a mask lithography process to form the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216. In this embodiment, the photoresist layer 212 and the anti-reflective coating layer 210 are exposed to a light source through a mask, and the exposure chemically alters portions of the photoresist layer 212 and the anti-reflective coating layer 210. In an embodiment, the anti-reflective coating layer 210 absorbs light scattered by the seed layer 208 during the lithography exposure operation. In one embodiment, the anti-reflective coating layer 210 is patterned in the same development process used to pattern the photoresist layer 212, to form the patterned anti-reflective coating layer 216. In this embodiment, the photoresist layer 212 is first subjected to a mask lithography process. Next, in the same processing step, the photoresist layer 212 and the anti-reflective coating layer 210 are developed to form the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216, respectively. In an embodiment, the photoresist layer 212 and the anti-reflective coating layer 210 are developed by a solution such as, but not limited to, 1% by weight Na2CO3 or tetramethylammonium hydroxide (TMAH). In another embodiment, the anti-reflective coating layer 210 is patterned, to form the patterned anti-reflective coating layer 216, in a processing step different from the one employed to pattern the photoresist layer 212. In an embodiment, the photoresist layer 212 is first subjected to mask lithography and development to form the patterned photoresist layer 214. Next, using the patterned photoresist layer 214 as a mask, the anti-reflective coating layer 210 is dry or wet etched to form the patterned anti-reflective coating layer 216.
Referring to operation 108 of the flowchart 100 and the corresponding FIG. 2F, a metal layer 218 is formed on the exposed portion of the seed layer 208, between the features of the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216. According to an embodiment of the present invention, the metal layer 218 is formed on the exposed portion of the seed layer 208 through an electrolytic deposition process. The metal layer 218 may be composed of a metal that adheres strongly to the seed layer 208 and has conductivity appropriate for forming conductive lines. In an embodiment, both the seed layer 208 and the metal layer 218 are composed of copper.
Referring to operation 110 of the flowchart 100 and the corresponding FIG. 2G, the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216 are removed. According to an embodiment of the present invention, the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216 are removed by a photoresist stripping solution. In an embodiment, the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216 are removed by an amine-based stripping solution.
In one embodiment, the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216 are removed in the same processing step. In another embodiment, the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216 are removed in separate processing steps.
Referring to operation 112 of the flowchart 100 and the corresponding FIG. 2H, the portion of the seed layer 208 that was previously covered by the patterned photoresist layer 214 and the patterned anti-reflective coating layer 216 is removed. According to an embodiment of the present invention, a series of metal lines 220 is provided above the dielectric layer 204 by removing this portion of the seed layer 208. In one embodiment, the width of each line in the series of lines 220 is less than about 5 microns, and the spacing between the lines in the series of lines 220 is less than about 5 microns. The portion of the seed layer 208 that needs to be removed may be removed by a global dry or wet etching process. In an embodiment, the portion of the seed layer 208 is removed in an H2O2/H2SO4-based etching solution. In one embodiment, the global etching process also reduces the height of each line in the series of lines 220, as shown in FIGS. 2G and 2H.
Thus, a method for manufacturing fine line and space wiring has been disclosed. According to an embodiment of the present invention, the method includes first providing a substrate having a dielectric layer and a seed layer disposed thereon. Thereafter, an anti-reflective coating layer and a photoresist layer are formed on the seed layer. The photoresist layer and the anti-reflective coating layer are patterned to form a patterned photoresist layer and a patterned anti-reflective coating layer, thereby exposing a first portion of the seed layer and leaving a second portion of the seed layer covered. Thereafter, on the first portion of the seed layer, a metal layer is formed between the features of the patterned photoresist layer and the patterned anti-reflective coating layer. The patterned photoresist layer and the patterned anti-reflective coating layer are then removed. Finally, the second portion of the seed layer is removed to provide a series of metal lines above the dielectric layer. In one embodiment, the photoresist layer and the anti-reflective coating layer are patterned in separate processing steps. First, mask photolithography and development processes are performed on the photoresist layer to form a patterned photoresist layer. Next, the anti-reflective coating layer is etched to form a patterned anti-reflective coating layer. In another embodiment, the photoresist layer and the anti-reflective coating layer are patterned in the same processing step. First, the photoresist layer and the anti-reflective coating layer are subjected to mask lithography. Next, both the photoresist layer and the anti-reflective coating layer are developed to form a patterned photoresist layer and a patterned anti-reflective coating layer.
Methods, apparatuses, and systems are provided to process search queries initiated at a mobile computing device based, at least in part, on a state of the mobile computing device as indicated by one or more of travel speed, travel direction, and geographic location of the mobile computing device.
CLAIMS What is claimed is: 1. A method, comprising: obtaining a search query initiated at a mobile device, said search query comprising one or more search terms; obtaining an inertial state of said mobile device indicated by one or more inertial sensor measurements obtained at said mobile device; and processing said search query to obtain one or more search results responsive to said one or more search terms, said one or more search results limited to a geographic scope that is based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements. 2. The method of claim 1, wherein processing said search query comprises: initiating transmission of a search request from said mobile device to a search service via a wireless network, said search request including said one or more search terms of said search query and said inertial state indicated by said one or more inertial sensor measurements; and receiving said one or more search results at said mobile device from said search service via said wireless network responsive to said search request. 3. The method of claim 2, further comprising, processing said one or more inertial sensor measurements at said mobile device to estimate said inertial state. 4. The method of claim 1, wherein obtaining said search query comprises receiving said search query from said mobile device via a wireless network; wherein obtaining said inertial state comprises receiving an indication of said inertial state from said mobile device via said wireless network; and wherein processing said search query further comprises initiating transmission of said search results to said mobile device via said wireless network. 5. The method of claim 4, wherein receiving said indication of said inertial state includes receiving said one or more inertial sensor measurements from said mobile device; and wherein the method further comprises processing said one or more inertial sensor measurements to estimate said inertial state. 6. The method of claim 1, wherein processing said search query comprises: obtaining said one or more search results limited to said geographic scope from a database residing at said mobile device based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements and responsive to said one or more search terms. 7. The method of claim 1, wherein said inertial state of said mobile device includes one or more of a travel speed and/or a travel direction of said mobile device. 8. The method of claim 7, wherein processing said search query further comprises: limiting said geographic scope of said one or more search results based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements. 9. The method of claim 8, wherein limiting said geographic scope of said one or more search results comprises: limiting said one or more search results to within a larger geographic search region if said travel speed indicated by said one or more inertial sensor measurements is a higher value; and limiting said one or more search results to within a smaller geographic search region if said travel speed indicated by said one or more inertial sensor measurements is a lower value. 10. 
The method of claim 8, wherein limiting said geographic scope of said one or more search results comprises: limiting said one or more search results to a first geographic search region having a first geometric shape if said travel speed indicated by said one or more inertial sensor measurements is a higher value; and limiting said one or more search results to a second geographic search region having a second geometric shape different from said first geometric shape if said travel speed indicated by said one or more inertial sensor measurements is a lower value. 11. The method of claim 8, further comprising: obtaining an indication of a predetermined route of travel of said mobile device; and wherein limiting said geographic scope of said one or more search results comprises: limiting said one or more search results to a first geographic search region if said travel speed indicated by said one or more inertial sensor measurements is a higher value, said first geographic search region following at least a portion of said predetermined route of travel; and limiting said one or more search results to a second geographic search region if said travel speed indicated by said one or more inertial sensor measurements is a lower value, said second geographic search region following at least a portion of said predetermined route of travel, and said second geographic search region having a width and/or length relative to said predetermined route of travel that is different from said first geographic search region. 12. The method of claim 1, further comprising: receiving an indication of a change of said inertial state of said mobile device; varying said geographic scope of said one or more search results responsive to said change of said inertial state of said mobile device to obtain one or more updated search results. 13. The method of claim 12, wherein varying said geographic scope of said search results responsive to said change of said inertial state comprises: increasing said geographic scope if said change of said inertial state indicates a speed increase of said mobile device; and decreasing said geographic scope if said change of said inertial state indicates a speed decrease of said mobile device. 14. The method of claim 12, wherein varying said geographic scope of said search results responsive to said change of said inertial state comprises: limiting said one or more search results to within a geographic search region; and varying a shape of said geographic search region responsive to a change of travel speed of said mobile device indicated by said change of said inertial state. 15. The method of claim 1, further comprising: varying said geographic scope of said one or more search results based on said one or more search terms of said search query. 16. The method of claim 15, wherein varying said geographic scope of said one or more search results based on said one or more search terms of said search query further comprises: categorizing said one or more search terms into one or more search categories; and varying said geographic scope of said one or more search results based on said one or more search categories. 17.
The method of claim 1, further comprising: limiting said geographic scope of said one or more search results by: defining a geographic search region based on said inertial state of said mobile device indicated by said one or more inertial sensor measurements; and identifying said one or more search results from within said geographic search region responsive to said one or more search terms of said search query. 18. The method of claim 17, wherein said inertial state of said mobile device includes a travel direction of said mobile device; and wherein said method further comprises: obtaining an indication of geographic location of said mobile device; and orientating said geographic search region relative to said geographic location of said mobile device based on said travel direction of said mobile device. 19. The method of claim 18, further comprising: varying an orientation of said geographic search region relative to said geographic location responsive to a change of said travel direction of said mobile device as indicated by said one or more inertial sensor measurements. 20. The method of claim 18, wherein orientating said geographic search region relative to said geographic location comprises: aligning an axis of symmetry of said geographic search region with said travel direction of said mobile device; and offsetting said geographic search region from said geographic location in a direction indicated by said travel direction of said mobile device. 21. The method of claim 20, further comprising: varying an offset of said geographic search region relative to said geographic location responsive to a travel speed indicated by said inertial state of said mobile device. 22. An article, comprising: a storage medium having stored thereon instructions executable by a computing platform to: obtain an inertial state of a mobile device indicated by one or more inertial sensor measurements; and process a search query initiated at said mobile device to obtain one or more search results responsive to one or more search terms of said search query, said one or more search results limited to a geographic scope that is based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements. 23. The article of claim 22, wherein to process said search query, said instructions are further executable by said computing platform to: initiate transmission of a search request from said mobile device to a search service via a wireless network, said search request including said one or more search terms of said search query and said inertial state indicated by said one or more inertial sensor measurements; and receive said one or more search results at said mobile device from said search service via said wireless network responsive to said search request. 24. The article of claim 22, wherein to obtain said search query, said instructions are further executable by said computing platform to receive said search query from said mobile device via a wireless network; wherein to obtain said inertial state, said instructions are further executable by said computing platform to receive an indication of said inertial state from said mobile device via said wireless network; and wherein to process said search query, said instructions are further executable by said computing platform to initiate transmission of said search results to said mobile device via said wireless network. 25.
The article of claim 22, wherein to process said search query, said instructions are further executable by said computing platform to: obtain said one or more search results limited to said geographic scope from a database residing at said mobile device based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements and responsive to said one or more search terms. 26. The article of claim 22, wherein said inertial state includes travel speed of said mobile device; and wherein said instructions are further executable by said computing platform to: limit said geographic scope of said one or more search results as a function of said travel speed. 27. The article of claim 26, wherein said instructions are further executable by said computing platform to: receive an indication of a change of said travel speed of said mobile device; vary said geographic scope of said search results responsive to said change of said travel speed to obtain one or more updated search results. 28. The article of claim 22, wherein said instructions are further executable by said computing platform to vary a parameter of said geographic search region responsive to said inertial state indicated by said one or more inertial sensor measurements, wherein said parameter includes one or more of a shape of said geographic search region and/or a size of said geographic search region. 29. The article of claim 28, wherein said instructions are further executable by said computing platform to: obtain an indication of a predetermined route of travel of said mobile device; and vary said parameter of said geographic search region relative to said predetermined route of travel. 30. An apparatus, comprising: a mobile computing device, comprising: one or more inertial sensors to obtain inertial sensor measurements at said mobile computing device; an input device to receive a data input; an output device to present a data output; one or more processors to execute instructions; and a storage medium having stored thereon instructions executable by said one or more processors to: obtain one or more inertial sensor measurements via said one or more inertial sensors, said one or more inertial sensor measurements indicating an inertial state of said mobile computing device; obtain a search query comprising one or more search terms via said input device; process said search query to obtain one or more search results responsive to said one or more search terms of said search query, said one or more search results limited to a geographic scope that is based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements; and present said one or more search results via said output device. 31. The apparatus of claim 30, wherein said storage medium further has stored thereon a database including said one or more search results; and wherein to process said search query, said instructions are further executable by said one or more processors to: obtain said one or more search results limited to said geographic scope from said database based, at least in part, on said inertial state indicated by said one or more inertial sensor measurements and responsive to said one or more search terms. 32.
The apparatus of claim 30, wherein to process said search query, said instructions are further executable by said one or more processors to: initiate transmission of a search request from said mobile device to a search service via a wireless network, said search request including said one or more search terms of said search query and said inertial state indicated by said one or more inertial sensor measurements; and receive said one or more search results at said mobile device from said search service via said wireless network responsive to said search request. 33. The apparatus of claim 30, wherein said inertial state of said mobile device includes one or more of a travel speed and/or a travel direction of said mobile device. 34. The apparatus of claim 33, wherein said one or more processors are further programmed with instructions to: vary a parameter of said geographic search region responsive to said travel speed of said mobile device, wherein said parameter includes one or more of a geometric shape of said geographic search region and a size of said geographic search region. 35. The apparatus of claim 34, wherein said one or more processors are further programmed with instructions to: obtain an indication of a predetermined route of travel of said mobile device; and vary said parameter of said geographic search region relative to said predetermined route of travel responsive to said travel speed. 36. An apparatus, comprising: means for obtaining a search query initiated at a mobile device, said search query comprising one or more search terms; means for obtaining an indication of an inertial state of said mobile device from one or more inertial sensor measurements obtained at said mobile device; and means for processing said search query to obtain one or more search results responsive to said one or more search terms, said one or more search results limited to a geographic scope that is based, at least in part, on said indication of said inertial state. 37. The apparatus of claim 36, comprising: means for limiting said geographic scope to a geographic search region based, at least in part, on said indication of said inertial state; wherein said inertial state of said mobile device includes one or more of a travel speed and/or a travel direction of said mobile device.
STATE DRIVEN MOBILE SEARCH BACKGROUND 1. Field [0001] The subject matter disclosed herein relates to electronic devices, and more particularly to methods, apparatuses, and systems for use in and/or with mobile searching of electronic information. 2. Information [0002] Wireless communication systems are fast becoming prevalent technologies in the digital information arena. Satellite and cellular telephone services and other like wireless communication networks already span the globe. Additionally, new wireless systems (e.g., networks) of various types and sizes are added each day to provide connectivity among a plethora of computing platforms, both fixed and mobile. Many of these wireless systems are coupled together through other communication systems and resources to promote even more communication and sharing of information. [0003] One popular and increasingly important wireless technology includes navigation systems and in particular those that are enabled for use with a satellite positioning system (SPS) that includes, for example, the global positioning system (GPS) and/or other like Global Navigation Satellite Systems (GNSSs). SPS enabled devices, for example, may receive wireless SPS signals that are transmitted by transmitters affixed to one or more orbiting satellites to determine geographic location of the device. Similarly, some devices may receive wireless signals from terrestrial based navigation systems to determine geographic location. [0004] Furthermore, information in the form of electronic data continues to be generated or otherwise identified, collected, stored, shared, and analyzed. Databases and other like data repositories are commonplace, as are related communication networks and computing resources that provide access to such information. As one example, the World Wide Web provided by the Internet continues to grow with seemingly continual addition of new information. [0005] To provide access to such information, tools and services have been provided which allow for copious amounts of information to be searched through. For example, service providers may allow for users to search the World Wide Web or other like networks using search engines. Similar tools or services may allow for one or more databases or other like data repositories to be searched. However, with so much information being available, there is a continuing need for relevant information to be identified and presented in an efficient manner. SUMMARY [0006] Implementations relating to mobile searching of electronic information are provided. In one implementation, a method is provided that comprises obtaining a search query initiated at a mobile computing device ("mobile device") in which the search query comprises one or more search terms. The method further comprises obtaining an inertial state of the mobile device indicated by one or more inertial sensor measurements obtained at the mobile device. The method further comprises processing the search query to obtain one or more search results responsive to the one or more search terms in which the one or more search results are limited to a geographic scope that is based, at least in part, on the inertial state indicated by the one or more inertial sensor measurements. It should be understood, however, that this is merely an example implementation, and that claimed subject matter is not limited to this particular implementation.
BRIEF DESCRIPTION OF DRAWINGS [0007] Non-limiting and non-exhaustive aspects are described with reference to the following figures, wherein like reference numerals refer to like parts throughout the various figures unless otherwise specified. [0008] FIG. 1 is a schematic block diagram of an example network environment according to one implementation. [0009] FIG. 2 is a schematic block diagram illustrating aspects of a search service and a mobile device according to one particular implementation. [0010] FIG. 3 is a flow diagram illustrating an example process for processing a search query initiated at a mobile device according to one implementation. [0011] FIG. 4 is a flow diagram illustrating an example process for updating search results responsive to a change of the travel speed of the mobile device according to one implementation. [0012] FIG. 5 is a schematic diagram of an example geographic environment depicting how a geographic search region may be orientated relative to a geographic location and/or travel direction of a mobile device according to one implementation. [0013] FIG. 6 is a schematic diagram of an example geographic environment depicting how a geographic search region may be orientated relative to a geographic travel direction of a mobile device according to one implementation. [0014] FIG. 7 is a schematic diagram of an example geographic environment depicting how a geographic search region may be updated responsive to a change of travel speed of a mobile device and/or a change of search terms according to one implementation. [0015] FIG. 8 is a schematic diagram of another example geographic environment according to another implementation. [0016] FIG. 9 is a schematic diagram of another example geographic environment according to another implementation. [0017] FIG. 10 is a schematic diagram of another example geographic environment according to another implementation. DETAILED DESCRIPTION [0018] Mobile searching of electronic information is disclosed in which a state of a mobile device (e.g., as indicated by travel speed, travel direction, and/or geographic location) is used to identify search results responsive to search queries initiated at the mobile device. By processing search queries based on the state of the mobile device, more relevant search results may be provided to the mobile device user. As one example, search results that are appropriate to the travel capability and/or mobility of the mobile device user (e.g., as indicated by the travel speed of the mobile device) may be identified. For example, mobile device users moving at a higher travel speed, such as by automobile, train, bicycle, etc., may be provided with different search results than mobile device users moving at a lower travel speed, such as when walking. As another example, travel direction of the mobile device may be used to orientate geographic search regions from which search results may be identified so that geographic points of interest may be presented to the mobile device user that reside along the travel route in the direction of travel of the mobile device. Search results may be periodically or continuously updated (e.g., in real-time) responsive to a state change of the mobile device, including changes to travel speed, travel direction, and/or geographic location of the mobile device.
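By way of a non-limiting illustration only, the following sketch shows one way a travel speed estimate might be mapped to the size of a geographic search region, echoing the behavior described in the preceding paragraph. The function name, speed thresholds, and radius values are hypothetical editorial choices and are not part of the disclosed implementation.

```python
def search_radius_m(travel_speed_mps: float) -> float:
    """Map an estimated travel speed (meters/second) to a search radius.

    Hypothetical thresholds: a pedestrian pace yields a small region,
    bicycle-like speeds a medium region, and vehicular speeds a large one.
    """
    if travel_speed_mps < 2.0:      # roughly walking speed
        return 500.0                # small region: only nearby points are reachable
    elif travel_speed_mps < 8.0:    # roughly cycling speed
        return 2_000.0
    else:                           # vehicular travel
        return 10_000.0             # large region: distant results remain reachable

# Example: a device reporting 15 m/s (~54 km/h) gets a 10 km search radius.
print(search_radius_m(15.0))
```

Any real mapping would presumably be tuned per deployment; the sketch captures only the stated principle that higher travel speeds widen the geographic scope.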
[0019] To obtain such state information for the mobile device, inertial sensor measurements obtained at the mobile device may be used in conjunction with navigation information obtained by either SPS or terrestrial based navigation systems to acquire travel speed, travel direction, and/or geographic location of the mobile device. Such inertial sensor measurements may provide more accurate and/or more rapid acquisition of state information than if SPS or terrestrial based navigation systems were used alone. In this way, the relevancy of search results may be improved for mobile device users as a result of improved acquisition of state information. [0020] FIG. 1 is a schematic block diagram of an example network environment 100 according to one implementation. In network environment 100, a mobile device 110 may be operated by a human operator (e.g., a user) to initiate a search query at the mobile device. In at least some implementations, search queries that are initiated at mobile device 110 (e.g., via a user interface) may be processed, at least in part, by mobile device 110 and/or a search service 112 responsive to search terms of the search query. [0021] Network environment 100 may include a navigation system such as a satellite positioning system (SPS) 114 comprising a plurality of satellite based transmitters 116, including example transmitter 118. SPS signals transmitted by one or more of transmitters 116 may be received at SPS receiver 120 of mobile device 110. SPS signals received at mobile device 110 may be used to estimate a geographic location of the mobile device. However, other techniques capable of providing a location estimate or "position fix" may be used. One approach, called Advanced Forward Link Trilateration (AFLT) in CDMA, Enhanced Observed Time Difference (EOTD) in GSM, or Observed Time Difference of Arrival (OTDOA) in WCDMA, measures at mobile device 110 the relative times of arrival of wireless signals transmitted from terrestrial wireless transmitters. Another approach may include associating a MAC address from a WLAN access point within service range of mobile device 110 with a known location of the WLAN access point. It should be understood, however, that these are merely examples of techniques that may be employed at a mobile device for determining a location and claimed subject matter is not limited in this respect. [0022] Mobile device 110 may include a communication interface 122 for communicating wirelessly with terrestrial communication system 124 via a wireless network. For example, communication interface 122 may include one or more wireless transceivers that may communicate wirelessly with a plurality of terrestrial based wireless transceivers 126 of terrestrial communication system 124, including wireless transceiver 128. Terrestrial communication system 124 may comprise an access point (e.g., a cellular base station) in some implementations for directing wireless communications between mobile device 110 and network 130. Additionally, terrestrial communication system 124 may provide a terrestrial based navigation system via one or more of wireless transceivers 126 by applying known triangulation and/or proximity sensing methods. SPS 114 may be omitted in implementations where terrestrial based navigation systems are used. Mobile device 110 may further include other components 124 that will be described in greater detail with reference to FIG. 2.
[0023] Network 130 may comprise one or more wide area networks (e.g., the Internet), local area networks (e.g., an intranet), and personal area networks. It will be appreciated that network 130 may support any suitable communication protocol, including the TCP/IP Internet protocol suite. Further, it will be appreciated that the communication protocol for communicating on network 130 may differ from that of the communication protocol used by mobile device 110 to communicate wirelessly with terrestrial communication system 124. Search service 112 may include a communication interface 132 for communicating with network 130, a search engine 134 for processing search queries, at least in part, to obtain search results, and other components 136 which will be described in greater detail with reference to FIG. 2. In some implementations, search service 112 may be omitted, such as where a search engine for processing search queries resides at the mobile device as depicted, for example, in FIG. 2. [0024] In some implementations, network environment 100 may further include a location server 140. Location server 140 may provide an indication of a geographic location of mobile device 110 to search service 112, which may be used to identify relevant search results for mobile device 110. In some implementations, location server 140 may be operated by a different entity than search service 112. [0025] FIG. 2 is a schematic block diagram illustrating aspects of a search service and a mobile device according to one particular implementation. In FIG. 2, satellite positioning system 114, terrestrial communication system 124, and location server 140 of FIG. 1 have been omitted for clarity. As such, mobile device 110 and search service 112 are depicted in FIG. 2 communicating via network 130 through their respective communication interfaces 122 and 132. [0026] Mobile device 110 may comprise a mobile computing platform such as a mobile telephone, digital media player, personal digital assistant, portable navigation device (e.g., GPS navigation device), a laptop or notebook computer, or mobile workstation, just to name a few examples. Accordingly, other components 124 of mobile device 110 may include one or more processors such as processor 210 to execute instructions, storage media 212 for holding instructions 220 executable by one or more processors including processor 210, sensor subsystem 214 for identifying a state (e.g., an inertial state) of the mobile device, input device 216 for receiving user inputs (e.g., from a mobile device user), and an output device 218 for presenting information (e.g., to a mobile device user). In some implementations, storage media 212 may further have search engine 227 and a database 229 stored thereon to enable processing of search queries to be performed locally at mobile device 110 without necessarily requiring communication with search service 112. Database 229 may comprise geographic points of interest including one or more search results that may be obtained from the database by search engine 227 responsive to a search query initiated at the mobile device. [0027] In some implementations, instructions 220 may include one or more programs, software modules, and/or databases.
For example, instructions 220 may include one or more of a geographic location determination module 222 to determine or estimate a geographic location of mobile device 110, a travel direction determination module 224 to determine or estimate a travel direction and/or a predetermined route of travel of mobile device 110, a travel speed determination module 226 to determine or estimate a travel speed of mobile device 110, search engine 227, and a user interface 228 to facilitate user interaction with mobile device 110. In other examples, one or more of modules 222, 224, and 226 may alternatively reside at search service 112 depending on implementation as will be described in greater detail below. [0028] Sensor subsystem 214 may include one or more inertial sensors such as inertial sensor 230 for obtaining inertial sensor measurements at the mobile device. As a non-limiting example, inertial sensor 230 may comprise an accelerometer, gyroscope, compass, strain gauge, or other suitable inertial measurement device for detecting and/or measuring acceleration of the mobile device. A plurality of inertial sensors of sensor subsystem 214 may be implemented as a multi-axis accelerometer in some examples to obtain acceleration measurements along a plurality of different coordinate axes. It will be appreciated in light of the present disclosure that inertial measurements obtained at the mobile device from one or more inertial sensors may be used to determine or estimate a geographic location and/or a velocity of the mobile device (e.g., by integration), including a travel speed component and a travel direction component. Travel speed and travel direction may be estimated from inertial measurements by travel speed determination module 226 and travel direction determination module 224, respectively. [0029] Output device 218 may comprise one or more of a graphical display, an audio loudspeaker, a haptic feedback device, etc., just to name a few examples. As a non-limiting example, user interface 228 may be presented via a graphical display of output device 218. Input device 216 may comprise one or more of a keyboard, a microphone, a touch-sensitive graphical display, a pointing device such as a mouse, joystick, controller, etc., just to name a few examples. [0030] Search service 112 may comprise one or more computing platforms such as one or more network servers, server systems, or workstations, among other suitable computing platforms. Accordingly, other components 136 of search service 112 depicted in FIG. 1 may include one or more processors such as processor 240, and storage media 242. Storage media 242 may have instructions 244 stored thereon that are executable by one or more processors including processor 240 to perform one or more of the operations described herein with respect to the flow diagrams of FIGS. 3 and 4. As a non-limiting example, instructions 244 may include one or more of search engine 134 as well as one or more of previously described geographic location determination module 222, travel direction determination module 224, and travel speed determination module 226. However, in other implementations, one or more of modules 222, 224, and 226 may alternatively reside at mobile device 110 as previously described. As yet another example, geographic location determination module 222 may reside at a location server as previously described in FIG. 1 with reference to location server 140.
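As a rough illustration of the integration mentioned above in connection with sensor subsystem 214, the sketch below numerically integrates two-dimensional acceleration samples to estimate a travel speed and travel direction. It is a simplification that omits gravity compensation, sensor bias correction, and fusion with SPS fixes, all of which a practical module such as travel speed determination module 226 would presumably require; the function name and conventions are hypothetical.

```python
import math

def estimate_speed_and_heading(accel_samples, dt, v0=(0.0, 0.0)):
    """Integrate (ax, ay) acceleration samples (m/s^2), taken every dt
    seconds, starting from initial velocity v0, and return (speed, heading).

    Heading is reported in degrees clockwise from north (the y-axis),
    matching a compass convention; speed is the velocity magnitude.
    """
    vx, vy = v0
    for ax, ay in accel_samples:
        vx += ax * dt   # simple rectangular (Euler) integration
        vy += ay * dt
    speed = math.hypot(vx, vy)
    heading_deg = math.degrees(math.atan2(vx, vy)) % 360.0
    return speed, heading_deg

# Example: steady 0.5 m/s^2 northward acceleration for 10 s from rest
# yields roughly 5.0 m/s at heading 0 degrees (due north).
samples = [(0.0, 0.5)] * 100
print(estimate_speed_and_heading(samples, dt=0.1))
```

Plain Euler integration drifts quickly in practice, which is one reason the disclosure pairs inertial measurements with SPS or terrestrial navigation information rather than relying on either source alone.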
[0031] Storage media 242 may further have a database 246 stored thereon. Search engine 134 may reference database 246 while processing search queries to determine relevant search results responsive to the search terms of the search query. As a non-limiting example, database 246 may comprise a plurality of index items such as geographic points of interest that search engine 134 may retrieve and identify as search results. In implementations where at least some search results are obtained from a database residing at mobile device 110 (e.g., database 229), a search engine (e.g., search engine 227) may alternatively or additionally reside at mobile device 110 as instructions held in storage media 212. In such implementations, search engine 227 residing at mobile device 110 may obtain one or more search results from database 229 responsive to search terms of a search query initiated at the mobile device. [0032] FIG. 3 is a flow diagram illustrating an example process 300 for processing a search query initiated at a client mobile device (e.g., mobile device 110) according to one implementation. It will be appreciated that the operations depicted by flow diagram 300 may be controlled and/or directed by execution of instructions stored on a storage medium by a processor to result in one or more of the described operations. It will further be appreciated that the various processes, methods, and operations described herein may be performed by one or more computing platforms depending on the implementation. As one example, process 300 may be performed by search service 112 of network environment 100 except where indicated at operations 310 and 330. As another example, process 300 may be performed by mobile device 110 without requiring communication with other network clients such as search service 112. Hence, one or more computing platforms such as mobile device 110 and/or search service 112 may comprise means for performing one or more of the various operations described with reference to process 300. [0033] A search query may be initiated at a mobile device at operation 310. The search query initiated at operation 310 may comprise one or more search terms (e.g., from alphanumeric character strings). For example, in the context of mobile device 110 of network environment 100, a mobile device user may submit a search query via input device 216. As a non-limiting example, a mobile device user may initiate a search query to locate nearby geographic points of interest such as restaurants, gas stations, hotels, etc. The search query may be executable by a search engine (e.g., of search service 112) to obtain one or more search results responsive to the one or more search terms, and transmit the one or more search results to the mobile computing device. [0034] Operation 312 may be performed to obtain a search query initiated at a mobile device. As one example, in the context of network environment 100, mobile device 110 may transmit one or more electrical signals representative of the search query via communication interface 122. Search service 112 may receive the one or more electrical signals representative of the search query at communication interface 132 via a wireless network provided by terrestrial communication system 124 and network 130. Such search queries may not be transmitted by a mobile device to other network clients such as search service 112 for implementations where search queries are processed at the mobile device, such as by search engine 227.
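One plausible encoding of such a transmission, carrying both the search terms and an indication of the device's inertial state so that the search service can limit the geographic scope of the results, is sketched below. The field names and JSON structure are editorial assumptions; the disclosure does not prescribe a wire format.

```python
import json

def build_search_request(terms, travel_speed_mps, heading_deg, lat, lon):
    """Assemble a hypothetical search request that appends the inertial
    state and a geographic location to the user's search terms."""
    return json.dumps({
        "terms": terms,                        # e.g. ["gas", "station"]
        "inertial_state": {
            "travel_speed_mps": travel_speed_mps,
            "travel_direction_deg": heading_deg,
        },
        "location": {"lat": lat, "lon": lon},  # e.g. from SPS or a location server
    })

# Example: a pedestrian-speed query for coffee, heading due east.
print(build_search_request(["coffee"], 1.4, 90.0, 37.4220, -122.0841))
```

As the description notes, the same information might instead be polled by the service or replaced by a tag (such as a URL or IP address) identifying where the state can be retrieved.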
[0035] At 313, the search query may be processed to obtain one or more search results responsive to the one or more search terms by limiting a geographic scope of the one or more search results based, at least in part, on a state (e.g., an inertial state) of the mobile device. The geographic scope to which search results are limited may be defined, at least in part, by a size, a shape, an orientation, and an offset of a geographic search region from which the search results may be obtained as will be described in greater detail with reference to operations 314 - 326. As previously described, search queries may be processed by one or more of the mobile device and/or search service depending on implementation. [0036] Operation 314 may be performed to obtain an inertial state of the mobile device. The inertial state of a mobile device may be estimated based on one or more satellite positioning system signals and/or one or more inertial sensor measurements obtained from inertial sensors on-board a mobile device. [0037] As one example, in the context of network environment 100, where travel speed determination module 226 resides at mobile device 110, travel speed may be estimated on-board the mobile device based, at least in part, on one or more satellite positioning system signals obtained from satellite positioning system 114 and/or one or more inertial sensor measurements obtained from the inertial sensors of sensor subsystem 214. The estimated travel speed may be utilized at the mobile device to process search queries or may be transmitted by mobile device 110 to search service 112 where it may be used by the search service to process search queries initiated at the mobile device. [0038] As another example, where travel speed determination module 226 resides at search service 112, travel speed may be estimated by module 226 at search service 112 based, at least in part, on one or more satellite positioning system signals obtained from satellite positioning system 114 and/or one or more inertial sensor measurements obtained at mobile device 110 and transmitted to the search service. Such estimated travel speed may not be transmitted to search service 112 for implementations where search queries are processed at mobile device 110 without communicating with search service 112. [0039] As yet another example, where a travel speed determination module resides at a remote computing resource such as location server 140, location server 140 may estimate travel speed based, at least in part, on one or more satellite positioning signals and/or inertial sensor measurements obtained from mobile device 110. The location server may then transmit an indication of travel speed of the mobile device to search service 112 where it may be used to process search requests. [0040] It will be appreciated that state information (e.g., inertial state information) of a mobile device including indications of travel speed, travel direction, and/or geographic location of the mobile device may be obtained using a variety of approaches. As one example, state information or a tag indicating a location where the state information may be retrieved by a search service may be appended to a search query. For example, a search request including one or more search terms of the search query and the inertial state indicated by one or more inertial sensor measurements may be transmitted from the mobile device to the search service.
Such a location where state information may be retrieved may include a location server, web service, or other suitable network location in which case the tag may include a universal resource locator (URL), an Internet protocol (IP) address, or other suitable logical network address. As another example, the state information may be polled from the mobile device and/or location server by the search service (e.g., by search engine 134) according to a predetermined polling schedule. The mobile device and/or location server may be adapted to respond to such a request by the search service for state information by transmitting requested state information to the search service. As yet another example, the state information may be transmitted to the search service by the mobile device and/or location server responsive to a change of state information of the mobile device or according to a predetermined reporting schedule. It should be understood, however, that these are merely examples of how a search service may obtain an indication of state information of a mobile device and claimed subject matter is not limited in this respect. [0041] Operation 316 may be performed to define and/or vary one or more geometric parameters of a geographic search region based, at least in part, on a state of a mobile device such as a travel speed, direction of travel, and/or location of the mobile device. The one or more geometric parameters may include a geometric shape of the geographic search region and/or a size of the geographic search region as will be described in greater detail with reference to the process flow of FIG. 4. Such shape and size parameters of the geographic search region at least partially define the geographic scope of search results that may be obtained responsive to a search query. [0042] As a non-limiting example, a geographic search region of a larger size may be defined to limit the one or more search results to within a larger geographic search region (e.g., a larger geographic area) responsive to a mobile device traveling at a first travel speed. A geographic search region of a smaller size may be defined to limit the one or more search results to within a smaller geographic search region (e.g., a smaller geographic area) responsive to the mobile device traveling at a second travel speed that is different from the first travel speed. As another non-limiting example, a geographic search region having a first shape may be defined responsive to a first travel speed of a mobile device and a geographic search region having a second shape different from the first shape may be defined responsive to a second travel speed of the mobile device. Hence, operation 316 may be performed by a search service or by the mobile device depending on implementation to vary a geographic scope of one or more search results based on an inertial state of a mobile device. In this way, search results appropriate to the travel capability and/or mobility of the mobile device user (e.g., as indicated by travel speed of the mobile device) may be considered in limiting the geographic scope of search results. [0043] In addition to or as an alternative to operation 316, operation 318 may be performed to define and/or vary one or more geometric parameters of the geographic search region based, at least in part, on the one or more search terms of the search query. In some implementations, the search engine may be adapted to categorize search terms into two or more search categories.
For example, a particular search term may be representative of a particular category of geographic points of interest. As a non-limiting example, a mobile device user may search among a number of different search categories of geographic points of interest, including gas stations, restaurants, retail stores, hotels, transportation services, etc. by submitting a search query. The search engine may then categorize the one or more search terms into one or more of the various search categories. Hence, operation 318 may be performed by a search service or by the mobile device to vary a geographic scope of one or more search results based on one or more search categories indicated by the search terms. [0044] As one example, operation 318 may be performed to define a geographic search region having a larger size and/or first shape responsive to one or more search terms representing a first search category and define a geographic search region having smaller size and/or second shape different from the first shape responsive to the one or more search terms representing a second search category. As a non-limiting example, search queries for gas stations may encompass a broader geographic search region than search queries for restaurants, for example. [0045] It will be appreciated that any number and/or type of search categories may be used for categorizing search terms beyond the previously described examples. For example, search terms may be categorized based on price of products (e.g., price of fuel) sold at corresponding geographic points of interest (e.g., fueling station) as indicated by the search query. As another example, search terms may be categorized based on a predefined ranking, whereby more important (e.g., higher ranked) search terms may be utilized in association with a broader geographic search region than less important (e.g., lower ranked) search terms. [0046] In at least some implementations, travel speed and search terms may influence different geometric parameters of the geographic search region in different ways. As one example, travel speed may have greater or less influence on defining the size and/or shape of the geographic search region than search terms. As another example, travel speed may have greater or less influence on size of the geographic search region than search terms, whereas search terms may have greater or less influence on geometric shape of the geographic search region than travel speed.
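The category-driven sizing described above might be sketched as follows. The categories, keyword sets, and region dimensions here are invented for illustration, loosely following the gas-station-versus-restaurant example in the text; a deployed search engine would presumably use a much richer taxonomy.

```python
# Hypothetical mapping of search categories to (length_m, width_m) of a
# rectangular geographic search region; gas stations get a broader region
# than restaurants, echoing the example above.
CATEGORY_REGIONS = {
    "fuel": (20_000.0, 4_000.0),
    "food": (5_000.0, 2_000.0),
}
CATEGORY_KEYWORDS = {
    "fuel": {"gas", "fuel", "petrol", "station"},
    "food": {"restaurant", "cafe", "coffee", "diner"},
}

def region_for_terms(terms, default=(10_000.0, 3_000.0)):
    """Categorize search terms and return region dimensions for the first
    matching category, falling back to a default region."""
    lowered = {t.lower() for t in terms}
    for category, keywords in CATEGORY_KEYWORDS.items():
        if lowered & keywords:
            return CATEGORY_REGIONS[category]
    return default

print(region_for_terms(["Gas", "near", "me"]))  # -> (20000.0, 4000.0)
```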
[0047] Operation 320 may be performed to obtain an indication of travel direction of the mobile device and/or a predetermined route of travel of the mobile device. In the context of network environment 100, the travel direction and/or the predetermined route of travel of mobile device 110 may be obtained by search service 112 in a variety of ways depending on where travel direction determination module 224 resides. For example, if travel direction determination module 224 resides at mobile device 110 then a travel direction or predetermined route of travel of the mobile device may be estimated at the mobile device (e.g., from one or more inertial sensor measurements, SPS signals, and/or user defined target destinations) where it may be received by search service 112 via network 130. Alternatively, if travel direction determination module 224 instead resides at search service 112 then the one or more inertial sensor measurements, SPS signals, and/or user defined target destinations may be received at search service 112 from mobile device 110 where the information may be used to estimate the travel direction and/or predetermined route of travel of the mobile device. [0048] Operation 322 may be performed to obtain an indication of geographic location of the mobile device. In some implementations, the indication of geographic location may be obtained by receiving the indication of geographic location from a location server independent of the one or more inertial sensor measurements. For example, SPS or terrestrial based navigation systems may be used to acquire a geographic location of the mobile device. In the context of network environment 100, the geographic location of mobile device 110 may be obtained by search service 112 in a variety of ways depending on where geographic location determination module 222 resides. For example, if geographic location determination module 222 resides at mobile device 110, then a geographic location of the mobile device may be estimated at the mobile device (e.g., from one or more inertial sensor measurements and/or SPS signals) where it may be received by search service 112 via network 130. Alternatively, if geographic location determination module 222 instead resides at search service 112 then one or more inertial sensor measurements and/or SPS signals may be received at search service 112 from mobile device 110 where it may be used to estimate geographic location of the mobile device. If geographic location determination module 222 instead resides at location server 140, then the indication of geographic location of the mobile device may be obtained from the location server by the search service. As previously described, state information including an indication of geographic location of the mobile device may be obtained by the search service in a variety of ways. For example, an indication of geographic location or a tag indicating a network location where the indication of geographic location may be retrieved by the search service may be appended to a search query initiated by the mobile device. As another example, the indication of geographic location may be polled from the mobile device and/or location server by the search service according to a predetermined polling schedule. As yet another example, the indication of geographic location may be transmitted to the search service by the mobile device and/or location server responsive to a change in an estimated geographic location of the mobile device or according to a predetermined reporting schedule. [0049] Operation 324 may be performed to orientate and/or offset the geographic search region (e.g., as defined at operation 316) relative to the geographic location of the mobile device based, at least in part, on a state of the mobile device. Such orientation and offset parameters of the geographic search region may further define the geographic scope of search results that may be obtained responsive to a search query.
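A minimal sketch of operation 324 follows, under the simplifying assumption that the search region is an ellipse whose major axis is aligned with the travel direction and whose centroid is pushed ahead of the device by a speed-dependent offset. The offset formula, parameter values, and class name are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class SearchRegion:
    center_x: float      # meters east of the device's geographic location
    center_y: float      # meters north of the device's geographic location
    major_axis_m: float  # aligned with the travel direction
    minor_axis_m: float
    heading_deg: float   # orientation of the major axis, degrees from north

def orient_region(travel_speed_mps, heading_deg,
                  major_axis_m=4_000.0, minor_axis_m=2_000.0,
                  offset_s=120.0):
    """Place the region's centroid `offset_s` seconds of travel ahead of
    the device and align its axis of symmetry with the travel direction."""
    offset_m = travel_speed_mps * offset_s  # offset grows with travel speed
    rad = math.radians(heading_deg)
    return SearchRegion(center_x=offset_m * math.sin(rad),
                        center_y=offset_m * math.cos(rad),
                        major_axis_m=major_axis_m,
                        minor_axis_m=minor_axis_m,
                        heading_deg=heading_deg)

# Example: heading due east at 10 m/s puts the centroid 1.2 km east of
# the device, so the region is projected primarily ahead of the user.
print(orient_region(10.0, 90.0))
```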
[0050] In some examples, a geographic search region may be orientated relative to a geographic location of a mobile device by aligning an axis of symmetry of the geographic search region with a travel direction of the mobile device and/or by offsetting the geographic search region from the geographic location of the mobile device in a direction indicated by the travel direction of the mobile device. Hence, an orientation of a geographic search region may be varied relative to a geographic location of a mobile device responsive to a change of the travel direction of the mobile device. In this way, a search region may be projected primarily in front of the mobile device, in at least some examples, as indicated by the direction of travel. [0051] Furthermore, in some examples, the magnitude of the offset may be varied responsive to a travel speed of the mobile device. As a non-limiting example, this offset may be increased as travel speed increases and may be decreased as travel speed decreases. Hence, operation 324 may be performed to vary an offset of a geographic search region relative to a geographic location of a mobile device responsive to the travel speed of the mobile device. Example search regions are described in greater detail with reference to FIGS. 5 - 10. [0052] Operation 326 may be performed to identify one or more search results from within the geographic search region based, at least in part, on the one or more search terms. For example, in the context of network environment 100, search engine 134 may be adapted to identify the one or more search results by referencing geographic points of interest stored in database 246. Alternatively, where processing of the search query is performed at the mobile device without communicating with a search service, search engine 227 may be adapted to identify the one or more search results by referencing geographic points of interest stored in database 229. These geographic points of interest may be associated with geographic coordinates which may be compared to the geographic search region as defined at operation 316 for the orientation identified at operation 324. The search engine, in identifying the search results, may demonstrate a preference for search results associated with geographic points of interest that are located within the geographic search region. As one example, the search engine may provide the search results as a hierarchical ranking of geographic points of interest, whereby the higher ranked geographic points of interest are located within the geographic search region and the lower ranked geographic points of interest are located external to the geographic search region. As another example, the search engine may exclude geographic points of interest from the search results that are located external to the geographic search region such that the search results include only those geographic points of interest that are located within the geographic search region.
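The ranking preference just described (results inside the region ranked ahead of, or excluding, those outside) might be realized along the following lines. The circular region test and the point-of-interest tuples are simplifying assumptions made for illustration.

```python
import math

def rank_results(points, center, radius_m, exclude_outside=False):
    """Rank (name, x, y) points of interest: those inside the circular
    search region come first, ordered by distance; points outside are
    either appended afterward (subordinated) or dropped entirely."""
    cx, cy = center

    def dist(p):
        return math.hypot(p[1] - cx, p[2] - cy)

    inside = sorted((p for p in points if dist(p) <= radius_m), key=dist)
    if exclude_outside:
        return inside
    outside = sorted((p for p in points if dist(p) > radius_m), key=dist)
    return inside + outside   # in-region results are ranked higher

pois = [("diner", 300.0, 400.0), ("hotel", 5_000.0, 0.0), ("cafe", 100.0, 0.0)]
print(rank_results(pois, center=(0.0, 0.0), radius_m=1_000.0))
```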
[0053] Operation 328 may be performed to initiate transmission of the search results to the mobile device for implementations where the search query is processed, at least in part, by a search service. The mobile device may in turn receive the search results from the search service that are responsive to a search request initiated by the mobile device. For example, in the context of network environment 100, one or more electrical signals representative of the search results may be transmitted by search service 112 to mobile device 110 via network 130 and the wireless network provided by terrestrial communication system 124. It will be appreciated that instructions held in storage media of the search service may be executable by one or more processors of the search service to initiate the transmission of the search results to the mobile device. These one or more electrical signals may be interpreted by mobile device 110 to present the search results at operation 330 (e.g., via output device 218). As a non-limiting example, the one or more search results may be presented on a graphical display of the mobile device as a hierarchical ordered list. It will be appreciated that the one or more search results that are transmitted to the mobile device may be associated with a rank indicator that may be interpreted by the mobile device to present the search results in the appropriate order in the hierarchical ordered list. As another example, the one or more search results may be presented in conjunction with a graphical depiction of a map of the geographic region surrounding the geographic location of the mobile device as depicted in FIGS. 5 - 10, for example. Geographic points of interest associated with the search results may be presented on the map as icons at their respective geographic locations. [0054] It will be appreciated that the search results may be periodically or continuously updated (e.g., in real-time) through application of process 300 as one or more of the geographic location, travel direction, predetermined route of travel, travel speed, and search terms change over time. As a non-limiting example, a process for obtaining updated search results will be described in greater detail with reference to FIG. 4. [0055] FIG. 4 is a flow diagram illustrating an example process 400 for updating search results responsive to a change of travel speed of a mobile device according to one implementation. Process 400 may be performed to increase the geographic scope of the search results if a change of the travel speed of the mobile device indicates a speed increase, and decrease the geographic scope of the search results if a change of the travel speed indicates a speed decrease. However, in other examples, the geographic scope of the search results may be reduced in response to a speed increase and increased in response to a speed decrease. It will be appreciated that the operations depicted by the flow diagram of FIG. 4 may be controlled and/or directed by execution of instructions stored on a storage medium by a processor to result in one or more of the described operations. In the context of network environment 100, process 400 may be controlled and/or directed, at least in part, by search engine 134 of search service 112. However, in other implementations, process 400 may be performed by a search engine residing at mobile device 110 (e.g., search engine 227) without communicating with a search service such as search service 112. [0056] Operation 410 may be performed to obtain one or more electrical signals indicating a change of travel speed of a mobile device.
As one example, in the context of network environment 100, travel speed determination module 226 may be adapted to receive SPS signals and/or inertial sensor measurements from inertial sensors of mobile device 110 and determine an updated travel speed responsive to an indication of a change of travel speed of the mobile device. [0057] Operation 412 may be performed to update the geographic search region (e.g., as previously defined at operation 316) by varying one or more geometric parameters of the geographic search region responsive to a change of the travel speed. For example, if the change of the travel speed indicates a speed increase at operation 414 then operation 416 may be performed to increase the size of the geographic search region responsive to the speed increase. Hence, in at least some implementations, the size of the geographic search region may be increased as an increasing function of the travel speed of the mobile device, at least within some ranges. Alternatively, if the change of the travel speed indicates a speed decrease at operation 418 then operation 420 may be performed to decrease the size of the geographic search region responsive to the speed decrease. Hence, in at least some implementations, the size of the geographic search region may be decreased as the travel speed of the mobile device decreases, at least within some ranges. [0058] Furthermore, in at least some implementations, a size of the geographic search region may be varied by increasing or decreasing one or more length dimensions of the geographic search region to increase or decrease an area of the geographic search region. Alternatively or additionally, the shape of the geographic search region may be varied responsive to an increase or decrease of the travel speed of the mobile device. As a non-limiting example, the shape of the geographic search region may be changed from a circle to an oval or from a square to a rectangle responsive to an indication of a speed increase or a speed decrease. However, it will be appreciated that any suitable shape may be used for the geographic search region. For example, FIGS. 9 and 10 depict how a geographic search region may follow a contour of a predetermined route of travel of a mobile device. [0059] Operation 422 may be performed to identify one or more updated search results from within the updated geographic search region based, at least in part, on the one or more search terms of the search query. For example, if the size of the geographic search region has been increased at operation 416, then the search engine may select one or more updated search results from within the larger updated geographic search region. Alternatively, if the size of the geographic search region has been decreased at operation 420, then the search engine may select one or more updated search results from within the smaller updated geographic search region. In this way, the updated search results may demonstrate a preference for geographic points of interest that are within the updated geographic search region. [0060] Operation 424 may be performed to transmit one or more electrical signals representing the one or more updated search results to the mobile device. The updated search results may be presented to a mobile device user at mobile device 110 via output device 218. For example, user interface 228 may be updated to reflect the updated search results. In this way, the mobile device user may be provided with updated search results (e.g., periodically or in real-time) as a travel speed of the mobile device changes.
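A sketch of the threshold-based updating suggested for process 400 appears below: the region is recomputed only when the travel speed crosses from one speed range into another (as elaborated in the paragraph that follows), so that small fluctuations do not trigger updates. The ranges and radii are hypothetical values chosen for illustration.

```python
# Hypothetical travel speed ranges (m/s) and their region radii (m).
SPEED_RANGES = [(0.0, 2.0, 500.0),
                (2.0, 8.0, 2_000.0),
                (8.0, float("inf"), 10_000.0)]

def speed_range_index(speed_mps):
    """Return the index of the speed range containing speed_mps."""
    for i, (lo, hi, _) in enumerate(SPEED_RANGES):
        if lo <= speed_mps < hi:
            return i
    raise ValueError("speed must be non-negative")

def maybe_update_radius(prev_speed_mps, new_speed_mps, current_radius_m):
    """Grow the region on a speed increase and shrink it on a decrease,
    but only when the speed crosses into a different range."""
    old_i = speed_range_index(prev_speed_mps)
    new_i = speed_range_index(new_speed_mps)
    if new_i == old_i:
        return current_radius_m        # small fluctuation: keep the region
    return SPEED_RANGES[new_i][2]      # range crossed: adopt the new radius

print(maybe_update_radius(1.5, 1.9, 500.0))   # same range -> 500.0
print(maybe_update_radius(1.5, 12.0, 500.0))  # crossed ranges -> 10000.0
```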
In this way, the mobile device user may be provided with updated search results (e.g., periodically or in real-time) as a travel speed of the mobile device changes. [0061] It will be appreciated that while FIG. 4 is described in the context of a change in travel speed, such change of travel speed may be evaluated with respect to one or more travel speed thresholds that define one or more travel speed ranges. For example, a search service may utilize two, three, four, or any suitable number of travel speed ranges, each having a corresponding set of geographic search region parameters. Accordingly, a change in travel speed of a mobile device from one travel speed range to another travel speed range may trigger the search service to update the geographic search region applied to the search query. [0062] FIG. 5 is a schematic diagram of an example geographic environment 500 depicting how a geographic search region 514 may be orientated relative to a geographic location and/or travel direction of a mobile device 510 according to one implementation. For example, FIG. 5 depicts mobile device 510 traveling through an example street environment along travel direction 512. As previously described with reference to operation 324 of FIG. 3, the geographic search region may be orientated relative to the geographic location of the mobile device by aligning an axis of symmetry 516 of the geographic search region with the travel direction 512 of the mobile device. Additionally or alternatively, a centroid 518 of the geographic search region may be offset from the geographic location in a direction indicated by the travel direction 512 of the mobile device, as depicted in FIG. 5 by reference numeral 520. Such orientation and offset of the geographic search region relative to a geographic location of a mobile device may be varied responsive to travel speed of the mobile device and/or search terms of a search query initiated at the mobile device, as previously described with reference to FIG. 3. As one example, offset 520 may be increased or decreased responsive to an increase or decrease in travel speed of the mobile device. As another example, offset 520 may be set to a first value responsive to a search query for restaurants, whereas offset 520 may be set to a second value different from the first value responsive to a search query for hotels. [0063] FIG. 6 is a schematic diagram of an example geographic environment 600 depicting how a geographic search region may be orientated relative to a geographic travel direction of a mobile device according to one implementation. In FIG. 6, a first instance of a mobile device is indicated at 610 traveling in a travel direction indicated at 612, whereby a geographic search region 614 is provided. A second instance of the mobile device provided at a later time is indicated at 616 travelling in a different travel direction 618, whereby an updated geographic search region 620 is provided. [0064] FIG. 7 is a schematic diagram of an example geographic environment 700 depicting how a geographic search region may be updated responsive to a change of travel speed of a mobile device and/or responsive to a change of one or more search terms of a search query according to one implementation. Mobile device 710 is shown traveling along travel direction 712, whereby a first geographic search region 714 is provided responsive to mobile device 710 traveling at a first travel speed or for a first set of search terms.
FIG. 7 further depicts a second geographic search region 722 provided responsive to mobile device 710 travelling at a second travel speed different from the first travel speed or for a second set of search terms different from the first set of search terms. For geographic search region 714, geographic points of interest 716 may be identified as the search results to be transmitted to the mobile device by the search service, because geographic points of interest 716 are within geographic search region 714. By contrast, geographic points of interest 718 and 720 may be excluded from the search results or may be subordinated to geographic points of interest 716 in a hierarchical order when mobile device 710 is travelling at the first travel speed or where the search terms are associated with a first search category. For geographic search region 722, geographic points of interest 716 and 718 may be identified as search results to be transmitted to a mobile device by a search service, because geographic points of interest 716 and 718 are within geographic search region 722. By contrast, geographic point of interest 720 may be excluded from the search results or may be subordinated to geographic points of interest 716 and 718 in a hierarchical order if mobile device 710 is travelling at the second travel speed or where the search terms are associated with a second search category. [0065] FIG. 8 is a schematic diagram of a geographic environment 800 depicting how a geographic search region may be updated responsive to a change of travel speed of a mobile device 810 and/or a change of search terms according to another implementation. In FIG. 8, geographic search region 812 has a circular shape and geographic search region 814 has an ovular shape. Thus, FIG. 8 depicts how a shape of the geographic search region may be varied, for example, in response to one or more of a change to the travel speed of the mobile device and/or a change to the search terms comprising the search query. FIG. 8 also depicts how an offset of a geographic search region may be varied relative to a geographic location of the mobile device. For example, geographic search region 814 is depicted with a greater offset (e.g., relative to a centroid of the geographic search region) as compared to geographic search region 812. For example, the greater offset applied to geographic search region 814 may be responsive to the mobile device traveling at a higher travel speed, whereas the lesser offset applied to geographic search region 812 may be responsive to the mobile device traveling at a lower travel speed. [0066] FIGS. 7 and 8 further depict how a geographic search region may have a variety of different shapes. It will be appreciated that the shape of a geographic search region is not limited to regular geometric shapes; a geographic search region may instead have an irregular shape. Furthermore, in some implementations, the shape of the geographic search region may be contoured to physical characteristics of the geographic environment, such as streets, buildings, bodies of water, land formations, etc. [0067] FIG. 9 is a schematic diagram of another example geographic environment 900 depicting how a geographic search region may be updated responsive to a change of travel speed of a mobile device and/or a change of search terms according to another implementation. In FIG. 9, mobile device 910 is travelling along a route of travel 914 as indicated by velocity vector 912. 
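Before turning to the route-following regions of FIGS. 9 and 10, the offset and orientation behavior described for FIGS. 5 through 8 can be sketched in Python. The heading convention (degrees clockwise from north), the fixed look-ahead time, and the equirectangular approximation are assumptions made for illustration:

    import math

    def offset_centroid(lat: float, lon: float, heading_deg: float,
                        speed_mps: float, look_ahead_s: float = 60.0):
        # Place the region centroid ahead of the device along its travel
        # direction (offset 520 of FIG. 5); the offset grows with speed.
        offset_m = speed_mps * look_ahead_s
        # Equirectangular approximation; adequate for offsets of a few km.
        dlat = offset_m * math.cos(math.radians(heading_deg)) / 111_320.0
        dlon = offset_m * math.sin(math.radians(heading_deg)) / (
            111_320.0 * math.cos(math.radians(lat)))
        return lat + dlat, lon + dlon

Aligning an axis of symmetry of the region with the travel direction then amounts to rotating the region's boundary by heading_deg about the centroid returned by this function.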
As one example, route of travel 914 may be a predetermined route of travel based, at least in part, on a user-defined target destination. A first geographic search region 916 follows the contours of route of travel 914 by projecting outward from route of travel 914 by a first width (e.g., lateral distance from the route of travel). A second geographic search region 918 also follows the contours of route of travel 914, but projects outward from route of travel 914 by a second width (e.g., lateral distance from the route of travel) that is greater than the first width of first geographic search region 916. [0068] Hence, as shown in FIG. 9, the second geographic search region is wider or broader than the first geographic search region relative to route of travel 914. Geographic point of interest 920 within the first geographic search region may be included in search results returned in response to a search query, whereas geographic points of interest 922 and 924 that are located outside of the first geographic search region may be excluded from the search results. By contrast, each of geographic points of interest 920 and 922 is within the second geographic search region and may be included in search results returned in response to a search query, whereas geographic point of interest 924 may be excluded from the search results. [0069] As previously described, search results may be limited to different geographic search regions based on travel speed of the mobile device and/or search terms of a search query initiated by the mobile device. For example, a width or breadth of a geographic search region relative to a route of travel of a mobile device may be increased or decreased responsive to travel speed of the mobile device and/or search terms of a search query initiated at the mobile device. Hence, for example, search results may be limited to geographic search region 916 responsive to a first travel speed of a mobile device, whereas search results may be limited to geographic search region 918 responsive to a second travel speed of the mobile device that is greater or lesser in magnitude than the first travel speed. As another example, search results may be limited to geographic search region 916 responsive to a search query for restaurants, whereas search results may be limited to geographic search region 918 responsive to a search query for hotels. [0070] FIG. 10 is a schematic diagram of another example geographic environment 1000 depicting how a geographic search region may be updated responsive to a change of travel speed of a mobile device and/or a change of search terms according to another implementation. In FIG. 10, mobile device 1010 is travelling along a route of travel 1014 as indicated by velocity vector 1012. For example, route of travel 1014 may be a predetermined route of travel. Geographic search region 1016 is depicted following the contours of route of travel 1014. In contrast to FIG. 9, FIG. 10 shows how a length dimension rather than a width dimension of a geographic search region may be varied responsive to travel speed of a mobile device and/or search terms of a search query initiated at the mobile device. [0071] For example, a length of geographic search region 1016 along route of travel 1014 may be varied between lengths 1018, 1020, and 1022 to include or exclude different geographic points of interest from the search results that are returned to the mobile device. 
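A route-following region of the kind shown in FIGS. 9 and 10 can be modeled as a corridor around a polyline. The following Python sketch tests whether a point of interest falls within a given lateral width of the route and within a given length along it; the local planar coordinates (in meters) and the helper names are assumptions for illustration:

    import math

    def _segment_distance(px, py, ax, ay, bx, by):
        # Distance from point (px, py) to segment (a, b), and the distance
        # along the segment to the closest point.
        dx, dy = bx - ax, by - ay
        seg_len2 = dx * dx + dy * dy
        t = 0.0 if seg_len2 == 0 else max(
            0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
        cx, cy = ax + t * dx, ay + t * dy
        return math.hypot(px - cx, py - cy), t * math.sqrt(seg_len2)

    def in_corridor(poi, route, width_m, length_m):
        # True if poi lies within width_m laterally of the route polyline
        # and within length_m of arc length from the start of the route.
        travelled = 0.0
        for (ax, ay), (bx, by) in zip(route, route[1:]):
            lateral, along = _segment_distance(poi[0], poi[1], ax, ay, bx, by)
            if lateral <= width_m and travelled + along <= length_m:
                return True
            travelled += math.hypot(bx - ax, by - ay)
        return False

Varying width_m then corresponds to the transition from region 916 to region 918 in FIG. 9, while varying length_m corresponds to lengths 1018 through 1022 in FIG. 10.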
For example, geographic point of interest 1024 may be included in the search results if geographic search region 1016 is limited to length 1018, whereas geographic points of interest 1026, 1028, and 1030 may be excluded from the search results. As another example, geographic points of interest 1024, 1026, and 1028 may be included in the search results if geographic search region 1016 is limited to length 1022. [0072] It will be appreciated in light of FIGS. 9 and 10 that in some examples, both width and length parameters of a geographic search region that follows a route of travel of a mobile device may be varied responsive to a state of the mobile device and/or search terms comprising a search query. As a non-limiting example, a length of the geographic search region along a predetermined route of travel may be increased and a width of the geographic search region extending outward from the predetermined route of travel may be reduced responsive to an increase of travel speed. In response to a decrease in travel speed, the length of the geographic search region along the predetermined route of travel may be reduced and the width of the geographic search region may be increased, for example. [0073] The mobile devices described herein may be enabled for use with various wireless communication networks such as a wireless wide area network (WWAN), a wireless local area network (WLAN), a wireless personal area network (WPAN), and so on. The terms "network" and "system" may be used interchangeably herein. A WWAN may be a Code Division Multiple Access (CDMA) network, a Time Division Multiple Access (TDMA) network, a Frequency Division Multiple Access (FDMA) network, an Orthogonal Frequency Division Multiple Access (OFDMA) network, a Single-Carrier Frequency Division Multiple Access (SC-FDMA) network, and so on. A CDMA network may implement one or more radio access technologies (RATs) such as cdma2000 and Wideband-CDMA (W-CDMA), to name just a few radio technologies. Here, cdma2000 may include technologies implemented according to IS-95, IS-2000, and IS-856 standards. A TDMA network may implement Global System for Mobile Communications (GSM), Digital Advanced Mobile Phone System (D-AMPS), or some other RAT. GSM and W-CDMA are described in documents from a consortium named "3rd Generation Partnership Project" (3GPP). Cdma2000 is described in documents from a consortium named "3rd Generation Partnership Project 2" (3GPP2). 3GPP and 3GPP2 documents are publicly available. A WLAN may include an IEEE 802.11x network, and a WPAN may include a Bluetooth network or an IEEE 802.15x network, for example. [0074] Techniques described herein may be used with an "SPS" that includes any one of several global navigation satellite systems (GNSS) and/or combinations of GNSS. Furthermore, such techniques may be used with positioning systems that utilize pseudolites or a combination of SVs and pseudolites. Pseudolites may include ground-based transmitters that broadcast a PN code or other ranging code (e.g., similar to a GPS or CDMA cellular signal) modulated on an L-band (or other frequency) carrier signal, which may be synchronized with system time (e.g., an SPS time). Such a transmitter may be assigned a unique PN code so as to permit identification by a remote receiver. Pseudolites may be useful, for example, to augment an SPS in situations where SPS signals from an orbiting SV might be unavailable, such as in tunnels, mines, buildings, urban canyons or other enclosed areas. 
Another implementation of pseudolites is known as radio-beacons. The term "SV", as used herein, is intended to include pseudolites, equivalents of pseudolites, and possibly others. The terms "SPS signals" and/or "SV signals", as used herein, are intended to include SPS-like signals from pseudolites or equivalents of pseudolites. [0075] The methodologies described herein may be implemented in different ways and with different configurations depending upon the particular application. For example, such methodologies may be implemented in hardware, firmware, and/or combinations thereof, along with software. In a hardware implementation, for example, a processing unit may be implemented within one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other devices designed to perform the functions described herein, and/or combinations thereof. [0076] The herein described storage media may comprise primary, secondary, and/or tertiary storage media. Primary storage media may include memory such as random access memory and/or read-only memory, for example. Secondary storage media may include mass storage such as a magnetic or solid state hard drive. Tertiary storage media may include removable storage media such as a magnetic or optical disk, a magnetic tape, a solid state storage device, etc. In certain implementations, the storage media or portions thereof may be operatively receptive of, or otherwise configurable to couple to, other components of a computing platform, such as a processor. In at least some implementations, one or more portions of the herein described storage media may store signals representative of data and/or information as expressed by a particular state of the storage media. For example, an electronic signal representative of data and/or information may be "stored" in a portion of the storage media (e.g., memory) by affecting or changing the state of such portions of the storage media to represent data and/or information as binary information (e.g., ones and zeros). As such, in a particular implementation, such a change of state of the portion of the storage media to store a signal representative of data and/or information constitutes a transformation of storage media to a different state or thing. [0077] Some portions of the preceding detailed description have been presented in terms of algorithms or symbolic representations of operations on binary digital electronic signals stored within a memory of a specific apparatus or special purpose computing device or platform. In the context of this particular specification, the term specific apparatus or the like includes a general purpose computer once it is programmed to perform particular functions pursuant to instructions from program software. Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations or similar signal processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. 
Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated as electronic signals representing information. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals, information, or the like. It should be understood, however, that all of these or similar terms are to be associated with appropriate physical quantities and are merely convenient labels. [0078] Unless specifically stated otherwise, as apparent from the above description, it is appreciated that throughout this specification discussions utilizing terms such as "processing," "computing," "calculating," "identifying," "determining," "establishing," "obtaining," and/or the like refer to actions or processes of a specific apparatus, such as a special purpose computer or a similar special purpose electronic computing device. In the context of this specification, therefore, a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals, typically represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the special purpose computer or similar special purpose electronic computing device. [0079] Reference throughout this specification to "one example", "an example", "certain examples", or "exemplary implementation" means that a particular feature, structure, or characteristic described in connection with the feature and/or example may be included in at least one feature and/or example of claimed subject matter. Thus, the appearances of the phrase "in one example", "an example", "in certain examples" or "in certain implementations" or other like phrases in various places throughout this specification are not necessarily all referring to the same feature, example, and/or limitation. Furthermore, the particular features, structures, or characteristics may be combined in one or more examples and/or features. In the preceding detailed description, numerous specific details have been set forth to provide a thorough understanding of claimed subject matter. [0080] While there has been illustrated and described what are presently considered to be example features, it will be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from claimed subject matter. Additionally, many modifications may be made to adapt a particular situation to the teachings of claimed subject matter without departing from the central concept described herein. Therefore, it is intended that claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter may also include all aspects falling within the scope of appended claims, and equivalents thereof.
Various embodiments of methods and systems for implementing a microprocessor (100) that includes a trace cache (160) and attempts to transition fetching from instruction cache (106) to trace cache (160) only on label boundaries are disclosed. In one embodiment, a microprocessor (100) may include an instruction cache (106), a branch prediction unit (132), a trace cache (160), and a prefetch unit (108). The prefetch unit (108) may fetch instructions from the instruction cache (106) until the branch prediction unit (132) outputs a predicted target address for a branch instruction. When the branch prediction unit (132) outputs a predicted target address, the prefetch unit (108) may check for an entry (162) matching the predicted target address in the trace cache (160). If a match is found, the prefetch unit (108) may fetch one or more traces (166) from the trace cache (160) in lieu of fetching instructions from the instruction cache (106).
WHAT IS CLAIMED IS: 1. A microprocessor (100), comprising: an instruction cache (106) configured to store instructions; a branch prediction unit (132); a trace cache (160) configured to store a plurality of traces (166) of instructions; and a prefetch unit (108) coupled to the instruction cache (106), the branch prediction unit (132), and the trace cache (160); wherein the prefetch unit (108) is configured to fetch instructions from the instruction cache (106) until the branch prediction unit (132) outputs a predicted target address; and wherein if the prefetch unit (108) identifies a match for the predicted target address in the trace cache (160), the prefetch unit (108) is configured to fetch one or more of the plurality of traces (166) from the trace cache (160). 2. The microprocessor (100) of claim 1, further comprising a trace generator (170), wherein the trace generator (170) is configured to begin a trace (166) with an instruction corresponding to a label boundary. 3. The microprocessor (100) of claim 2, wherein the trace generator (170) is configured to check the trace cache (160) for a duplicate copy of the trace (166) that the trace generator (170) is constructing. 4. The microprocessor (100) of claim 1, wherein each of the plurality of traces (166) comprises partially-decoded instructions. 5. The microprocessor (100) of claim 1, wherein each of the plurality of traces (166) is associated with a tag (164) comprising the address of an earliest instruction, in program order, stored within that trace (166). 6. The microprocessor (100) of claim 1, wherein each of the plurality of traces (166) is associated with a flow control field (168) comprising a label for an instruction to which control will pass for each branch operation comprised in that trace (166). 7. A computer system (400), comprising: a system memory (404); and a microprocessor (100) coupled to the system memory (404); characterized in that the microprocessor (100) comprises: an instruction cache (106) configured to store instructions; a branch prediction unit (132); a trace cache (160) configured to store a plurality of traces (166) of instructions; and a prefetch unit (108) coupled to the instruction cache (106), the branch prediction unit (132), and the trace cache (160); wherein the prefetch unit (108) is configured to fetch instructions from the instruction cache (106) until the branch prediction unit (132) outputs a predicted target address; and wherein if the prefetch unit (108) identifies a match for the predicted target address in the trace cache (160), the prefetch unit (108) is configured to fetch one or more of the plurality of traces (166) from the trace cache (160). 8. A method, comprising: fetching instructions from an instruction cache (106); continuing to fetch instructions from the instruction cache (106) until a branch target address is generated; and if a branch target address is generated, searching a trace cache (160) for an entry (162) corresponding to the branch target address. 9. The method of claim 8, further comprising fetching one or more traces (166) from the trace cache (160) if an entry (162) is identified in the trace cache (160) corresponding to the branch target address. 
10. The method of claim 8, further comprising: receiving a retired instruction; starting construction of a new trace (166) if the received instruction is associated with a branch label; and if the previous trace (166) under construction duplicates a trace (166) in the trace cache (160), delaying construction of the new trace (166) until the received instruction corresponds to a branch label.
TITLE: TRANSITIONING FROM INSTRUCTION CACHE TO TRACE CACHE ON LABEL BOUNDARIES Technical Field [0001] This invention is related to the field of microprocessors, and more particularly, to microprocessors having trace caches. Background Art [0002] Instructions processed in a microprocessor are encoded as a sequence of ones and zeros. For some microprocessor architectures, instructions may be encoded with a fixed length, such as a certain number of bytes. For other architectures, such as the x86 architecture, the length of instructions may vary. The x86 microprocessor architecture specifies a variable length instruction set (i.e., an instruction set in which various instructions are each specified by differing numbers of bytes). For example, the 80386 and later versions of x86 microprocessors employ between 1 and 15 bytes to specify a particular instruction. Instructions have an opcode, which may be 1-2 bytes, and additional bytes may be added to specify addressing modes, operands, and additional details regarding the instruction to be executed. [0003] In some microprocessor architectures, each instruction may be decoded into one or more simpler operations prior to execution. Decoding an instruction may also involve accessing a register renaming map in order to determine the physical register to which each logical register in the instruction maps and/or to allocate a physical register to store the result of the instruction. [0004] Typically, instructions are fetched from system memory into instruction cache in contiguous blocks. The instructions included in these blocks are stored in the instruction cache in compiled order. During program execution, instructions are often executed in a different order, such as when a branch is taken within the code. In such cases, the instructions following the taken branch cannot generally be fetched from the instruction cache during the same cycle as the branch instruction because they are stored in non-contiguous locations. To attempt to overcome this instruction fetch bandwidth limitation, many superscalar microprocessors incorporate a trace cache. [0005] Trace cache differs from instruction cache in that instructions stored in trace cache are typically stored in execution order as opposed to compiled order. Storing operations in execution order allows an instruction sequence containing a taken branch operation to be accessed during a single cycle from trace cache, whereas accessing the same sequence from instruction cache would require several cycles. [0006] Superscalar microprocessors typically decode multiple instructions per clock cycle. The amount of hardware needed to match the addresses of each instruction within a group being decoded with the starting addresses of traces in the trace cache may be prohibitive. This may greatly increase the difficulty of determining a hit in the trace cache in some cases. DISCLOSURE OF INVENTION [0007] Various embodiments of methods and systems for implementing a microprocessor that includes a trace cache and attempts to transition fetching from instruction cache to trace cache only on label boundaries are disclosed. In one embodiment, a microprocessor may include an instruction cache, a branch prediction unit, a trace cache, and a prefetch unit. The prefetch unit may fetch instructions from the instruction cache until the branch prediction unit outputs a predicted target address for a branch instruction. 
When the branch prediction unit outputs a predicted target address, the prefetch unit may check for an entry matching the predicted target address in the trace cache. If a match is found, the prefetch unit may fetch one or more traces from the trace cache in lieu of fetching instructions from the instruction cache. [0008] The branch prediction unit may output a predicted target address when it encounters a branch instruction for which the branch is predicted to be taken. For example, this would be the case for any unconditional branch instruction or any conditional branch instruction for which the branch condition is predicted to be satisfied. The branch prediction unit may also output a predicted target address when any component of the microprocessor discovers that a branch misprediction has occurred. When a conditional branch instruction has entered the execution pipeline, a functional unit may evaluate the associated branch condition when the necessary data is valid. In some instances this evaluation may cause the branch to be taken even though it was predicted to be not taken when the instruction was fetched. The converse situation may occur as well, and either case may result in a branch misprediction that may cause the branch prediction unit to output a branch target address. [0009] The microprocessor may also include a trace generator. In some embodiments, the trace generator may construct traces from instructions that have been executed and retired. In other embodiments, the trace generator may construct traces from decoded or partially decoded instructions prior to execution. In some embodiments, a trace may be associated with a tag, which includes the address of the earliest instruction, in program order, stored within the trace. The trace may also include a flow control field that includes a label for an instruction to which control will pass for each branch instruction included in the trace. [0010] The trace generator may wait until it receives an instruction corresponding to a branch target address before beginning the construction of a new trace. Once the construction of a trace has commenced, the trace generator may check the trace cache for a duplicate copy of the trace and, if such a copy is found, the trace generator may discard the trace under construction. In some embodiments, when the trace generator identifies a duplicate copy of the trace under construction in trace cache, it may check the trace cache for an entry corresponding to the next trace to be generated and, if such an entry is found, the trace generator may discard the trace under construction. BRIEF DESCRIPTION OF DRAWINGS [0011] A better understanding of the present invention can be obtained when the following detailed description is considered in conjunction with the following drawings, in which: [0012] FIG. 1 shows a microprocessor incorporating a trace cache, according to one embodiment. [0013] FIG. 2 illustrates an exemplary trace cache entry, according to one embodiment. [0014] FIG. 3 is a flowchart for a method for fetching instructions from an instruction cache or traces from a trace cache, according to one embodiment. [0015] FIG. 4 is a flowchart for a method for constructing traces, according to one embodiment. [0016] FIG. 5 shows one embodiment of a computer system. [0017] FIG. 6 shows another embodiment of a computer system. 
[0018] While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention as defined by the appended claims. Note, the headings are for organizational purposes only and are not meant to be used to limit or interpret the description or claims. Furthermore, note that the word "may" is used throughout this application in a permissive sense (i.e., having the potential to, being able to), not a mandatory sense (i.e., must). The term "include" and derivations thereof mean "including, but not limited to." The term "connected" means "directly or indirectly connected," and the term "coupled" means "directly or indirectly coupled." MODE(S) FOR CARRYING OUT THE INVENTION [0019] FIG. 1 is a block diagram of logical components included in one embodiment of a microprocessor 100 which transitions from instruction cache 106 to trace cache 160 on label boundaries. Microprocessor 100 is configured to execute instructions stored in a system memory 200. Many of these instructions operate on data stored in system memory 200. Note that system memory 200 may be physically distributed throughout a computer system and may be accessed by one or more microprocessors 100. In some embodiments, the microprocessor 100 may be designed to be compatible with the x86 architecture. Note that microprocessor 100 may also include and/or be coupled to many other components in addition to those shown here. For example, additional levels of cache may be included (internal and/or external to microprocessor 100) between microprocessor 100 and system memory 200. Similarly, microprocessor 100 may include a memory controller configured to control system memory 200 in some embodiments. Additionally, the interconnections between logical components may vary between embodiments. [0020] Microprocessor 100 may include an instruction cache 106 and a data cache 128. Microprocessor 100 may include a prefetch unit 108 coupled to the system memory 200. Prefetch unit 108 may prefetch instruction code from the system memory 200 for storage within instruction cache 106. In one embodiment, prefetch unit 108 may be configured to burst code from the system memory 200 into instruction cache 106. Prefetch unit 108 may employ a variety of specific code prefetching techniques and algorithms. Prefetch unit 108 may also fetch instructions from instruction cache 106 and traces from trace cache 160 into dispatch unit 104. Instructions may be fetched from instruction cache 106 in response to a given instruction address missing in trace cache 160. Likewise, instructions may be fetched from system memory 200 in response to a given address missing in instruction cache 106. [0021] A dispatch unit 104 may be configured to receive instructions from instruction cache 106 and to receive decoded and/or partially decoded operations from trace cache 160. The dispatch unit 104 may include a decode unit 140 to decode instructions received from instruction cache 106. The dispatch unit 104 may also include a microcode unit for use when handling microcoded instructions. 
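The two supply paths into dispatch unit 104 can be caricatured in a few lines of Python. This is a behavioral sketch only; the decode placeholder and the string tags are illustrative assumptions, not details of the described hardware:

    def dispatch(fetch_source: str, payload) -> list:
        # Trace cache entries already hold decoded or partially decoded
        # operations, so the decode step is largely bypassed for them;
        # bytes fetched from the instruction cache must be decoded first.
        if fetch_source == "trace_cache":
            return list(payload)
        return decode(payload)

    def decode(raw_bytes) -> list:
        # Placeholder: a real decode unit cracks variable-length x86
        # instructions into one or more internal operations.
        return [("op", byte) for byte in raw_bytes]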
[0022] The dispatch unit 104 is configured to dispatch operations to scheduler(s) 118. One or more schedulers 118 may be coupled to receive dispatched operations from dispatch unit 104 and to issue operations to one or more execution cores 124. Execution core(s) 124 may include a load/store unit 126 configured to perform accesses to data cache 128. Results generated by execution core(s) 124 may be output to a result bus 130. These results may be used as operand values for subsequently issued instructions and/or stored to register file 116. A retire queue 102 may be coupled to scheduler(s) 118 and dispatch unit 104. The retire queue may be configured to determine when each issued operation may be retired. [0023] Instruction cache 106 may temporarily store instructions prior to their receipt by dispatch unit 104. Instruction code may be provided to instruction cache 106 by prefetching code from the system memory 200 through prefetch unit 108. Instruction cache 106 may be implemented in various configurations (e.g., set-associative, fully-associative, or direct-mapped). [0024] Dispatch unit 104 may output signals including bit-encoded operations executable by the execution core(s) 124 as well as operand address information, immediate data and/or displacement data. Decode unit 140 may be used to decode certain instructions into one or more operations executable within execution core(s) 124. Simple instructions may correspond to a single operation. More complex instructions may correspond to multiple operations. Upon receiving an operation that involves the update of a register, the dispatch unit 104 may reserve a register location within register file 116 to store speculative register states (in an alternative embodiment, a reorder buffer may be used to store one or more speculative register states for each register). A register map may translate logical register names of source and destination operands to physical register names in order to facilitate register renaming. Such a register map may track which registers within register file 116 are currently allocated and unallocated. [0025] When operations are handled by dispatch unit 104, if a required operand is a register location, register address information may be routed to a register map or a reorder buffer. For example, in the x86 architecture, there are eight 32-bit logical registers (e.g., EAX, EBX, ECX, EDX, EBP, ESI, EDI and ESP). Physical register file 116 (or a reorder buffer) includes storage for results that change the contents of these logical registers, allowing out of order execution. A physical register in register file 116 may be allocated to store the result of each operation that modifies the contents of one of the logical registers. Therefore, at various points during execution of a particular program, register file 116 (or, in alternative embodiments, a reorder buffer) may have one or more registers that contain the speculatively executed contents of a given logical register. [0026] A register map may assign a physical register to a particular logical register specified as a destination operand for an operation. Register file 116 may have one or more previously allocated physical registers assigned to a logical register specified as a source operand in a given operation. The register map may provide a tag for the physical register most recently assigned to that logical register. 
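A toy model of such a register map follows; it is a minimal Python sketch of the renaming bookkeeping described in the two preceding paragraphs, with a free list and class names that are assumptions for illustration (freeing of physical registers at retirement is omitted):

    class RegisterMap:
        # Maps each logical register to the physical register most
        # recently allocated to hold its (speculative) value.
        def __init__(self, num_physical: int):
            self.free = list(range(num_physical))  # unallocated physical regs
            self.current = {}                      # logical name -> physical tag

        def rename_dest(self, logical: str) -> int:
            # Allocate a fresh physical register for a new result.
            tag = self.free.pop(0)
            self.current[logical] = tag
            return tag

        def source_tag(self, logical: str) -> int:
            # Tag of the most recent physical register for a source operand.
            return self.current[logical]

    rmap = RegisterMap(64)
    dest = rmap.rename_dest("EAX")   # an operation writing EAX gets a new tag
    src = rmap.source_tag("EAX")     # a later reader of EAX sees the same tag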
This tag may be used to access the operand's data value in the register file 116 or to receive the data value via result forwarding on the result bus 130. If the operand corresponds to a memory location, the operand value may be provided on the result bus (for result forwarding and/or storage in register file 116) through load/store unit 126. Operand data values may be provided to execution core(s) 124 when the operation is issued by one of the scheduler(s) 118. Note that in alternative embodiments, operand values may be provided to a corresponding scheduler 118 when an operation is dispatched (instead of being provided to a corresponding execution core 124 when the operation is issued). [0027] The microprocessor 100 of FIG. 1 supports out of order execution. A retire queue 102 (or, alternatively, a reorder buffer) may keep track of the original program sequence for register read and write operations, allow for speculative instruction execution and branch misprediction recovery, and facilitate precise exceptions. In many embodiments, retire queue 102 may function similarly to a reorder buffer. However, unlike a typical reorder buffer, retire queue 102 may not provide any data value storage. In alternative embodiments, retire queue 102 may function more like a reorder buffer and also support register renaming by providing data value storage for speculative register states. In some embodiments, retire queue 102 may be implemented in a first-in, first-out configuration in which operations move to the "bottom" of the buffer as they are validated, thus making room for new entries at the "top" of the queue. As operations are retired, retire queue 102 may deallocate registers in register file 116 that are no longer needed to store speculative register states and provide signals to a register map indicating which registers are currently free. By maintaining speculative register states within register file 116 (or, in alternative embodiments, within a reorder buffer) until the operations that generated those states are validated, the results of speculatively-executed operations along a mispredicted path may be invalidated in the register file 116 if a branch prediction is incorrect. [0028] Retire queue 102 may also provide signals identifying program traces to trace generator 170. Trace generator 170 may also be described as a fill unit. Trace generator 170 may store traces identified by retire queue 102 into trace cache 160. Each trace may include operations that are part of several different basic blocks. A basic block may be defined as a set of consecutive instructions, wherein if any one of the instructions in a basic block is executed, all of the instructions in that basic block will be executed. One type of basic block may be a set of instructions that begins just after a branch instruction and ends with another branch operation. In some embodiments, the traces stored into trace cache 160 may include several decoded or partially decoded instructions. Decoded or partially decoded instructions may be referred to as operations. As used herein, a "trace" is a group of instructions or operations that are stored within a single trace cache entry in the trace cache 160. [0029] Prefetch unit 108 may fetch operations from trace cache 160 into dispatch unit 104. In some embodiments, traces may be constructed from decoded or partially decoded instructions from retire queue 102. 
When such traces are fetched from the trace cache, the decode unit 140 may be at least partially bypassed, resulting in a decreased number of dispatch cycles for the trace cached operations. Accordingly, the trace cache 160 may allow the dispatch unit 104 to amortize the time taken to partially (or fully) decode the cached operations in decode unit 140 over several execution iterations if traces are executed more than once. [0030] The bit-encoded operations and immediate data provided at the outputs of dispatch unit 104 may be routed to one or more schedulers 118. Note that as used herein, a scheduler is a device that detects when operations are ready for execution and issues ready operations to one or more execution units. For example, a reservation station is a scheduler. Each scheduler 118 may be capable of holding operation information (e.g., bit-encoded execution bits as well as operand values, operand tags, and/or immediate data) for several pending operations awaiting issue to an execution core 124. In some embodiments, each scheduler 118 may not provide operand value storage. Instead, each scheduler may monitor issued operations and results available in register file 116 in order to determine when operand values will be available to be read by execution core(s) 124 (from register file 116 or result bus 130). In some embodiments, each scheduler 118 may be associated with a dedicated execution core 124. In other embodiments, a single scheduler 118 may issue operations to more than one of the execution core(s) 124. [0031] Schedulers 118 may be provided to temporarily store operation information to be executed by the execution core(s) 124. As stated previously, each scheduler 118 may store operation information for pending operations. Additionally, each scheduler may store operation information for operations that have already been executed but may still reissue. Operations are issued to execution core(s) 124 for execution in response to the values of any required operand(s) being made available in time for execution. Accordingly, the order in which operations are executed may not be the same as the order of the original program instruction sequence. [0032] In one embodiment, each of the execution core(s) 124 may include components configured to perform integer arithmetic operations of addition and subtraction, as well as shifts, rotates, logical operations, and branch operations. A floating point unit may also be included to accommodate floating point operations. One or more of the execution core(s) 124 may be configured to perform address generation for load and store memory operations to be performed by load/store unit 126. [0033] The execution core(s) 124 may also provide information regarding the execution of conditional branch instructions to branch prediction unit 132. If information from the execution core 124 indicates that a branch prediction is incorrect, the branch prediction unit 132 may flush instructions subsequent to the mispredicted branch that have entered the instruction processing pipeline and redirect prefetch unit 108. The redirected prefetch unit 108 may then begin fetching the correct set of instructions from instruction cache 106, trace cache 160, and/or system memory 200. 
In such situations, the results of instructions in the original program sequence that occurred after the mispredicted branch instruction may be discarded, including those which were speculatively executed and temporarily stored in load/store unit 126 and/or register file 116. [0034] Results produced by components within execution core(s) 124 may be output on the result bus 130 to the register file 116 if a register value is being updated. If the contents of a memory location are being changed, the results produced within execution core(s) 124 may be provided to the load/store unit 126. Trace Cache [0035] Trace generator 170 may be configured to receive basic blocks of retired operations from retire queue 102 and to store those basic blocks within traces in trace cache 160. Note that in alternative embodiments, trace generator 170 may be coupled to the front-end of the microprocessor (e.g., before or after the dispatch unit) and configured to generate traces from basic blocks detected within the pipeline at that point within the microprocessor. During trace construction, trace generator 170 may perform transformations on basic blocks of operations received from retire queue 102 to form traces. In some embodiments, these transformations may include reordering of operations and elimination of operations. [0036] FIG. 2 illustrates one embodiment of trace cache 160 along with some components of microprocessor 100 which are coupled to and/or interact with the trace cache. Trace cache 160 may include several trace cache entries 162. Each trace cache entry 162 may store a group of operations referred to as a trace 166. In addition to trace 166, each trace cache entry 162 may also include an identifying tag 164 and flow control (F.C.) information 168. Trace cache entry 162 may include a flow control field 168 for each branch included in the trace. Each flow control field 168 may include address information to determine which instruction is to be executed next for the cases that the branch is taken and not taken. For example, flow control field 168A may correspond to a first branch instruction included in trace cache entry 162. This first branch may be conditional, and flow control field 168A may contain two addresses. One of the addresses may be the address of the instruction to be executed after the branch instruction in the case that the condition is true. The other address may indicate the instruction to be executed next in the event that the branch condition is false. Flow control field 168B may correspond to a second branch instruction included in trace cache entry 162. This branch may be unconditional, and therefore, flow control field 168B may include only the address of the instruction to which control flow should pass under all circumstances. [0037] Tag 164 may be similar to a tag in instruction cache 106, allowing prefetch unit 108 to determine whether a given operation hits or misses in trace cache 160. For example, tag 164 may include all or some of the address bits identifying an operation within the trace cache entry (e.g., the tag may include the address of the earliest operation, in program order, stored within that trace). In some embodiments, the tag may include enough information so that some operations may be independently addressable within a trace. For example, the first operation within each basic block may be addressable through information stored in the tag. In other embodiments, only the first operation within a trace may be addressable. 
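The entry layout described in the two preceding paragraphs can be summarized with a small Python sketch. The field names and types are assumptions chosen to mirror tag 164, trace 166, and flow control information 168, not a definitive encoding:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class FlowControl:
        # Address executed next if the branch is taken.
        taken_target: int
        # Second address for conditional branches; None for unconditional ones.
        not_taken_target: Optional[int] = None

    @dataclass
    class TraceCacheEntry:
        tag: int                  # address of the earliest operation, in program order
        operations: List[object]  # the trace itself (a group of decoded operations)
        flow_control: List[FlowControl] = field(default_factory=list)  # one per branch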
[0038] In some embodiments, flow control information 168 may include a label for each branch operation included within the trace. The label may be an indication identifying the address to which control should branch. For example, a section of assembly language code may include a branch instruction to transfer control of the flow of execution to an instruction other than the instruction that immediately follows the branch in the order the code is written. As a convenience to the coder, some compilers may allow for the inclusion of one or more alpha-numeric symbols with the branch instruction. This label may also be included in the assembly code immediately preceding the instruction targeted by the branch instruction. During compilation of the assembly code, the compiler may determine the address of the instruction targeted by the branch instruction and may substitute this address for the alpha-numeric symbols included with the branch instruction, and the address of the targeted instruction now may become the label. In other embodiments, labels may be used to identify any basic block of instructions. A label boundary, then, may be any point in the code at which the flow of control is transferred to an instruction whose address is a label. The creation of traces and attempts to hit in trace cache may occur with the execution of instructions at label boundaries. [0039] In many implementations, a trace cache entry 162 may include multiple branch instructions and multiple flow control fields 168. Each field of flow control information 168 may be associated with a particular branch operation. For example, in one embodiment, one flow control information storage location 168A within a trace may be associated with the first branch operation in the trace and the other flow control information storage location 168B may be associated with the second branch in the trace. Alternatively, the flow control information may include tags or other information identifying the branch operation with which that flow control information is associated. In yet other embodiments, a branch prediction and/or information identifying which flow control information corresponds to a branch operation may be stored with that branch operation within operation storage 166. Instruction/Trace Fetching [0040] Prefetch unit 108 may fetch a line of instructions from memory 200 and store the line in instruction cache 106. Instructions may be stored in instruction cache 106 in compiled order. Depending on run-time conditions, the execution order for instructions in instruction cache 106 may frequently vary from their compiled order. For example, the execution of a branch instruction from instruction cache 106 may cause the flow of control to jump to an instruction that is separated from the branch instruction by many intervening instructions according to compiled order. The target of the branch instruction may not be resident in instruction cache 106. This may cause prefetch unit 108 to fetch another line of instructions from system memory 200. During the time in which the next line of instructions is being loaded into instruction cache, execution cores 124 may be idle, waiting for the next operations. [0041] In some embodiments, prefetch unit 108 may use a portion of the branch target address to index into trace cache 160. If a valid trace cache entry 162 exists at the indexed location, the prefetch unit may compare tag field 164 with the branch target address. 
If the tag matches the target address, then prefetch unit 108 may fetch trace 166 to dispatch unit 104 for execution. Dependent upon information received from the execution cores and/or the branch prediction unit, prefetch unit 108 may continue to fetch traces from trace cache 160 to dispatch unit 104 until no entry can be found whose tag field corresponds to the address of the next instruction to be executed. Prefetch unit 108 may then resume fetching instructions from instruction cache 106. [0042] FIG. 3 is a flowchart for a method for fetching instructions from an instruction cache or traces from a trace cache, according to one embodiment. As shown in block 301, one or more instructions may be fetched from the instruction cache. In some cases, the processing of the fetched instructions may not result in the generation of a branch target address. For example, this may be true in cases where no branch operations are decoded from the instructions or decoded branch operations are not taken. Under such circumstances, as shown at 303, instruction fetching from the instruction cache will continue. [0043] In other cases, the processing of the fetched instructions may result in the generation of a branch target address. For example, if the condition for a conditional branch is predicted to be satisfied, if an unconditional branch is encountered, or if a branch target misprediction occurs, a branch target address may be generated. In these cases, a search of the trace cache may be performed. A portion of the generated branch target address may be used to index into the trace cache, and if a valid entry is stored at the corresponding location, the tag field of the entry may be compared to another portion of the branch target address, as shown at 307. If a match is made, the prefetch unit may fetch the trace from the corresponding entry in the trace cache to the dispatch unit, as shown at 309. The prefetch unit may continue to fetch traces until it encounters an address that misses in the trace cache. At this point, fetching may continue from the instruction cache. Trace Construction [0044] As stated previously, the fetching of a branch instruction from the instruction cache for which the branch is predicted to be taken may result in the prefetch unit initiating the fetch of the line that includes the branch target instruction. This may result in significant delay in providing instructions to the dispatch unit, particularly when the line storing the branch target instruction is not resident in the instruction cache. [0045] Upon retirement of the branch and subsequent instructions, trace generator 170 may construct a trace that spans the branch label boundary. Even though the branch and target instructions may have been stored in different lines in the instruction cache, they may be retired coincidently, and trace generator 170 may construct a trace that includes the operations corresponding to both instructions. [0046] If the portion of code including the branch instruction is subsequently traversed again, the prefetch unit may fetch the corresponding trace from trace cache 160 rather than fetching the instructions from instruction cache 106. Since the operations targeted by the branch instruction are already incorporated into the trace, the trace may be executed significantly faster than executing the parent instructions from instruction cache 106. 
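The hit test of blocks 301 through 309 of FIG. 3, together with the fall-back to the instruction cache, might be sketched in Python as follows. The set count, the modulo indexing, and the dictionary-based trace cache are assumptions for illustration; TraceCacheEntry is the sketch type introduced above:

    def index_bits(addr: int, sets: int = 256) -> int:
        # Use a portion of the branch target address to index the trace cache.
        return addr % sets

    def select_fetch_source(trace_cache: dict, target_addr):
        # No branch target generated: keep fetching from the instruction
        # cache (block 303).
        if target_addr is None:
            return ("instruction_cache", None)
        # Probe the indexed entry and compare its tag against the target
        # (block 307); on a match, fetch the trace (block 309).
        entry = trace_cache.get(index_bits(target_addr))
        if entry is not None and entry.tag == target_addr:
            return ("trace_cache", entry)
        # Miss: resume fetching from the instruction cache at the target.
        return ("instruction_cache", target_addr)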
[0047] The increase in microprocessor performance gained by fetching from trace cache 160 rather than instruction cache 106 may be proportional to the length of the trace (the number of operations the trace includes). Therefore, it may be desirable to construct traces that contain as many operations as possible. In some embodiments, trace generator 170 may construct traces from retired operations that are fetched from instruction cache 106. When the prefetch unit switches from fetching instructions from the instruction cache to fetching traces from the trace cache, the trace generator may terminate trace construction. Therefore, it may be desirable to limit the rate at which fetching switches from instruction cache to trace cache. This may be done by limiting the attempts by the prefetch unit to hit in trace cache to label boundaries. [0048] The length of traces constructed by trace generator 170 may be inversely proportional to the frequency with which prefetch unit 108 attempts to hit in trace cache 160. For example, if an attempt is made to hit in the trace cache for each instruction fetched from instruction cache, the prefetch unit may frequently identify corresponding traces and switch from fetching instructions to fetching traces. The trace generator may end trace construction and produce a trace that includes those operations retired since the previous switch. If the previous hit in trace cache occurred within a few instructions, then the number of operations retired in the interim will be small as well, resulting in the production of a trace including a small number of operations. [0049] The fetching and execution of a short trace formed as described above may result in a further increase in the frequency with which the prefetch unit 108 switches between instruction and trace caches. For example, when the trace generation unit 170 terminates the construction of a trace due to a fetching switch from instruction cache to trace cache, some operations that could have been incorporated into the terminated trace may not be, perhaps because they had not retired prior to the time of the switch. The execution of the prematurely terminated trace may result in a switch from the trace cache to the instruction cache in order to fetch the missing instructions. The execution of short traces may be of little benefit in terms of improving microprocessor efficiency as compared to executing the parent instructions from instruction cache. [0050] In some embodiments, the prefetch unit 108 may delay an attempt to hit in the trace cache until the branch prediction unit 132 generates the address of a target instruction. The fetch of a branch instruction from instruction cache may cause the branch prediction unit 132 to predict whether the branch will be taken or not taken when executed. If the prediction is that the branch will be taken, then the branch prediction unit may generate the address of the instruction that is targeted by the branch instruction. The branch prediction unit 132 may also generate the address of the next instruction to be executed after a branch instruction in the case where a branch mispredict occurs. 
For example, if a conditional branch instruction is fetched and the branch prediction unit 132 predicts that the branch will be taken, but upon resolution of the condition it is determined that the branch should not be taken, the prefetch unit 108 may use the pre-generated address of the next instruction following the conditional branch in compiled order as the address of the next instruction to be fetched. By delaying the attempt to hit in trace cache 160 until the target address for a branch predicted to be taken or for a branch mispredict is available, longer traces may be generated. [0051] In embodiments where the prefetch unit waits for a label boundary before attempting to hit in the trace cache, the address used for matching may normally be a branch target. As described previously, a fetching switch may be made from the trace cache to the instruction cache at any time in order to fetch instructions missing from the trace cache. Therefore, the stream of retired instructions to the trace generator 170 may begin at any point with regard to label boundaries. In embodiments where attempts to hit in the trace cache are made only on label boundaries, the beginning of trace construction may be delayed to coincide with label boundaries as well. This may ensure that the addresses of the first instructions of traces will be labels. [0052] When the trace generator performs a search of the trace cache, if an existing entry is found which matches the tag of the newly completed trace, the matching entry may be invalidated, the newly completed trace may be discarded, and the trace generator may wait for operations from the next branch boundary to be retired before beginning construction of a new trace. In some embodiments, when the trace generator identifies a duplicate copy of the trace under construction in the trace cache, it may check the trace cache for an entry corresponding to the next trace to be generated, and if such an entry is found, the trace generator may discard the trace under construction. In other embodiments, the trace generation unit may wait until two or more sequentially generated trace entries duplicate existing entries in the trace cache before discarding the traces and delaying the start of new construction until a label boundary is reached. In yet other embodiments, when duplicate existing entries are identified in the trace cache, those entries may be invalidated. [0053] FIG. 4 is a flowchart for a method for constructing traces, according to one embodiment. Block 351 shows an instruction being received. At 353, if a trace or traces duplicating the trace under construction and/or the next trace to be constructed have not been identified in the trace cache, the operations corresponding to the instruction may be used to fill vacant operation positions for a trace, as shown at 355. On the other hand, if a duplicate trace or traces have been identified at decision block 353, the instruction may be checked to determine whether it corresponds to a branch label. If it is determined at decision block 357 that the instruction does not correspond to a branch label, the instruction may be discarded. Instructions may continue to be discarded until an instruction corresponding to a branch label is received. [0054] As illustrated at 357, if one of the received operations is determined to be the first operation at a branch label, then the filling of operation positions in a new trace may commence, as indicated at 359. Block 361 indicates that when a trace is completed, the trace cache may be searched, as shown at 363, to identify corresponding entries. If a matching entry is identified, the just-completed trace may be discarded, as shown at 367. If no duplicate entry is found at block 363, the new trace may be stored in a trace cache entry. In some embodiments, the duplicate entry may not be discarded until several successive duplicate entries are found.
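The duplicate-handling policy of paragraph [0052] and blocks 361-367 can be illustrated with a rough sketch. The direct-mapped cache, the types, and the on_trace_complete() helper are assumptions made for this example only.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_ENTRIES 512u   /* assumed trace cache size */

typedef struct { bool valid; uint32_t tag; } trace_entry_t;
typedef struct { uint32_t tag; /* plus operation slots */ } trace_t;

static trace_entry_t trace_cache[NUM_ENTRIES];

/* Direct-mapped in this sketch: the low tag bits double as the index. */
static trace_entry_t *find_duplicate(uint32_t tag)
{
    trace_entry_t *e = &trace_cache[tag % NUM_ENTRIES];
    return (e->valid && e->tag == tag) ? e : NULL;
}

/* On completion, a duplicate invalidates the existing entry and the new
 * trace is discarded (the caller then waits for the next label boundary);
 * otherwise the new trace is stored. Returns true if the trace was kept. */
bool on_trace_complete(const trace_t *t)
{
    trace_entry_t *dup = find_duplicate(t->tag);
    if (dup != NULL) {
        dup->valid = false;                 /* invalidate matching entry */
        return false;                       /* discard completed trace   */
    }
    trace_cache[t->tag % NUM_ENTRIES] = (trace_entry_t){ true, t->tag };
    return true;
}
```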
Exemplary Computer Systems

[0055] FIG. 5 shows a block diagram of one embodiment of a computer system 400 that includes a microprocessor 100 coupled to a variety of system components through a bus bridge 402. Microprocessor 100 may include an embodiment of a trace cache generator 170 as described above. Other embodiments of a computer system are possible and contemplated. In the depicted system, a main memory 200 is coupled to bus bridge 402 through a memory bus 406, and a graphics controller 408 is coupled to bus bridge 402 through an AGP bus 410. Several PCI devices 412A-412B are coupled to bus bridge 402 through a PCI bus 414. A secondary bus bridge 416 may also be provided to accommodate an electrical interface to one or more EISA or ISA devices 418 through an EISA/ISA bus 420. In this example, microprocessor 100 is coupled to bus bridge 402 through a CPU bus 424 and to an optional L2 cache 428. In some embodiments, the microprocessor 100 may include an integrated L1 cache (not shown). [0056] Bus bridge 402 provides an interface between microprocessor 100, main memory 200, graphics controller 408, and devices attached to PCI bus 414. When an operation is received from one of the devices connected to bus bridge 402, bus bridge 402 identifies the target of the operation (e.g., a particular device or, in the case of PCI bus 414, that the target is on PCI bus 414). Bus bridge 402 routes the operation to the targeted device. Bus bridge 402 generally translates an operation from the protocol used by the source device or bus to the protocol used by the target device or bus. [0057] In addition to providing an interface to an ISA/EISA bus for PCI bus 414, secondary bus bridge 416 may incorporate additional functionality. An input/output controller (not shown), either external from or integrated with secondary bus bridge 416, may also be included within computer system 400 to provide operational support for a keyboard and mouse 422 and for various serial and parallel ports. An external cache unit (not shown) may also be coupled to CPU bus 424 between microprocessor 100 and bus bridge 402 in other embodiments. Alternatively, the external cache may be coupled to bus bridge 402 and cache control logic for the external cache may be integrated into bus bridge 402. L2 cache 428 is shown in a backside configuration to microprocessor 100. It is noted that L2 cache 428 may be separate from microprocessor 100, integrated into a cartridge (e.g., slot 1 or slot A) with microprocessor 100, or even integrated onto a semiconductor substrate with microprocessor 100. [0058] Main memory 200 is a memory in which application programs are stored and from which microprocessor 100 primarily executes. A suitable main memory 200 may include DRAM (Dynamic Random Access Memory). For example, a plurality of banks of SDRAM (Synchronous DRAM) or Rambus DRAM (RDRAM) may be suitable.
[0059] PCI devices 412A-412B are illustrative of a variety of peripheral devices such as network interface cards, video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards. Similarly, ISA device 418 is illustrative of various types of peripheral devices, such as a modem, a sound card, and a variety of data acquisition cards such as GPIB or field bus interface cards. [0060] Graphics controller 408 is provided to control the rendering of text and images on a display 426. Graphics controller 408 may embody a typical graphics accelerator generally known in the art to render three-dimensional data structures that can be effectively shifted into and from main memory 200. Graphics controller 408 may therefore be a master of AGP bus 410 in that it can request and receive access to a target interface within bus bridge 402 to thereby obtain access to main memory 200. A dedicated graphics bus accommodates rapid retrieval of data from main memory 200. For certain operations, graphics controller 408 may further be configured to generate PCI protocol transactions on AGP bus 410. The AGP interface of bus bridge 402 may thus include functionality to support both AGP protocol transactions as well as PCI protocol target and initiator transactions. Display 426 is any electronic display upon which an image or text can be presented. A suitable display 426 includes a cathode ray tube ("CRT"), a liquid crystal display ("LCD"), etc. [0061] It is noted that, while the AGP, PCI, and ISA or EISA buses have been used as examples in the above description, any bus architectures may be substituted as desired. It is further noted that computer system 400 may be a multiprocessing computer system including additional microprocessors (e.g., microprocessor 100a shown as an optional component of computer system 400). Microprocessor 100a may be similar to microprocessor 100. More particularly, microprocessor 100a may be an identical copy of microprocessor 100 in one embodiment. Microprocessor 100a may be connected to bus bridge 402 via an independent bus (as shown in FIG. 5) or may share CPU bus 424 with microprocessor 100. Furthermore, microprocessor 100a may be coupled to an optional L2 cache 428a similar to L2 cache 428. [0062] Turning now to FIG. 6, another embodiment of a computer system 400 that may include a trace cache generator 170 as described above is shown. Other embodiments are possible and contemplated. In the embodiment of FIG. 6, computer system 400 includes several processing nodes 612A, 612B, 612C, and 612D. Each processing node is coupled to a respective memory 614A-614D via a memory controller 616A-616D included within each respective processing node 612A-612D. Additionally, processing nodes 612A-612D include interface logic used to communicate between the processing nodes 612A-612D. For example, processing node 612A includes interface logic 618A for communicating with processing node 612B, interface logic 618B for communicating with processing node 612C, and a third interface logic 618C for communicating with yet another processing node (not shown). Similarly, processing node 612B includes interface logic 618D, 618E, and 618F; processing node 612C includes interface logic 618G, 618H, and 618I; and processing node 612D includes interface logic 618J, 618K, and 618L. Processing node 612D is coupled to communicate with a plurality of input/output devices (e.g., devices 620A-620B in a daisy chain configuration) via interface logic 618L.
Other processing nodes may communicate with other I/O devices in a similar fashion. [0063] Processing nodes 612A-612D implement a packet-based link for inter-processing-node communication. In the present embodiment, the link is implemented as sets of unidirectional lines (e.g., lines 624A are used to transmit packets from processing node 612A to processing node 612B and lines 624B are used to transmit packets from processing node 612B to processing node 612A). Other sets of lines 624C-624H are used to transmit packets between other processing nodes as illustrated in FIG. 6. Generally, each set of lines 624 may include one or more data lines, one or more clock lines corresponding to the data lines, and one or more control lines indicating the type of packet being conveyed. The link may be operated in a cache coherent fashion for communication between processing nodes or in a non-coherent fashion for communication between a processing node and an I/O device (or a bus bridge to an I/O bus of conventional construction such as the PCI bus or ISA bus). Furthermore, the link may be operated in a non-coherent fashion using a daisy-chain structure between I/O devices as shown. It is noted that a packet to be transmitted from one processing node to another may pass through one or more intermediate nodes. For example, a packet transmitted by processing node 612A to processing node 612D may pass through either processing node 612B or processing node 612C as shown in FIG. 6. Any suitable routing algorithm may be used. Other embodiments of computer system 400 may include more or fewer processing nodes than the embodiment shown in FIG. 6. [0064] Generally, the packets may be transmitted as one or more bit times on the lines 624 between nodes. A bit time may be the rising or falling edge of the clock signal on the corresponding clock lines. The packets may include command packets for initiating transactions, probe packets for maintaining cache coherency, and response packets for responding to probes and commands. [0065] Processing nodes 612A-612D, in addition to a memory controller and interface logic, may include one or more microprocessors. Broadly speaking, a processing node includes at least one microprocessor and may optionally include a memory controller for communicating with a memory and other logic as desired. More particularly, each processing node 612A-612D may include one or more copies of microprocessor 100. External interface unit 18 may include the interface logic 618 within the node, as well as the memory controller 616. [0066] Memories 614A-614D may include any suitable memory devices. For example, a memory 614A-614D may include one or more RAMBUS DRAMs (RDRAMs), synchronous DRAMs (SDRAMs), static RAM, etc. The address space of computer system 400 is divided among memories 614A-614D. Each processing node 612A-612D may include a memory map used to determine which addresses are mapped to which memories 614A-614D, and hence to which processing node 612A-612D a memory request for a particular address should be routed. In one embodiment, the coherency point for an address within computer system 400 is the memory controller 616A-616D coupled to the memory storing bytes corresponding to the address. In other words, the memory controller 616A-616D is responsible for ensuring that each memory access to the corresponding memory 614A-614D occurs in a cache coherent fashion. Memory controllers 616A-616D may include control circuitry for interfacing to memories 614A-614D. Additionally, memory controllers 616A-616D may include request queues for queuing memory requests. [0067] Interface logic 618A-618L may include a variety of buffers for receiving packets from the link and for buffering packets to be transmitted upon the link. Computer system 400 may employ any suitable flow control mechanism for transmitting packets. For example, in one embodiment, each interface logic 618 stores a count of the number of each type of buffer within the receiver at the other end of the link to which that interface logic is connected. The interface logic does not transmit a packet unless the receiving interface logic has a free buffer to store the packet. As a receiving buffer is freed by routing a packet onward, the receiving interface logic transmits a message to the sending interface logic to indicate that the buffer has been freed. Such a mechanism may be referred to as a "coupon-based" system.
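The "coupon-based" mechanism just described is a form of credit-based flow control, and its transmit-side rule can be sketched as follows. The per-packet-type counters and function names are illustrative assumptions, not taken from the patent.

```c
#include <stdbool.h>

/* One credit counter per buffer type at the receiver end of a link. */
typedef struct {
    int command_credits;    /* free command-packet buffers  */
    int probe_credits;      /* free probe-packet buffers    */
    int response_credits;   /* free response-packet buffers */
} link_credits_t;

/* Transmit rule: send only if the receiver has a free buffer of the
 * right type; consume one "coupon" per packet sent. */
bool try_send_command(link_credits_t *link)
{
    if (link->command_credits == 0)
        return false;              /* no coupon: hold the packet */
    link->command_credits--;
    /* ... drive the command packet onto the unidirectional lines ... */
    return true;
}

/* Receive side, after routing a packet onward, reports the freed buffer
 * back to the sender, which then restores the credit. */
void on_buffer_freed_message(link_credits_t *link)
{
    link->command_credits++;
}
```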
[0068] I/O devices 620A-620B may be any suitable I/O devices. For example, I/O devices 620A-620B may include devices for communicating with another computer system to which the devices may be coupled (e.g., network interface cards or modems). Furthermore, I/O devices 620A-620B may include video accelerators, audio cards, hard or floppy disk drives or drive controllers, SCSI (Small Computer Systems Interface) adapters and telephony cards, sound cards, and a variety of data acquisition cards such as GPIB or field bus interface cards. It is noted that the term "I/O device" and the term "peripheral device" are intended to be synonymous herein. [0069] As used herein, the terms "clock cycle" or "cycle" refer to an interval of time in which the various stages of the instruction processing pipelines complete their tasks. Instructions and computed values are captured by memory elements (such as registers or arrays) according to a clock signal defining the clock cycle. For example, a memory element may capture a value according to the rising or falling edge of the clock signal. [0070] Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Industrial Applicability

This invention may generally be applicable to the field of microprocessors.
Power saving systems and methods for Universal Serial Bus (USB) systems are disclosed. When a USB physical layer (PHY) enters a U3 low power state, not only are normal elements powered down, but also circuitry within the USB PHY associated with detection of a low frequency periodic signal (LFPS) wake up signal is powered down. A low speed reference clock signal is still received by the USB PHY, and a medium speed clock within the USB PHY is activated once per period of the low speed reference clock signal. The medium speed clock activates the signal detection circuitry and samples a line for the LFPS. If no LFPS is detected, the signal detection circuitry and the medium speed clock return to low power until the next period of the low speed reference clock signal. If the LFPS is detected, the USB PHY returns to a U0 active power state.
1. A method for reducing power consumption in a universal serial bus physical layer during the U3 power state, wherein, upon entering the U3 power state, a signal detection circuit is turned off and an intermediate frequency clock is placed in a low power state, the method comprising:
during the U3 power state, receiving a reference clock signal at the universal serial bus physical layer;
during the U3 power state, waking up the intermediate frequency clock in the universal serial bus physical layer when an edge in the reference clock signal is received;
during the U3 power state and based on receiving a clock signal of the intermediate frequency clock, waking up the signal detection circuit; and
using the signal detection circuit to detect a low-frequency periodic signal on a line.

2. The method of claim 1, wherein receiving the reference clock signal comprises receiving a 32 kilohertz (32 kHz) reference clock signal.

3. The method of claim 1, wherein receiving the reference clock signal comprises receiving the reference clock signal from a power management integrated circuit (PMIC) having a crystal oscillator.

4. The method of claim 1, further comprising entering the U3 power state.

5. The method of claim 1, wherein waking up the intermediate frequency clock comprises waking up a frequency locked loop (FLL) clock source.

6. The method of claim 1, wherein waking up the intermediate frequency clock in the universal serial bus physical layer upon receiving an edge in the reference clock signal comprises waking up the intermediate frequency clock on a rising edge.

7. The method of claim 1, wherein waking up the intermediate frequency clock in the universal serial bus physical layer upon receiving an edge in the reference clock signal comprises waking up the intermediate frequency clock on a falling edge.

8. The method of claim 1, further comprising waking the USB physical layer to the U0 active power state when the low-frequency periodic signal is detected.

9. The method of claim 8, wherein waking the USB physical layer to the U0 active power state comprises generating an interrupt at a control system in the USB physical layer and passing the interrupt to a universal serial bus physical layer controller.

10. The method of claim 1, wherein waking up the intermediate frequency clock comprises waking up the intermediate frequency clock for 4 microseconds.

11. The method of claim 10, wherein detecting the low-frequency periodic signal on the line comprises sampling the line for 1 microsecond.
12. The method of claim 10, wherein waking up the intermediate frequency clock comprises allowing the intermediate frequency clock to stabilize during 2 microseconds of the 4 microseconds.

13. A universal serial bus physical layer, comprising:
an input configured to receive a reference clock signal;
a line input configured to receive a low-frequency periodic signal;
an intermediate frequency clock;
a signal detection circuit configured to detect the low-frequency periodic signal on the line input; and
a control system configured to:
turn off the signal detection circuit and place the intermediate frequency clock in a low power state upon entering the U3 power state;
during the U3 low power state, wake up the intermediate frequency clock when an edge in the reference clock signal is received;
during the U3 low power state and based on receiving a clock signal of the intermediate frequency clock, wake up the signal detection circuit; and
receive, from the signal detection circuit, an indication that the low-frequency periodic signal is detected on the line input.

14. The universal serial bus physical layer of claim 13, wherein the intermediate frequency clock comprises a frequency locked loop (FLL) clock source.

15. The universal serial bus physical layer of claim 13, wherein the reference clock signal comprises a 32 kilohertz (32 kHz) clock signal.

16. The universal serial bus physical layer of claim 13, wherein the control system is further configured to, when the low-frequency periodic signal is detected, output an interrupt instructing a universal serial bus physical layer controller to wake the universal serial bus physical layer to the U0 active power state.

17. The universal serial bus physical layer of claim 13, wherein the control system is configured to wake up the intermediate frequency clock for 4 microseconds.

18. The universal serial bus physical layer of claim 17, wherein the signal detection circuit is configured to sample the line input for the low-frequency periodic signal for 1 microsecond after waking up.

19. The universal serial bus physical layer of claim 13, wherein the universal serial bus physical layer is integrated in an integrated circuit (IC).

20. The universal serial bus physical layer of claim 13, wherein the universal serial bus physical layer is integrated in a device selected from the group consisting of: a set-top box; an entertainment unit; a navigation device; a communication device; a fixed location data unit; a mobile location data unit; a mobile phone; a cellular phone; a smart phone; a tablet device; a tablet phone; a server; a computer; a portable computer; a desktop computer; a personal digital assistant (PDA); a monitor; a computer monitor; a television; a tuner; a radio; a satellite radio; a music player; a digital music player; a portable music player; a digital video player; a video player; a digital video disc (DVD) player; a portable digital video player; and an automobile.

21. A universal serial bus physical layer, comprising:
a device for receiving a reference clock signal;
a device for receiving a low-frequency periodic signal;
an intermediate frequency clock;
a device for detecting the low-frequency periodic signal on the device for receiving the low-frequency periodic signal; and
a control system configured to:
turn off the device for detecting the low-frequency periodic signal and place the intermediate frequency clock in a low power state upon entering the U3 power state;
during the U3 low power state, wake up the intermediate frequency clock when an edge in the reference clock signal is received;
during the U3 low power state and based on receiving a clock signal of the intermediate frequency clock, wake up the device for detecting the low-frequency periodic signal; and
receive, from the device for detecting the low-frequency periodic signal, an indication that the low-frequency periodic signal is detected on the device for receiving the low-frequency periodic signal.
Power saving system and method for universal serial bus (USB) systems

Priority Application

This application claims priority to U.S. patent application S/N. 15/150,586, entitled "POWER SAVING SYSTEMS AND METHODS FOR UNIVERSAL SERIAL BUS (USB) SYSTEMS," filed on May 10, 2016, which is hereby incorporated by reference.

Background

I. Field of the Disclosure

The technology of the present disclosure relates generally to power saving techniques during low-power operation of a Universal Serial Bus (USB) physical layer (PHY).

II. Background

Increased functionality allows computing devices to be used in many environments that were never considered when computing devices were first introduced into the commercial market. In addition to increased functionality, the types of computing devices have proliferated. Among the most popular computing devices are battery-powered mobile computing devices (such as smart phones and tablets). With the increase in the number of mobile computing devices, the demand for such devices to perform multiple functions has also increased, such that devices initially conceived as simple cellular phones are now full multimedia phones and entertainment devices capable of Internet access.

As mentioned above, mobile computing devices are usually battery powered. The increase in functionality, and the corresponding use of such functions, results in a corresponding drain on the battery of the mobile computing device. Consumers find it inconvenient when a battery is exhausted and the device powers off, and the computing industry generally promotes the introduction of more efficient batteries while looking for ways to save power to improve battery life. One such power saving technique is to place circuits that are not being actively used in a low power or sleep mode.

A popular standard for implementing device-to-device communication is the Universal Serial Bus (USB) standard. The USB standard defines three low-power states that de-energize progressively more circuitry as periods of non-use exceed certain thresholds. Specifically, the U0 state is considered the generally active state, while U1-U3 reflect low power states, with U3 consuming the least power. In these low power states, necessary and sufficient circuits must remain active to detect wake-up events so as to return the circuit to the normally active U0 state. Although the USB standard provides ample power saving opportunities through the use of low power states, it should be appreciated that, to improve battery life, further improvements in power consumption are always welcome.

Summary of the Disclosure

The aspects disclosed in the detailed description include power saving systems and methods for universal serial bus (USB) systems. In an exemplary aspect, when the USB physical layer (PHY) enters the U3 low power state, not only are the normal components powered off, but the circuit in the USB PHY associated with detection of the low frequency periodic signal (LFPS) wake-up signal (this circuit is sometimes called the signal detection, or sigdet, circuit) is also powered off. The USB PHY still receives a low-speed reference clock signal, and a medium-speed clock in the USB PHY is activated once every cycle of the low-speed reference clock signal. The medium-speed clock activates the signal detection circuit, which samples the line for the LFPS.
If no LFPS is detected, the signal detection circuit and the medium-speed clock return to low power until the next cycle of the low-speed reference clock signal. If the LFPS is detected, the USB PHY returns to the U0 active power state. During the U3 low power state, the signal detection circuit currently consumes more than half of the power used by the USB PHY. By turning off the signal detection circuit for most of the period of the low-speed reference clock signal during the U3 low power state, significant power savings are achieved. In addition, by turning on the signal detection circuit once every cycle of the low-speed reference clock signal, the signal detection circuit has a sufficient amount of time to detect even the shortest LFPS while keeping the latency associated with returning to the U0 active power state at an acceptably low level.

In this regard, in one aspect, a method for reducing power consumption in a USB PHY during the U3 power state is disclosed. The method includes receiving a reference clock signal at the USB PHY during the U3 power state. The method further includes, during the U3 power state, waking up an intermediate frequency clock in the USB PHY when an edge in the reference clock signal is received. Waking up the intermediate frequency clock includes waking up the intermediate frequency clock for about 4 microseconds. Waking up the intermediate frequency clock includes allowing the intermediate frequency clock to stabilize during about 2 microseconds of the about 4 microseconds. The method further includes waking up a signal detection circuit during the U3 power state and based on operation of the intermediate frequency clock. The method also includes using the signal detection circuit to detect the LFPS on a line.

In another aspect, a USB PHY is disclosed. The USB PHY includes an input configured to receive a reference clock signal. The USB PHY also includes a line input configured to receive an LFPS. The USB PHY also includes an intermediate frequency clock. The USB PHY also includes a signal detection circuit configured to detect the LFPS on the line input. The USB PHY also includes a control system. The control system is configured to wake up the intermediate frequency clock when an edge in the reference clock signal is received during the U3 low power state. The control system is configured to wake up the intermediate frequency clock for approximately 4 microseconds. The control system is also configured to wake up the signal detection circuit during the U3 low power state and based on the wake-up of the intermediate frequency clock. The control system is also configured to receive an indication from the signal detection circuit that an LFPS has been detected on the line input.

In another aspect, a USB PHY is disclosed. The USB PHY includes a device for receiving a reference clock signal. The USB PHY also includes a device for receiving an LFPS. The USB PHY also includes an intermediate frequency clock. The USB PHY also includes a device for detecting the LFPS on the device for receiving the LFPS. The USB PHY also includes a control system. The control system is configured to wake up the intermediate frequency clock when an edge in the reference clock signal is received during the U3 low power state. The control system is also configured to wake up the device for detecting the LFPS during the U3 low power state and based on the wake-up of the intermediate frequency clock.
The control system is further configured to receive an indication from the device for detecting the LFPS that an LFPS is detected on the device for receiving the LFPS.

Brief Description of the Drawings

FIG. 1 is a perspective view of an exemplary mobile computing device having a universal serial bus (USB) connector that can incorporate an exemplary power saving technique according to the present disclosure;
FIG. 2 is a simplified block diagram of circuitry in the mobile computing device that cooperates with the USB connector of FIG. 1;
FIG. 3 is a simplified block diagram of the USB physical layer (PHY) associated with the USB connector of FIG. 1;
FIG. 4 is a simplified block diagram of components within the USB PHY of FIG. 3, illustrating an exemplary power saving technique according to the present disclosure;
FIG. 5 is a flowchart of a process associated with an exemplary power saving technique of the present disclosure;
FIG. 6 is a second flowchart illustrating additional steps associated with the exemplary power saving technique of the present disclosure; and
FIG. 7 is a block diagram of an exemplary processor-based system that may include the power saving USB PHY of FIG. 3.

Detailed Description

Referring now to the drawings, several exemplary aspects of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any aspect described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other aspects.

The aspects disclosed in the detailed description include power saving systems and methods for universal serial bus (USB) systems. In an exemplary aspect, when the USB physical layer (PHY) enters the U3 low power state, not only are the normal components powered off, but the circuit in the USB PHY associated with detection of the low-frequency periodic signal (LFPS) wake-up signal (this circuit is sometimes called the signal detection, or sigdet, circuit) is also powered off. The USB PHY still receives a low-speed reference clock signal, and a medium-speed clock in the USB PHY is activated once every cycle of the low-speed reference clock signal. The medium-speed clock activates the signal detection circuit, which samples the line for the LFPS. If no LFPS is detected, the signal detection circuit and the medium-speed clock return to low power until the next cycle of the low-speed reference clock signal. If the LFPS is detected, the USB PHY returns to the U0 active power state. During the U3 low power state, the signal detection circuit currently consumes more than half of the power used by the USB PHY. By turning off the signal detection circuit for most of the cycle of the low-speed reference clock signal during the U3 low power state, significant power savings are achieved. In addition, by turning on the signal detection circuit once every cycle of the low-speed reference clock signal, the signal detection circuit has a sufficient amount of time to detect even the shortest LFPS while keeping the latency associated with returning to the U0 active power state at an acceptably low level.

In this regard, FIG. 1 is a perspective view of an exemplary mobile computing device 100. The mobile computing device 100 may be a cellular phone, a smart phone, a tablet, a laptop computer, or the like. As illustrated, the mobile computing device 100 is a smart phone with a housing 102 having a touch screen display 104 and command buttons 106 that form a user interface of the mobile computing device 100.
Although not specifically illustrated, it should be appreciated that the mobile computing device 100 may include one or more speakers and one or more microphones as part of the user interface. In addition to the user interface, the housing 102 supports a USB socket 108. In an exemplary aspect, the USB socket 108 may be a micro-A or micro-B socket configured to support USB 3.0 or USB 3.1. As mentioned elsewhere, the mobile computing device 100 can operate in an active power mode and a sleep or low power mode. In addition, even when parts of the mobile computing device 100 are active, other parts may be put into a sleep or low power mode based on operational requirements and usage. In an exemplary aspect of the present disclosure, the USB PHY associated with the USB socket 108 may be placed in one of multiple low power modes.

Specifically, the USB standard defines U0 as an active power state and U1-U3 as low power states, in which progressively more components are powered down as the USB PHY transitions from U1 to U3. In a conventional system, when the USB PHY is in the U3 state, the USB PHY maintains power to the signal detection circuit that detects the low-frequency periodic signal (LFPS) on the SuperSpeed channel of the USB bus. When the signal detection circuit detects the LFPS, the signal detection circuit wakes up the USB PHY and initiates the process of returning the USB PHY to the U0 active state. In the context of the U3 state, the signal detection circuit consumes a relatively large amount of power. In some cases, more than half of the power consumed in the U3 state is consumed by the signal detection circuit. The exemplary aspects of the present disclosure allow the signal detection circuit to be powered down for most of the time that the USB PHY spends in the U3 state. In addition, the intermediate frequency clock inside the USB PHY can also be powered down to provide further power savings. These components are selectively activated by a low-frequency clock signal, allowing the SuperSpeed lines to be sampled to detect the LFPS. If no LFPS is detected, both components return to the reduced power state until the next sample instance. Therefore, significant power savings can be achieved.

In this regard, FIG. 2 illustrates a simplified block diagram of elements within the mobile computing device 100 of FIG. 1. Specifically, FIG. 2 illustrates a circuit board 200 with an integrated circuit (IC) and the USB socket 108 located thereon. The USB socket 108 is coupled to a system on chip (SoC) 202, and the SoC 202 is in turn coupled to a power management IC (PMIC) 204. The PMIC 204 includes a crystal oscillator 206 that generates a low-frequency clock signal 208, and the low-frequency clock signal 208 is provided to the SoC 202. In an exemplary aspect, the low-frequency clock signal 208 is a 32 kilohertz (32 kHz) clock signal. In some alternative aspects, the PMIC 204 may be incorporated into the SoC 202.

With continued reference to FIG. 2, the SoC 202 may include a control system 210 and a USB PHY 212. In an exemplary aspect, the control system 210 controls the USB PHY 212 and may be referred to as a USB PHY controller or a MAC controller. The control system 210 may be coupled to the USB PHY 212 through a system-on-chip network (SNoC) (not illustrated) or other connections as needed or desired. In addition, it should be appreciated that the low-frequency clock signal 208 may be passed to the control system 210 or the USB PHY 212, or both.
It should be appreciated that since the crystal oscillator 206 is external to the USB PHY 212, the low-frequency clock signal 208 may sometimes be referred to as an external clock signal. In addition, power state changes of the USB PHY 212 do not affect the crystal oscillator 206; therefore, the low-frequency clock signal 208 is available even when the USB PHY 212 has transitioned to a low power state (such as U1-U3).

Turning to further details, FIG. 3 illustrates the USB PHY 212 of FIG. 2. The USB PHY 212 may include a clock input 300 that receives the low-frequency clock signal 208. The clock input 300 is sometimes referred to as a device for receiving a clock signal. The low-frequency clock signal 208 is shared from the clock input 300 to a control system 302 and a frequency locked loop (FLL) 304. The control system 302 is bidirectionally communicatively connected to the FLL 304 and a signal detection (sometimes referred to as sigdet) circuit 306. Specifically, the control system 302 passes an FLL enable signal (sometimes labeled fll_en) 308 to the FLL 304. The FLL enable signal 308 enables the FLL 304, thereby waking the FLL 304 from a sleep or low power mode. The control system 302 receives an FLL clock signal (sometimes labeled fll_clk) 310 from the FLL 304. The control system 302 also passes a receive circuit enable signal (sometimes labeled rx_sigdet_en) 312 to the signal detection circuit 306 and receives a received signal detection signal (sometimes labeled rx_sigdet) 314. The signal detection circuit 306 is sometimes referred to as a device for detecting an LFPS. The signal detection circuit 306 is coupled to a line input 316, which is configured to receive a SuperSpeed signal (such as a differential SuperSpeed signal) from a differential line 318. The line input 316 is sometimes referred to as a device for receiving an LFPS. It should be appreciated that the USB standard defines the way the USB PHY 212 will receive the LFPS. Specifically, the LFPS is provided on the differential line 318. The FLL 304 can generate a clock signal. As used herein, the clock signal from the FLL 304 is an intermediate frequency clock signal; it is "intermediate frequency" in the sense that, at approximately 10-20 megahertz (10-20 MHz), it is higher than the 32 kHz low-frequency clock signal 208 but much lower than the 5 gigahertz (5 GHz) rate of USB 3.0 SuperSpeed data.

FIG. 4 illustrates the control system 302 of FIG. 3. Specifically, the control system 302 includes a low-frequency clock timing portion 400, an FLL timing portion 402, and sampling logic 404. The low-frequency clock timing portion 400 receives the low-frequency clock signal 208 and selectively outputs the FLL enable signal 308 for turning on the FLL 304. The FLL timing portion 402 receives the FLL clock signal 310 from the FLL 304 and outputs the receive circuit enable signal 312, which turns on the signal detection circuit 306. In addition, the FLL timing portion 402 receives a timer control signal 406 from a control register (not illustrated). The sampling logic 404 also receives the FLL clock signal 310, which, together with the received signal detection signal 314 from the signal detection circuit 306, is used by a finite state machine (FSM) 408 to determine whether to output an LFPS detection signal (sometimes labeled LFPS_DET) 410 to the USB PHY controller 210. The USB PHY controller 210 then controls the wake-up from U3.
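The handshake among the signals of FIGS. 3 and 4 can be modeled in software as a rough sketch. The struct layout and the polling-style wait below are illustrative assumptions; in the actual PHY these are hardware signals, not memory fields.

```c
#include <stdbool.h>
#include <stdint.h>

/* Software model of the FIG. 3 / FIG. 4 signals; the struct is an
 * assumed representation, not a register map from the disclosure. */
typedef struct {
    bool     fll_en;        /* 308: control system 302 -> FLL 304    */
    bool     fll_clk;       /* 310: FLL 304 -> control system 302    */
    bool     rx_sigdet_en;  /* 312: control system 302 -> sigdet 306 */
    bool     rx_sigdet;     /* 314: sigdet 306 -> control system 302 */
    uint32_t timer_ctl;     /* 406: from the control register        */
} usb_phy_signals_t;

/* Wake sequence sketched from the text: assert fll_en, wait for the
 * FLL clock to appear, then enable the receive signal detection
 * circuit. In hardware this wait is clock-qualified, not a busy loop. */
void wake_detection_path(volatile usb_phy_signals_t *s)
{
    s->fll_en = true;            /* wake the FLL                     */
    while (!s->fll_clk) {
        /* spin until fll_clk 310 appears (modeling only) */
    }
    s->rx_sigdet_en = true;      /* wake the signal detection path   */
}
```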
In the context of the aforementioned hardware, FIG. 5 illustrates a flowchart of a process 500 according to an exemplary aspect of the present disclosure. The process 500 starts (block 502) and asks whether the signal detection circuit 306 is disabled (block 504). If the answer to block 504 is no, the process 500 loops until a positive answer is generated at block 504. Once the answer to block 504 is yes, the process 500 waits until the signal detection circuit 306 is enabled (blocks 506 and 508). The process of enabling the signal detection circuit 306 is described in more detail below with reference to the process 600 and FIG. 6. The process 500 then waits for the signal detection start timer (illustrated but not labeled in FIG. 4) to complete (block 510). The process 500 queries whether the signal detection start timer is complete (block 512), and once the answer to block 512 is yes, the sampling logic 404 samples the received signal detection signal 314 from the signal detection circuit 306 (block 514). When the signal detection circuit 306 is active (block 516) and activity is sensed (i.e., there is an LFPS at the line input 316), the process 500 queries whether the sampling timer is complete (block 518). If the LFPS is present for the entire sampling timer, the LFPS is determined to be valid. That is, once the sampling timer is complete, the process 500 outputs an LFPS interrupt (IRQ) (block 520), and the process 500 ends (block 522). However, if the signal detection circuit 306 ever senses inactivity while the sampling timer is still counting (the no branch from block 516), the LFPS is terminated or invalidated, and the process 500 restarts at the beginning.

The process 600 illustrated in FIG. 6 provides additional details on entering and leaving the U3 power state and on how the LFPS is detected. In this regard, the process 600 begins with the USB PHY 212 entering the U3 power state according to the USB standard (block 602). Upon entering the U3 power state, the USB PHY 212 de-energizes many components and, specifically, turns off the signal detection circuit 306 (block 604) and turns off the FLL 304 (block 606). However, the USB PHY 212 still receives the low-frequency clock signal 208 from the crystal oscillator 206 (block 608). On an edge (rising or falling) of the low-frequency clock signal 208, the control system 302 wakes up the FLL 304 with the FLL enable signal 308 (block 610). The output of the FLL 304 is passed back to the control system 302, and the control system 302 wakes up the signal detection circuit 306 with the receive circuit enable signal 312 (block 612). In an exemplary aspect, the signal detection circuit 306 takes about 2 microseconds to stabilize. Accordingly, the process 600 allows the signal detection circuit 306 to stabilize (block 614). The signal detection circuit 306 then samples the SuperSpeed input during the sampling time (block 616). In an exemplary aspect, the sampling time is 1 microsecond. If no LFPS is detected within the entire duration of the sampling time at block 618, the process 600 turns off the FLL 304 and the signal detection circuit 306 by returning to block 604. However, if the LFPS is detected within the entire duration of the sampling time at block 618, the signal detection circuit 306 generates the LFPS detection signal 410 (block 620), and the USB PHY controller 210 causes the USB PHY 212 to enter the U0 active state (block 622).

It should be appreciated that the LFPS is defined by the USB standard as having a duration between 80 microseconds and 10 milliseconds. With a 32 kHz low-frequency clock signal 208, the signal detection circuit 306 wakes up and samples at least twice in any 80 microsecond period. This sampling frequency provides a redundancy check so that even an LFPS of the shortest duration is detected. Accordingly, this arrangement provides a high probability of detecting the LFPS with acceptable latency. In addition, by de-energizing the signal detection circuit 306 and the FLL 304 most of the time, significant power savings are achieved.
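The control flow of process 600 can be sketched as a polling loop. Everything below is an illustrative model under stated assumptions: the platform hooks (wait_for_ref_clock_edge(), fll_power(), and so on) are hypothetical names standing in for hardware behavior, and only the sequencing mirrors FIG. 6.

```c
#include <stdbool.h>

#define STABILIZE_US 2u   /* sigdet settling time, per the text */
#define SAMPLE_US    1u   /* sampling window, per the text      */

/* Hypothetical platform hooks standing in for hardware behavior. */
extern void wait_for_ref_clock_edge(void);  /* 32 kHz clock edge (block 610) */
extern void fll_power(bool on);             /* FLL 304 on/off                */
extern void sigdet_power(bool on);          /* sigdet 306 on/off             */
extern void delay_us(unsigned us);
extern bool sigdet_line_active(void);       /* rx_sigdet 314 sampled         */
extern void raise_lfps_irq(void);           /* LFPS_DET 410 toward U0        */

void u3_lfps_poll_loop(void)
{
    for (;;) {
        fll_power(false);                /* blocks 604/606: power down   */
        sigdet_power(false);
        wait_for_ref_clock_edge();       /* blocks 608/610               */
        fll_power(true);
        sigdet_power(true);              /* block 612                    */
        delay_us(STABILIZE_US);          /* block 614: let sigdet settle */

        bool lfps_present = true;        /* block 616: sample the line   */
        for (unsigned t = 0; t < SAMPLE_US && lfps_present; t++) {
            lfps_present = sigdet_line_active();
            delay_us(1);
        }
        if (lfps_present) {              /* blocks 618/620               */
            raise_lfps_irq();            /* controller drives U3 -> U0   */
            return;
        }
        /* No LFPS: loop back and power the detection path down again. */
    }
}
```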
The power saving systems and methods according to aspects disclosed herein may be provided in or integrated into any processor-based device that includes a USB PHY. Non-limiting examples include: a set-top box, an entertainment unit, a navigation device, a communication device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a smart phone, a tablet device, a tablet phone, a server, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, a portable digital video player, and an automobile.

In this regard, FIG. 7 illustrates an example of a processor-based system 700 that can employ the USB PHY 212 illustrated in FIG. 2. In this example, the processor-based system 700 includes one or more central processing units (CPUs) 702, each including one or more processors 704. The CPU(s) 702 may have a cache memory 706 coupled to the processor(s) 704 for rapid access to temporarily stored data. The CPU(s) 702 is coupled to the system bus 708, which intercouples the master and slave devices included in the processor-based system 700. As is well known, the CPU(s) 702 communicates with these other devices by exchanging address, control, and data information over the system bus 708. For example, the CPU(s) 702 may communicate a bus transaction request to the memory controller 710, as an example of a slave device. Although not illustrated in FIG. 7, multiple system buses 708 may be provided, with each system bus 708 constituting a different fabric.

Other master and slave devices can be connected to the system bus 708. As illustrated in FIG. 7, these devices may include, as examples, a memory system 712, one or more input devices 714, one or more output devices 716, one or more network interface devices 718, and one or more display controllers 720. The input device(s) 714 may include any type of input device, including, but not limited to, input keys, switches, voice processors, etc. The output device(s) 716 may include any type of output device, including, but not limited to, audio, video, other visual indicators, etc. The network interface device(s) 718 may be any device configured to allow exchange of data to and from a network 722. The network 722 may be any type of network, including, but not limited to, a wired or wireless network, a private or public network, a local area network (LAN), a wireless local area network (WLAN), a wide area network (WAN), a Bluetooth™ network, and the Internet. The network interface device(s) 718 may be configured to support any type of communication protocol desired.
The memory system 712 may include one or more memory units 724(0-N). The CPU(s) 702 may also be configured to access the display controller(s) 720 over the system bus 708 to control information sent to one or more displays 726. The display controller(s) 720 sends information to be displayed to the display(s) 726 via one or more video processors 728, which process the information to be displayed into a format suitable for the display(s) 726. The display(s) 726 may include any type of display, including, but not limited to, a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, a light emitting diode (LED) display, etc.

Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, circuits, and algorithms described in connection with the aspects disclosed herein may be implemented as electronic hardware, instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or combinations of both. As examples, the master and slave devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip. Memory disclosed herein may be any type and size of memory and may be configured to store any type of information desired. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends upon the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented or performed with a processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices (e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration).

The aspects disclosed herein may be embodied in hardware and in instructions stored in hardware, and may reside, for example, in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station.
In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

It is also noted that the operational steps described in any of the exemplary aspects herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary aspects may be combined. It is to be understood that the operational steps illustrated in the flowcharts may be subject to numerous different modifications, as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combinations thereof.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
A system comprises one or more slice-aggregated cryptographic slices, each configured to perform a plurality of operations on an incoming data transfer at a first processing rate by aggregating one or more individual cryptographic slices, each configured to perform the plurality of operations on a portion of the incoming data transfer at a second processing rate. Each of the individual cryptographic slices comprises, in a serial connection, an ingress block configured to take the portion of the incoming data transfer at the second processing rate, a cryptographic engine configured to perform the operations on the portion of the incoming data transfer, and an egress block configured to process a signature of the portion and output the portion of the incoming data transfer once the operations have completed. The first processing rate of each slice-aggregated cryptographic slice equals the aggregated second processing rates of the individual cryptographic slices in the slice-aggregated cryptographic slice.
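The split-process-merge structure described above can be sketched in a few lines of code. This is an illustrative model only: the slice count, the placeholder transform standing in for a real cryptographic engine, and the sequential loop (which the hardware would run concurrently across slices) are all assumptions made for this example.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_SLICES 4u   /* assumed number of aggregated individual slices */

/* Stand-in for one slice's ingress -> cryptographic engine -> egress
 * path. The XOR is a placeholder transform, not a real cipher. */
static void slice_process(const uint8_t *in, uint8_t *out, size_t len)
{
    for (size_t i = 0; i < len; i++)
        out[i] = in[i] ^ 0xA5u;
}

/* Slice-aggregated operation: split the incoming transfer across the
 * slices, let each slice process its portion (concurrently, in
 * hardware), then merge the egress outputs into one data output. The
 * aggregate rate is the sum of the per-slice processing rates. */
void aggregated_process(const uint8_t *in, uint8_t *out, size_t len)
{
    size_t chunk = len / NUM_SLICES;

    for (size_t s = 0; s < NUM_SLICES; s++) {
        size_t off = s * chunk;
        size_t n   = (s == NUM_SLICES - 1u) ? (len - off) : chunk;
        slice_process(in + off, out + off, n);   /* per-slice portion */
    }
    /* out[0..len) now holds the merged egress output. */
}
```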
CLAIMS

What is claimed is:

1. A system comprising:
one or more slice-aggregated cryptographic slices, each configured to perform a plurality of operations on an incoming data transfer at a first processing rate by aggregating one or more individual cryptographic slices, each of which is configured to perform the plurality of operations on a portion of the incoming data transfer at a second processing rate, wherein each of the one or more individual cryptographic slices comprises at least the following components in a serial connection:
an ingress block configured to take the portion of the incoming data transfer at the second processing rate, inform a cryptographic engine of the operations needed, and feed the portion of the incoming data transfer to the cryptographic engine;
said cryptographic engine configured to perform the operations on the portion of the incoming data transfer; and
an egress block configured to insert or remove a signature of the portion of the incoming data transfer and output the portion of the incoming data transfer once the operations have completed;
wherein the first processing rate of each of the one or more slice-aggregated cryptographic slices equals the aggregated second processing rates of the one or more individual cryptographic slices in the slice-aggregated cryptographic slice.

2. The system of Claim 1, wherein the plurality of operations is one or more of generating or checking the signature of the data transfer for integrity and encrypting or decrypting the data transfer for confidentiality.

3. The system of Claim 1, wherein each slice of the one or more slice-aggregated cryptographic slices is configured to:
split and feed portions of the incoming data transfer into the ingress blocks of the one or more individual cryptographic slices in the slice-aggregated cryptographic slice; and
merge outputs from the egress blocks of the one or more individual cryptographic slices into one data output for further transfer and/or processing once the individual cryptographic slices have completed processing their respective portions of the data transfer, wherein the portion of the incoming data transfer processed on one slice-aggregated cryptographic slice is physically isolated from the portions of the incoming data transfer processed on other slice-aggregated cryptographic slices.

4. The system of Claim 1, wherein at least one of the one or more slice-aggregated cryptographic slices comprises more than one individual cryptographic slice, wherein the first processing rate of the at least one of the one or more slice-aggregated cryptographic slices is higher than the second processing rate of one individual cryptographic slice.

5. The system of Claim 4, wherein at least one of the one or more slice-aggregated cryptographic slices comprises only one individual cryptographic slice, wherein the one or more slice-aggregated cryptographic slices are configured to perform the plurality of operations on different portions of the incoming data transfer at different processing rates at the same time.
The system of Claim 1, further comprising:
one or more cross-slice channels among the slice-aggregated cryptographic slices, wherein each cross-slice channel of the one or more cross-slice channels is configured to propagate information generated from one slice-aggregated cryptographic slice to another slice-aggregated cryptographic slice to achieve slice aggregation, wherein each cross-slice channel of the one or more cross-slice channels is further configured to broadcast commonly-used information to all individual cryptographic slices within a slice-aggregated cryptographic slice, wherein per-slice resources are cascaded, shared, and operated together among the individual cryptographic slices based on configuration of the slice-aggregated cryptographic slice.
7. The system of Claim 1, wherein one of the one or more slice-aggregated cryptographic slices is configured to aggregate only a part of the one or more individual cryptographic slices.
8. The system of Claim 1, wherein aggregating the one or more individual cryptographic slices into a slice-aggregated cryptographic slice includes sharing information among the components of the individual cryptographic slices.
9. The system of Claim 8, wherein relevant information of the ingress blocks is propagated from one individual cryptographic slice to the next individual cryptographic slice during aggregation of the individual cryptographic slices, wherein the ingress blocks of the individual cryptographic slices are finite-state-machine based.
10. The system of Claim 8, wherein operation results by the cryptographic engines are propagated from one individual cryptographic slice to the next individual cryptographic slice during aggregation of the individual cryptographic slices, wherein the cryptographic engines are mathematical-logic based.
11. The system of Claim 8, wherein relevant information of the egress blocks is propagated from one individual cryptographic slice to the next individual cryptographic slice, wherein the egress blocks of the individual cryptographic slices are finite-state-machine based.
12. The system of Claim 8, wherein current data framing information and end results are broadcast to all individual cryptographic slices for resource sharing.
13. The system of Claim 8, wherein each slice of the one or more slice-aggregated cryptographic slices is configured to handle and share the information based on configuration of the slice-aggregated cryptographic slice.
14.
A method comprising:
aggregating one or more individual cryptographic slices into a slice-aggregated cryptographic slice, wherein the slice-aggregated cryptographic slice is configured to perform a plurality of operations on an incoming data transfer at a first processing rate, wherein each individual cryptographic slice of the one or more individual cryptographic slices comprises at least the following components in a serial connection:
an ingress block configured to take a portion of the incoming data transfer at a second processing rate, inform a cryptographic engine of the operations needed, and feed the portion of the incoming data transfer to the cryptographic engine;
said cryptographic engine configured to perform the operations on the portion of the incoming data transfer; and
an egress block configured to insert or remove a signature of the portion of the incoming data transfer and output the portion of the incoming data transfer once the operations have completed;
splitting and feeding portions of the incoming data transfer into the ingress blocks of the one or more individual cryptographic slices in the slice-aggregated cryptographic slice, wherein each of the one or more individual cryptographic slices is configured to perform the plurality of operations on its portion of the incoming data transfer at the second processing rate, wherein the first processing rate of the slice-aggregated cryptographic slice equals the aggregated second processing rates of the one or more individual cryptographic slices in the slice-aggregated cryptographic slice; and
merging output from the egress blocks of the one or more individual cryptographic slices in the slice-aggregated cryptographic slice into one data output for further transfer and/or processing once the individual cryptographic slices have completed processing their respective portions of the data transfer.
15. The method of Claim 14, further comprising:
physically isolating the portion of the incoming data transfer processed on one slice-aggregated cryptographic slice from the portions of the incoming data transfer processed on other slice-aggregated cryptographic slices.
16. The method of Claim 15, further comprising:
propagating information generated from one individual cryptographic slice to another individual cryptographic slice to achieve slice aggregation; and
broadcasting commonly-used information to all individual cryptographic slices within the slice-aggregated cryptographic slice, wherein per-slice resources are cascaded, shared, and operated together among the individual cryptographic slices based on configuration of the slice-aggregated cryptographic slice.
SLICE-AGGREGATED CRYPTOGRAPHIC SYSTEM AND METHOD

TECHNICAL FIELD
The disclosure generally relates to cryptographic systems for protection of data transfers over communication networks, and to systems and circuits implementing the proposed slice-aggregated cryptographic system.

BACKGROUND
In the current era of big data, quintillions of bytes of data are created by edge devices and uploaded to storage and/or servers in the cloud every day. The edge devices providing entry points into enterprises' or service providers' core networks can include, as non-limiting examples, routers, switches, multiplexers, and a variety of network access devices. For data protection, cryptography is often used in the networking and storage of such data to make sure that the data is transmitted from the edge device and stored in the cloud securely. Different edge devices may need different processing rates.
As a non-limiting example, a 400G (i.e., 400 gbps, where gbps is billions of bits per second) cryptographic engine is configured to encrypt and decrypt a data transfer at a processing rate of 100 gbps, 200 gbps, or 400 gbps. An exemplary cryptographic system comprising four 100G cryptographic engines, two 200G cryptographic engines, and one 400G engine then picks one or more of the cryptographic engines based on the required processing rate of each data transfer. As non-limiting examples:
4 x 100G cryptographic engines for four 100 gbps data transfers, or
2 x 200G cryptographic engines for two 200 gbps data transfers, or
1 x 400G cryptographic engine for one 400 gbps data transfer, or
1 x 200G cryptographic engine and 2 x 100G cryptographic engines for one 200 gbps data transfer and two 100 gbps data transfers.
Such a cryptographic system, however, is extremely costly because of the redundant cryptographic engines included in the system.
Another conventional approach is to use time division multiplexing (TDM) on one single, e.g., 400G, cryptographic engine with ingress and egress buffers for each data transfer. Configurability is achieved by rearranging these additional ingress and egress buffers with some overhead mechanisms to switch data transfers among the buffers based on the required processing rates. However, this approach incurs cost and latency penalties because of the additional ingress and egress buffers, the lack of physical isolation, and the loss of power gating due to sharing one single cryptographic engine.

SUMMARY
Accordingly, a need has arisen for a high-speed and configurable cryptographic system. A new cryptographic system is proposed that includes a plurality of low processing rate (e.g., 100G) slices. Each slice may be configured to perform cryptographic operations on a data transfer at a certain processing rate. The cryptographic system allows various aggregations/configurations among the plurality of low-rate processing slices to form processing units at various higher processing rates for integrity and/or confidentiality of data encryption/decryption.
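To make the cost comparison concrete, the following sketch is offered as an illustration only (Python, with engine counts and a cost-proportional-to-rate assumption that are not part of the disclosure); it contrasts the capacity a conventional fixed-engine pool must provision against a four-slice pool that can be aggregated on demand to cover the same transfer mixes:

    # Illustrative sketch (not from the disclosure): a conventional pool needs
    # dedicated engines for every supported mix, while a pool of four 100G
    # slices can be aggregated on demand to cover the same mixes.
    CONVENTIONAL_ENGINES_GBPS = [100, 100, 100, 100, 200, 200, 400]  # 7 engines
    SLICE_POOL_GBPS = [100, 100, 100, 100]                           # 4 slices

    def provisioned_capacity(engines_gbps):
        # Assume silicon cost scales roughly with the aggregate rate provisioned.
        return sum(engines_gbps)

    print("conventional pool:", provisioned_capacity(CONVENTIONAL_ENGINES_GBPS), "gbps provisioned")
    print("slice pool:", provisioned_capacity(SLICE_POOL_GBPS), "gbps provisioned")
    # Both pools serve any mix of 100/200/400 gbps transfers up to 400 gbps
    # total, but the conventional pool provisions 1200 gbps of engines
    # versus 400 gbps of slices.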
Such a slice-aggregated cryptographic system achieves cost efficiency by adopting reusable and configurable components/slices, power efficiency by turning on only the slices that are needed for the current data transfer task, and a secure design for data integrity, since the slices used for a data transfer are physically isolated.
A system comprises one or more slice-aggregated cryptographic slices, where each slice may be configured to perform a plurality of operations on an incoming data transfer at a first processing rate by aggregating one or more individual cryptographic slices, where each individual slice may be configured to perform the plurality of operations on a portion of the incoming data transfer at a second processing rate. Each of the individual cryptographic slices comprises, in a serial connection, an ingress block configured to take the portion of the incoming data transfer at the second processing rate, a cryptographic engine configured to perform the operations on the portion of the incoming data transfer, and an egress block configured to insert or remove a signature of the portion of the incoming data transfer and output the portion of the incoming data transfer once the operations have completed. The first processing rate of each slice-aggregated cryptographic slice equals the aggregated second processing rates of the individual cryptographic slices in the slice-aggregated cryptographic slice.
It is appreciated that the plurality of operations is one or more of generating or checking the signature of the data transfer for integrity and encrypting or decrypting the data transfer for confidentiality. The portion of the incoming data transfer processed on one slice-aggregated cryptographic slice is physically isolated from the portions of the incoming data transfer processed on other slice-aggregated cryptographic slices. In some embodiments, the system further comprises one or more cross-slice channels among the slice-aggregated cryptographic slices, wherein each of the one or more cross-slice channels is configured to propagate information generated from one slice-aggregated cryptographic slice to another slice-aggregated cryptographic slice to achieve slice aggregation. In some embodiments, each of the one or more cross-slice channels is further configured to broadcast commonly-used information to all individual cryptographic slices within a slice-aggregated cryptographic slice so that per-slice resources are cascaded, shared, and operated together among the individual cryptographic slices based on configuration of the slice-aggregated cryptographic slice.
These and other aspects may be understood with reference to the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS
So that the manner in which the above recited features can be understood in detail, a more particular description, briefly summarized above, may be had by reference to example implementations, some of which are illustrated in the appended drawings.
It is to be noted, however, that the appended drawings illustrate only typical example implementations and are therefore not to be considered limiting of their scope.
Figure 1 depicts an example of an architecture of a slice-aggregated cryptographic system comprising a plurality of individual cryptographic slices, according to some examples.
Figure 2 depicts an example of an architecture of a slice-aggregated cryptographic system comprising two slice-aggregated cryptographic slices, each having two individual 100G cryptographic slices, according to some examples.
Figure 3 depicts an example of an architecture of a slice-aggregated cryptographic system comprising one slice-aggregated cryptographic slice having four individual 100G cryptographic slices, according to some examples.
Figure 4 depicts an example of an architecture of a hybrid slice-aggregated cryptographic system comprising one individual 100G cryptographic slice, one 2-slice-aggregated cryptographic slice comprising two individual 100G cryptographic slices, and one powered-off individual 100G cryptographic slice, according to some examples.
Figure 5 is a sequence diagram illustrating an example of operations for metadata sharing among cryptographic slices, according to some examples.
Figure 6 is a block diagram depicting a programmable integrated circuit (IC), according to some examples.
Figure 7 depicts a field programmable gate array (FPGA) implementation of the programmable IC, according to some examples.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements of one example may be beneficially incorporated in other examples.

DETAILED DESCRIPTION
Examples described herein relate to an efficient and configurable slice-aggregated cryptographic system. Various features are described hereinafter with reference to the figures. It should be noted that the figures may or may not be drawn to scale and that the elements of similar structures or functions are represented by like reference numerals throughout the figures. It should be noted that the figures are only intended to facilitate the description of the features. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. For example, various methods according to some examples can include more or fewer operations, and the sequence of operations in various methods according to examples may be different than described herein. In addition, an illustrated example need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular example is not necessarily limited to that example and can be practiced in any other examples even if not so illustrated or if not so explicitly described.
The slice-aggregated cryptographic system described hereinafter uses a non-limiting example of four 100G cryptographic engines (a.k.a. slices) to illustrate the proposed approach of aggregating slices based on the required processing rates. As non-limiting examples, the system is configured to aggregate two 100G cryptographic slices into one aggregated 200G cryptographic slice, or four 100G cryptographic slices into one aggregated 400G cryptographic slice. The system has no redundant slices, for cost efficiency, and provides per-slice controllability for power efficiency.
The system also achieves physical isolation on each aggregated slice for data security. Although a four-slice cryptographic system is used as a non-limiting example to illustrate the proposed approach, the same approach is also applicable to cryptographic systems having various numbers of cryptographic engines at varying data processing rates.
Referring now to Figures 1-4, block diagrams depicting examples of configurable slice-aggregated cryptographic systems for performing cryptographic operations on data transfers at various processing rates are shown.
Figure 1 depicts an example of an architecture of a slice-aggregated cryptographic system 100. The slice-aggregated cryptographic system 100 comprises a plurality of individual cryptographic slices 102_1, ..., 102_4, e.g., Slice #1 to Slice #4 as shown in Figure 1. It is appreciated that each individual cryptographic slice 102 may be configured to process an incoming data transfer at 100G or 100 gbps. As shown in Figure 1, each cryptographic slice 102 includes at least the following components in a chain: an ingress block 104, which is in serial connection with a cryptographic engine 106, which is in serial connection with an egress block 108. The ingress block 104 is configured to take an incoming data transfer at a certain processing rate, e.g., 100G, inform the cryptographic engine 106 of the processing (e.g., cryptographic operations) needed for the data transfer, and feed the data into the cryptographic engine 106 accordingly for such processing. Upon receiving the data from the ingress block 104, the cryptographic engine 106 is configured to perform one or more cryptographic operations on the data. The cryptographic operations include but are not limited to generating or checking a signature of the data transfer for integrity and/or encrypting or decrypting the data for confidentiality. Once the cryptographic operations are completed by the cryptographic engine 106, the egress block 108 is configured to insert or remove a signature of the data transfer and transmit the encrypted or decrypted data for further processing or transmission. In some embodiments, a slice-aggregated cryptographic system is configured to aggregate multiple individual cryptographic slices 102 into one slice-aggregated cryptographic slice to meet the required processing rate of an incoming data transfer. Figure 2 depicts an example of an architecture of a slice-aggregated cryptographic system 200. The slice-aggregated cryptographic system 200 comprises two slice-aggregated cryptographic slices 202_1 and 202_2, comprising two individual 100G cryptographic slices 102_1/102_2 and 102_3/102_4, respectively. As a result, each 2-slice-aggregated cryptographic slice 202 is configured to process an incoming data transfer at 200G or 200 gbps, i.e., the aggregated processing rate of the two individual cryptographic slices 102 in the slice-aggregated cryptographic slice 202. When a data transfer at 200G is received by the slice-aggregated cryptographic slice 202, the incoming data transfer is split between and fed into the ingress blocks 104 of the two individual cryptographic slices 102 of the slice-aggregated cryptographic slice 202 and processed by components in each of the individual cryptographic slices 102 in parallel, as discussed above.
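Before turning to the merge step, the ingress/engine/egress chain just described can be modeled with a short behavioral sketch (Python as a modeling language; the toy XOR cipher and one-byte signature are stand-ins invented for illustration, not the disclosed engine):

    class CryptoSlice:
        """Behavioral model of one individual slice: ingress -> engine -> egress."""
        TOY_KEY = 0x5A  # placeholder for a real cipher key

        def ingress(self, chunk, operation):
            # Ingress takes the portion of the transfer and tells the engine
            # which cryptographic operation is needed.
            return {"data": chunk, "op": operation}

        def engine(self, job):
            # Toy XOR "cipher" standing in for encryption/decryption.
            out = bytes(b ^ self.TOY_KEY for b in job["data"])
            sig = bytes([sum(out) & 0xFF])  # toy one-byte integrity signature
            return out, sig

        def egress(self, data, sig):
            # Egress inserts the signature and outputs the processed portion.
            return data + sig

        def process(self, chunk, operation="encrypt"):
            job = self.ingress(chunk, operation)
            data, sig = self.engine(job)
            return self.egress(data, sig)

    print(CryptoSlice().process(b"hello"))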
Once the two individual cryptographic slices 102 have both completed processing their respective portions of the data transfer, the slice-aggregated cryptographic slice 202 is configured to merge the output from the egress blocks 108 of its two individual cryptographic slices 102 into one data output for further transfer and/or processing.
Figure 3 depicts an example of an architecture of a slice-aggregated cryptographic system 400. The slice-aggregated cryptographic system 400 comprises one slice-aggregated cryptographic slice 302 comprising four individual 100G cryptographic slices 102_1, ..., 102_4. As a result, the 4-slice-aggregated cryptographic slice 302 is configured to process an incoming data transfer at 400G or 400 gbps, i.e., the aggregated processing rate of the four individual cryptographic slices 102 in the slice-aggregated cryptographic slice 302. When a data transfer at 400G is received by the slice-aggregated cryptographic slice 302, the incoming data is split among and fed into the ingress blocks 104 of the four individual cryptographic slices 102 of the slice-aggregated cryptographic slice 302 and processed by components in each of the individual cryptographic slices 102 in parallel, as discussed above. Once all four individual cryptographic slices 102 have completed processing their respective data, the slice-aggregated cryptographic slice 302 is configured to merge the output from the egress blocks 108 of its four individual cryptographic slices 102 into one data output for further transfer and/or processing.
In some embodiments, a slice-aggregated cryptographic system is configured to include a mix of one or more individual cryptographic slices 102 as well as one or more slice-aggregated cryptographic slices in order to accommodate multiple data transfers at different processing rates. Figure 4 depicts an example of an architecture of a hybrid slice-aggregated cryptographic system 600. The hybrid slice-aggregated cryptographic system 600 includes two individual 100G cryptographic slices 102_1 and 102_4, and one 2-slice-aggregated cryptographic slice 402 that includes two individual 100G cryptographic slices 102_2 and 102_3. Under such a configuration, the hybrid slice-aggregated cryptographic system 600 is configured to process multiple incoming data transfers at different required processing rates at the same time, e.g., a first data transfer at 100G via the individual 100G cryptographic slice 102_1 and a second data transfer at 200G via the 2-slice-aggregated cryptographic slice 402 including the two individual 100G cryptographic slices 102_2 and 102_3. Note that the individual cryptographic slice 102_4 in the example of Figure 4 is not used and can be powered off (inactive) for power efficiency, as discussed in step 512 of Figure 5 below.
In some embodiments, each of the slice-aggregated cryptographic systems discussed above further includes one or more cross-slice channels 110 among the individual and/or slice-aggregated cryptographic slices. It is appreciated that each cross-slice channel 110 is configured to propagate generated metadata from one cryptographic slice to another cryptographic slice to achieve slice aggregation.
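The split-and-merge behavior described for Figures 2 and 3 might be sketched as follows (a simplified model; the contiguous striping and the toy per-slice transform are assumptions for illustration, not the disclosed data path):

    def toy_slice_process(chunk):
        # Stand-in for one slice's ingress/engine/egress chain.
        return bytes(b ^ 0x5A for b in chunk)

    def aggregated_process(data, num_slices):
        # Split the incoming transfer into per-slice portions.
        stride = -(-len(data) // num_slices)  # ceiling division
        portions = [data[i * stride:(i + 1) * stride] for i in range(num_slices)]
        # Each member slice processes its portion (in parallel in hardware).
        results = [toy_slice_process(p) for p in portions]
        # Merge the egress outputs into one data output once all slices finish.
        return b"".join(results)

    # Aggregating four slices yields the same output as one slice processing
    # the whole transfer, only at four times the processing rate.
    assert aggregated_process(b"abcdefgh", 4) == toy_slice_process(b"abcdefgh")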
In some embodiments, each cross-slice channel 110 may further be configured to broadcast commonly-used metadata to all individual cryptographic slices within a slice-aggregated cryptographic slice so that per-slice resources can be cascaded, shared, and operated together among the individual cryptographic slices based on the configuration of the slice-aggregated cryptographic slice. Resource sharing within the slice-aggregated cryptographic slice, as described above, is cost efficient because resources of a low processing rate cryptographic slice can be reused by other cryptographic slices. In addition, since the slice-aggregated cryptographic systems discussed above do not rely on a time-division-multiplexing method, regardless of whether a cryptographic slice is a single cryptographic slice or aggregated with others into a slice-aggregated cryptographic slice, such slice-aggregated cryptographic systems provide better data security because the data portion processed on each cryptographic slice or slice-aggregated cryptographic slice is physically isolated from the data portions processed on other cryptographic slices.
In some embodiments, aggregating a plurality of the individual cryptographic slices into a slice-aggregated cryptographic slice involves sharing/propagating information/metadata among various components of the individual cryptographic slices 102. In some embodiments, where the ingress blocks 104 of the individual cryptographic slices 102 are finite-state-machine based, relevant information of the ingress blocks 104, including but not limited to the next state, internal flags, saved data, and statistic counters of the finite-state-machine-based ingress blocks 104, is propagated from one individual cryptographic slice 102 to the next individual cryptographic slice 102 during aggregation of the individual cryptographic slices 102. In some embodiments, where the cryptographic engines 106 are mathematical-logic based, operation results of the cryptographic engines 106, including but not limited to XOR results, multiplier results, and other cryptographic-algorithm-specific results of the mathematical-logic-based cryptographic engines 106, are propagated from one individual cryptographic slice to the next individual cryptographic slice during aggregation of the individual cryptographic slices. In some embodiments, where the egress blocks 108 of the individual cryptographic slices are also finite-state-machine based, relevant information of the egress blocks 108 is propagated from one individual cryptographic slice to the next individual cryptographic slice in similar ways as for the ingress blocks 104. In some embodiments, current data framing information and end results are broadcast to all individual cryptographic slices 102 for resource sharing. In some embodiments, only a part (e.g., one or more components) of the individual cryptographic slices 102 is aggregated by one of the slice-aggregated cryptographic slices, e.g., only the cryptographic engines 106 are aggregated across the individual cryptographic slices 102.
In some embodiments, each slice-aggregated cryptographic slice is configured to handle the information/metadata based on the configuration of the slice-aggregated cryptographic slice.
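The metadata sharing just described might be modeled as below (a hedged sketch; the field names and the trivial state-update rule are invented for illustration): framing information is broadcast to every member slice over the cross-slice channel, while per-slice state is carried from each slice to the next.

    def share_metadata(slices, framing):
        # Broadcast the commonly-used framing info to all member slices,
        # then propagate per-slice metadata slice-to-slice.
        carried = None
        for s in slices:
            s["frame_info"] = framing   # broadcast over the cross-slice channel
            s["carry_in"] = carried     # metadata from the previous slice
            # Each member slice derives metadata for the next slice; a real
            # FSM-based ingress/egress block would carry next state, flags,
            # saved data, and statistic counters here.
            carried = {"next_state": s.get("state", 0) + 1}
        return carried                  # end results, broadcast back if needed

    member_slices = [{"state": i} for i in range(4)]
    print(share_metadata(member_slices, framing={"sof": True, "length": 256}))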
Figure 5 is a sequence diagram illustrating an example of operations for metadata sharing among cryptographic slices. Although the figure depicts functional steps in a particular order for purposes of illustration, the processes are not limited to any particular order or arrangement of steps. One skilled in the relevant art will appreciate that the various steps portrayed in this figure could be omitted, rearranged, combined and/or adapted in various ways.
As shown in the example of Figure 5, it is first determined at step 502 whether an individual cryptographic slice has been designated as a master slice. If an individual cryptographic slice is a master slice, the master slice is configured to generate current data framing information of the cryptographic slice at step 504. It is then determined at step 506 whether the master slice is the only cryptographic slice or a part of a slice-aggregated cryptographic slice, e.g., cryptographic slice 102_1 in slice-aggregated cryptographic slice 202_1 of Figure 2, cryptographic slice 102_1 in slice-aggregated cryptographic slice 302 of Figure 3, and cryptographic slice 102_2 in slice-aggregated cryptographic slice 402 of Figure 4. If the master slice is the only cryptographic slice, it ignores any incoming information/metadata and does not necessarily generate any metadata by itself. Otherwise, if the slice is determined to be a part of a slice-aggregated cryptographic slice having more than one individual cryptographic slice, even if it is not a master slice (step 508), it takes metadata from the previous individual cryptographic slice in the slice-aggregated cryptographic slice and generates its own metadata accordingly at step 510. A non-master cryptographic slice that is not part of a slice-aggregated cryptographic slice is an invalid configuration, meaning that the individual cryptographic slice is not used and can be powered off for power efficiency at step 512.
Figure 6 is a block diagram depicting a programmable integrated circuit (IC) 900 according to an example. The programmable IC 900 can implement the integrated circuit (IC) chip of the systems of Figures 1-5, in whole or in part. The programmable IC 900 includes a processing system 902, programmable logic 904, configuration logic 906, and configuration memory 908. The programmable IC 900 can be coupled to external circuits, such as nonvolatile memory 910, RAM 912, and other circuits 914.
The processing system 902 can include microprocessor(s), memory, support circuits, IO circuits, and the like. The programmable logic 904 includes logic cells 916, support circuits 918, and programmable interconnect 920. The logic cells 916 include circuits that can be configured to implement general logic functions of a plurality of inputs. The support circuits 918 include dedicated circuits, such as transceivers, input/output blocks, digital signal processors, memories, and the like. The logic cells 916 and the support circuits 918 can be interconnected using the programmable interconnect 920. Information for programming the logic cells 916, for setting parameters of the support circuits 918, and for programming the programmable interconnect 920 is stored in the configuration memory 908 by the configuration logic 906.
The configuration logic 906 can obtain the configuration data from the nonvolatile memory 910 or any other source (e.g., the RAM 912 or the other circuits 914).
Figure 7 illustrates an FPGA implementation of the programmable IC 900 that includes a large number of different programmable tiles including configurable logic blocks ("CLBs") 930, random access memory blocks ("BRAMs") 932, signal processing blocks ("DSPs") 934, input/output blocks ("IOBs") 936, configuration and clocking logic ("CONFIG/CLOCKS") 938, digital transceivers 940, specialized input/output blocks ("I/O") 942 (e.g., configuration ports and clock ports), and other programmable logic 944 such as digital clock managers, system monitoring logic, and so forth. The FPGA can also include PCIe interfaces 946, analog-to-digital converters (ADC) 948, and the like.
In some FPGAs, each programmable tile can include at least one programmable interconnect element ("INT") 950 having connections to input and output terminals 952 of a programmable logic element within the same tile, as shown by the examples included in Figure 7. Each programmable interconnect element 950 can also include connections to interconnect segments 954 of adjacent programmable interconnect element(s) in the same tile or other tile(s). Each programmable interconnect element 950 can also include connections to interconnect segments 956 of general routing resources between logic blocks (not shown). The general routing resources can include routing channels between logic blocks (not shown) comprising tracks of interconnect segments (e.g., interconnect segments 956) and switch blocks (not shown) for connecting interconnect segments. The interconnect segments of the general routing resources (e.g., interconnect segments 956) can span one or more logic blocks. The programmable interconnect elements 950, taken together with the general routing resources, implement a programmable interconnect structure ("programmable interconnect") for the illustrated FPGA.
In an example implementation, a CLB 930 can include a configurable logic element ("CLE") 960 that can be programmed to implement user logic plus a single programmable interconnect element ("INT") 950. A BRAM 932 can include a BRAM logic element ("BRL") 962 in addition to one or more programmable interconnect elements. Typically, the number of interconnect elements included in a tile depends on the height of the tile. In the pictured example, a BRAM tile has the same height as five CLBs, but other numbers (e.g., four) can also be used. A signal processing block 934 can include a DSP logic element ("DSPL") 964 in addition to an appropriate number of programmable interconnect elements. An IOB 936 can include, for example, two instances of an input/output logic element ("IOL") 966 in addition to one instance of the programmable interconnect element 950. As will be clear to those of skill in the art, the actual I/O pads connected, for example, to the input/output logic element 966 typically are not confined to the area of the input/output logic element 966.
In the pictured example, a horizontal area near the center of the die is used for configuration, clock, and other control logic. Vertical columns 968 extending from this horizontal area or column are used to distribute the clocks and configuration signals across the breadth of the FPGA.
Some FPGAs utilizing the architecture illustrated in Figure 7 include additional logic blocks that disrupt the regular columnar structure making up a large part of the FPGA.
The additional logic blocks can be programmable blocks and/or dedicated logic.
Note that Figure 7 is intended to illustrate only an exemplary FPGA architecture. For example, the numbers of logic blocks in a row, the relative width of the rows, the number and order of rows, the types of logic blocks included in the rows, the relative sizes of the logic blocks, and the interconnect/logic implementations included at the top of Figure 7 are purely exemplary. For example, in an actual FPGA more than one adjacent row of CLBs is typically included wherever the CLBs appear, to facilitate the efficient implementation of user logic, but the number of adjacent CLB rows varies with the overall size of the FPGA.
While the foregoing is directed to specific examples, other and further examples may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
The invention relates to techniques for performing write training on a dynamic random access memory. Various embodiments include a memory device capable of performing write training operations to determine that certain timing conditions are satisfied without storing data patterns in memory. Existing methods for write training involve storing a long data pattern into a memory, followed by reading the long data pattern to determine whether data is correctly written to the memory. Instead, the memory device of the present disclosure generates a data pattern within the memory device that matches a data pattern sent to the memory device by an external memory controller. If the data pattern generated by the memory device matches the data pattern received from the memory controller, the memory device stores a pass state in a register. If the data pattern does not match, the memory device stores a fail state in the register. The memory controller reads the register to determine whether the write training passed or failed.
1. A computer-implemented method for performing a write training operation on a memory device, the method comprising:
initializing a first register on the memory device with a first data pattern;
receiving a second data pattern on an input pin of the memory device;
comparing the first data pattern and the second data pattern to generate a result value; and
storing the result value in a second register, wherein the result value indicates whether the write training operation was successful.
2. The computer-implemented method of claim 1, further comprising:
determining that the first data pattern matches the second data pattern,
wherein the result value indicates a pass result.
3. The computer-implemented method of claim 1, further comprising:
determining that the first data pattern does not match the second data pattern,
wherein the result value indicates a failure result.
4. The computer-implemented method of claim 1, further comprising:
receiving a command to read the result value; and
sending the result value to an output pin of the memory device.
5. The computer-implemented method of claim 1, further comprising initializing the second register to an initial value after sending the result value.
6. The computer-implemented method of claim 1, wherein comparing the first data pattern and the second data pattern comprises performing an exclusive-OR (XOR) operation on the first data pattern and the second data pattern.
7. The computer-implemented method of claim 6, wherein the result value is based on an output of the exclusive-OR operation.
8. The computer-implemented method of claim 1, wherein initializing the first register comprises:
receiving a reset command from the memory controller; and
storing a predetermined value in the first register.
9. The computer-implemented method of claim 1, further comprising initializing the second register to an initial value after initializing the first register.
10. The computer-implemented method of claim 9, wherein the initial value includes a failure status.
11. The computer-implemented method of claim 1, further comprising:
receiving an initial value from the memory controller; and
storing the initial value in the first register.
12. The computer-implemented method of claim 1, wherein the first register comprises a linear feedback shift register.
13. The computer-implemented method of claim 1, wherein at least one of the first data pattern or the second data pattern comprises a pseudorandom bit sequence.
14. A system comprising:
a memory controller; and
a memory device coupled to the memory controller that:
initializes a first register on the memory device with a first data pattern;
receives a second data pattern from the memory controller on an input pin of the memory device;
compares the first data pattern and the second data pattern to generate a result value; and
stores the result value in a second register.
15. The system of claim 14, wherein the memory device further:
determines that the first data pattern matches the second data pattern,
wherein the result value indicates a pass result.
16. The system of claim 14, wherein the memory device further:
determines that the first data pattern does not match the second data pattern,
wherein the result value indicates a failure result.
17. The system of claim 14, wherein the memory device further:
receives a command to read the result value; and
sends the result value to an output pin of the memory device.
18.
The system of claim 14, wherein after sending the result value, the memory device further initializes the second register to an initial value.
19. The system of claim 14, wherein when the memory device compares the first data pattern and the second data pattern, the memory device performs an exclusive-OR (XOR) operation on the first data pattern and the second data pattern.
20. The system of claim 19, wherein the result value is based on an output of the exclusive-OR operation.
TECHNIQUES FOR PERFORMING WRITE TRAINING ON DYNAMIC RANDOM ACCESS MEMORY

CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to US Provisional Patent Application No. 63/144971, filed February 2, 2021, entitled "TECHNIQUES FOR TRANSFERRING COMMANDS TO A DRAM". This application further claims priority to US Provisional Patent Application No. 63/152814, filed February 23, 2021, entitled "DATA SCRAMBLING ON A MEMORY INTERFACE". This application further claims priority to US Provisional Patent Application No. 63/152817, filed February 23, 2021, entitled "DRAM COMMAND INTERFACE TRAINING". This application further claims priority to US Provisional Patent Application No. 63/179954, filed April 26, 2021, entitled "DRAM WRITE TRAINING". The subject matter of these related applications is incorporated herein by reference.

BACKGROUND
Field of the Various Embodiments
Various embodiments relate generally to computer memory devices, and more particularly to techniques for performing write training on dynamic random access memory.

Description of the Related Art
A computer system typically includes, among other things, one or more processing units, such as a central processing unit (CPU) and/or a graphics processing unit (GPU), and one or more memory systems. One type of memory system is called system memory, which is accessible by the CPU and the GPU. Another memory system is graphics memory, which is typically only accessible by the GPU. These memory systems include multiple memory devices. One example memory device used in system memory and/or graphics memory is synchronous dynamic random access memory (SDRAM, or more simply, DRAM).
Typically, high-speed DRAM memory devices employ multiple interfaces. These interfaces include a command address interface for transferring commands to the DRAM. Such commands include commands to initiate write operations, commands to initiate read operations, and the like. These interfaces also include data interfaces for transferring data to and from the DRAM. A command write operation transfers a command to the DRAM synchronously. During a command write operation, the DRAM samples incoming commands on certain command input pins with respect to the rising or falling edge of a clock signal. Similarly, data write operations transfer data to the DRAM synchronously. During a data write transfer, the DRAM samples incoming data on certain data input pins with respect to the rising or falling edge of a clock signal. Further, data read operations transfer data from the DRAM synchronously. During data read transfers, the DRAM presents outgoing data on certain data output pins relative to the rising or falling edge of a clock signal. The same or different clock signals may be used for commands transferred to the DRAM, data transferred to the DRAM, and data transferred from the DRAM. Further, the data input pins can be the same as or different from the data output pins.
In order to reliably transfer commands and data to and from the DRAM, certain timing requirements must be met. One timing requirement is the setup time, which defines the minimum amount of time a command or data signal must be stable before the clock edge on which the command or data signal, respectively, is transmitted. Another timing requirement is the hold time, which defines the minimum amount of time that a command or data signal must be stable after the clock edge that transmits the command or data signal, respectively.
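As a concrete illustration of these two requirements, the sketch below checks whether a signal's stable window around a clock edge satisfies both; the picosecond values are assumptions for illustration, not datasheet numbers:

    def meets_timing(stable_before_ps, stable_after_ps,
                     t_setup_ps=80, t_hold_ps=60):
        # The signal must be stable at least t_setup before the clock edge
        # and remain stable at least t_hold after it.
        return stable_before_ps >= t_setup_ps and stable_after_ps >= t_hold_ps

    print(meets_timing(100, 70))  # True: both margins satisfied
    print(meets_timing(50, 70))   # False: setup time violated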
If the setup time and/or hold time are not met, commands and/or data may be transmitted with one or more errors, resulting in corrupted command or data messages.
As the speed of DRAM memory devices increases, the time between successive clock edges decreases, resulting in a shorter period of time in which to meet setup and hold times. Further, the timing of clock signals, command signals, and data signals is affected by variations due to process changes at the time of manufacture, as well as local changes due to operating temperature, power supply voltage, interference from other signals, and the like. As a result, setup and hold times become more difficult to meet as DRAM device speeds increase. To alleviate this problem, DRAM memory devices typically have skew circuits to alter the timing of command and/or data signals relative to a clock signal. Periodically, a memory controller associated with the DRAM directs the DRAM into a training process for command write operations, data write operations, and/or data read operations. During such a training process, the memory controller changes the skew of one or more command input pins, data input pins, and/or data output pins until the memory controller determines that the DRAMs each reliably perform command write operations, data write operations, and/or data read operations. The memory controller periodically repeats these training operations as operating conditions change over time, such as changes in operating temperature, supply voltage, etc., in order to ensure reliable DRAM operation.
In particular, with regard to write training, the memory controller writes a write training data pattern, or more briefly, a write data pattern, to a portion of the DRAM memory core. Typically, the data pattern is a pseudorandom sequence of bits suitable for detecting errors on a particular data input of a DRAM memory device. The memory controller then reads the data pattern from the same portion of the DRAM memory core. The training operation is successful if the data pattern read by the memory controller from that portion of the DRAM memory core matches the data pattern that the memory controller previously wrote to that portion of the DRAM memory core. However, if the two data patterns do not match, the memory controller adjusts the skew of the data input pins showing one or more errors. The memory controller iteratively repeats the write training operation and adjusts the skew of the data input pins until the data patterns match. The memory controller then returns the DRAM to normal operation.
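The iterative skew adjustment described above can be sketched as a sweep over skew settings, where try_pattern() stands in for one complete write-and-read-back test (an assumed helper, not part of the disclosure); centering the skew within the passing window maximizes timing margin:

    def train_pin_skew(try_pattern, skew_steps=range(-8, 9)):
        # Collect every skew setting at which the pattern reads back clean.
        passing = [s for s in skew_steps if try_pattern(s)]
        if not passing:
            raise RuntimeError("no skew setting met setup/hold requirements")
        # Pick the middle of the passing window for maximum margin.
        return passing[len(passing) // 2]

    # Toy data eye: pretend settings -2..+3 pass for this pin.
    print(train_pin_skew(lambda s: -2 <= s <= 3))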
One disadvantage of this technique for DRAM write training is that as the speed of DRAM devices increases, so does the length of the data patterns required to adequately and reliably perform training operations, both for write training operations and for read training operations. Long data patterns typically require more time to write to and read from DRAM, increasing the amount of time needed to write and read data patterns during write training. Likewise, long data patterns typically consume more storage capacity of the DRAM, reducing the amount of memory space available to store data other than write training patterns.
In some implementations, a separate memory, such as a first-in-first-out (FIFO) memory, that is not part of the core portion of the DRAM memory is used to store the write training data patterns. The FIFO memory stores the write training patterns instead of the DRAM memory core. The memory controller then reads back the write training pattern from the separate FIFO memory rather than from the DRAM memory core. However, as the size of the data pattern increases, so does the size of the FIFO memory, consuming more of the area of the DRAM die and increasing the cost of the DRAM. While it is possible to reduce the size of the FIFO memory, this results in only part of the write training data pattern being stored in the FIFO memory, reducing the effectiveness of the write training operation.
Furthermore, whether using part of the DRAM memory core or a separate memory such as a FIFO memory, the memory controller writes long write training data patterns to the DRAM and reads the same long write training data patterns from the DRAM multiple times during each write training operation, thereby reducing the available bandwidth of the DRAM for load and store operations other than write training.
As mentioned above, what is needed in the art is a more efficient technique for performing signal training of memory devices.

SUMMARY OF THE INVENTION
Various embodiments of the present disclosure set forth a computer-implemented method for performing a write training operation on a memory device. The method includes initializing a first register on the memory device with a first data pattern. The method also includes receiving a second data pattern on an input pin of the memory device. The method also includes comparing the first data pattern and the second data pattern to generate a result value. The method also includes storing the result value in a second register, where the result value indicates whether the write training operation was successful.
Other embodiments include, but are not limited to, systems implementing one or more aspects of the disclosed technology, one or more computer-readable media including instructions for performing one or more aspects of the disclosed technology, and methods of performing one or more aspects of the disclosed technology.
At least one technical advantage of the techniques of the present disclosure over the prior art is that, with the techniques of the present disclosure, the lengthy write training data patterns sent to the memory device during write training operations need not be stored in the memory device or read back from the memory device to determine whether the write training operation was successful. Instead, the memory controller only needs to send the write training data pattern and read the pass/fail result to determine whether the write training operation was successful. Thus, the write training operation is completed in about half the time relative to the prior art, which requires reading the write training data pattern back from the memory device.
Another advantage of the disclosed technique is that all pins of the data interface are trained simultaneously, thus requiring less training time than traditional methods. In contrast, according to the traditional method of writing data patterns to the DRAM memory core and then reading the data patterns back, only the data input/output pins themselves are trained. After the training of the data pins is complete, additional pins of the data interface that are not stored to the DRAM memory core are trained in a separate training operation. Training time is further reduced by training all pins of the data interface in parallel using a pseudo-random bit sequence (PRBS) pattern checker operating at the input/output pin level.
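A minimal device-side sketch of the summarized method follows (Python as a modeling language; the register names are invented, and the self-clear-on-read behavior follows embodiments described later in this disclosure):

    class WriteTrainingDevice:
        def __init__(self):
            self.pattern_reg = 0       # first register: expected data pattern
            self.result_reg = "FAIL"   # second register: pass/fail result value

        def initialize(self, seed):
            # Initialize the first register with the first data pattern and
            # reset the result register to the failure status.
            self.pattern_reg = seed
            self.result_reg = "FAIL"

        def receive(self, incoming_pattern):
            # XOR of identical patterns is zero; any set bit flags a mismatch.
            mismatch = incoming_pattern ^ self.pattern_reg
            self.result_reg = "PASS" if mismatch == 0 else "FAIL"

        def read_result(self):
            # Return the result and self-clear back to the failure status.
            value, self.result_reg = self.result_reg, "FAIL"
            return value

    device = WriteTrainingDevice()
    device.initialize(0b10110011)
    device.receive(0b10110011)   # second pattern arrives on the input pins
    print(device.read_result())  # PASS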
These advantages represent one or more technological advancements over the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS
For a detailed understanding of the manner in which the above-described features of the various embodiments are employed, a more detailed description of the inventive concepts briefly summarized above may be obtained by reference to various embodiments, some of which are illustrated in the accompanying drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of the inventive concepts and are therefore not to be considered limiting in scope in any way, for other equally effective embodiments exist.
Figure 1 is a block diagram of a computer system configured to implement one or more aspects of the various embodiments;
Figure 2 is a block diagram of a training architecture included in the system memory controller and/or the PPS memory controller of the computer system of Figure 1, according to various embodiments;
Figure 3 is a block diagram of a training architecture of a memory device included in the system memory and/or parallel processing memory of the computer system of Figure 1, according to various embodiments;
Figure 4 is a block diagram of a linear feedback shift register (LFSR) subsystem of a memory device included in the system memory and/or parallel processing memory of the computer system of Figure 1, according to various embodiments; and
Figure 5 is a flowchart of method steps for performing a write training operation by a memory device included in the system memory and/or parallel processing memory of the computer system of Figure 1, according to various embodiments.

DETAILED DESCRIPTION
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the various embodiments. However, it will be apparent to one skilled in the art that the inventive concepts may be practiced without one or more of these specific details.

System Overview
Figure 1 is a block diagram of a computer system 100 configured to implement one or more aspects of the various embodiments. As shown, computer system 100 includes, but is not limited to, a central processing unit (CPU) 102 and system memory 104 coupled to a parallel processing subsystem 112 through a memory bridge 105 and communication path 113. Memory bridge 105 is coupled to system memory 104 through a system memory controller 130. Memory bridge 105 is further coupled through communication path 106 to an I/O (input/output) bridge 107, which in turn is coupled to a switch 116. Parallel processing subsystem 112 is coupled to parallel processing memory 134 through a parallel processing subsystem (PPS) memory controller 132.
In operation, I/O bridge 107 is configured to receive user input from an input device 108, such as a keyboard or mouse, and to send the input to CPU 102 for processing through communication path 106 and memory bridge 105. Switch 116 is configured to provide connections between I/O bridge 107 and other components of computer system 100, such as a network adapter 118 and various add-in cards 120 and 121.
Continuing with reference to the figures, I/O bridge 107 is coupled to a system disk 114, which may be configured to store content, applications, and data for CPU 102 and parallel processing subsystem 112.
Generally speaking, system disk 114 provides non-volatile storage for applications and data, and may include fixed or removable hard drives, flash memory devices, CD-ROM (compact disc read-only memory), DVD-ROM (digital versatile disc ROM), Blu-ray, HD-DVD (high-definition DVD), or other magnetic, optical, or solid state storage devices. Finally, although not explicitly shown, other components, such as universal serial bus or other port connections, compact disc drives, digital versatile disc drives, film recording devices, and the like, may also be connected to I/O bridge 107.
In various embodiments, memory bridge 105 may be a northbridge chip and I/O bridge 107 may be a southbridge chip. Furthermore, communication paths 106 and 113, and other communication paths within computer system 100, may be implemented using any technically suitable protocol, including but not limited to AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol known in the art.
In some embodiments, parallel processing subsystem 112 includes a graphics subsystem that delivers pixels to a display device 110, which may be any conventional cathode ray tube, liquid crystal display, light emitting diode display, or the like. In such embodiments, parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry. Such circuitry may be included across one or more parallel processing units (PPUs) included within parallel processing subsystem 112. In some embodiments, each PPU includes a graphics processing unit (GPU) that may be configured to cause a graphics rendering pipeline to perform operations on pixel data generated based on graphics data provided by CPU 102 and/or system memory 104. Each PPU may be implemented using one or more integrated circuit devices, such as programmable processors, application specific integrated circuits (ASICs), or memory devices, or in any other technically feasible manner.
In some embodiments, parallel processing subsystem 112 incorporates circuitry optimized for general purpose and/or computational processing. Such circuitry may be included across one or more PPUs within parallel processing subsystem 112 that are configured to perform such general purpose and/or computational operations. In other embodiments, one or more PPUs within parallel processing subsystem 112 may be configured to perform graphics processing, general purpose processing, and computational processing operations. System memory 104 includes at least one device driver 103 configured to manage the processing operations of the one or more PPUs within parallel processing subsystem 112.
In various embodiments, parallel processing subsystem 112 may be integrated with one or more of the other elements of Figure 1 to form a single system. For example, parallel processing subsystem 112 may be integrated with CPU 102 and other connection circuitry on a single chip to form a system-on-chip (SoC).
In operation, CPU 102 is the main processor of computer system 100, controlling and coordinating the operation of other system components. In particular, CPU 102 issues commands that control the operation of the PPUs within parallel processing subsystem 112. In some embodiments, CPU 102 writes a command stream for the PPUs within parallel processing subsystem 112 to a data structure (not explicitly shown in Figure 1)
that may be located in system memory 104, PP memory 134, or another storage location accessible by both CPU 102 and the PPU. A pointer to the data structure is written to a pushbuffer to initiate processing of the command stream in the data structure. The PPU reads the command stream from the pushbuffer and then executes the commands asynchronously relative to the operation of CPU 102. In embodiments in which multiple pushbuffers are generated, an application program can specify an execution priority for each pushbuffer through device driver 103 in order to control the scheduling of the different pushbuffers.
Each PPU includes an I/O (input/output) unit that communicates with the rest of computer system 100 through communication path 113 and memory bridge 105. The I/O unit generates packets (or other signals) for transmission on communication path 113, and also receives all incoming packets (or other signals) from communication path 113, directing the incoming packets to the appropriate elements of the PPU. The connection of the PPU to the rest of computer system 100 may vary. In some embodiments, parallel processing subsystem 112, which includes at least one PPU, is implemented as an add-in card that can be inserted into an expansion slot of computer system 100. In other embodiments, the PPU may be integrated with a bus bridge, such as memory bridge 105 or I/O bridge 107, on a single chip. Also, in other embodiments, some or all of the elements of the PPU may be included with CPU 102 in a single integrated circuit or system-on-chip (SoC).
CPU 102 and the PPUs within parallel processing subsystem 112 access system memory through system memory controller 130. System memory controller 130 sends signals to the memory devices in system memory 104 to initialize the memory devices, send commands to the memory devices, write data to the memory devices, read data from the memory devices, and the like. One example memory device employed in system memory 104 is double data rate SDRAM (DDR SDRAM, or more simply DDR). DDR memory devices perform memory write and read operations at twice the data rate of previous-generation single data rate (SDR) memory devices.
Additionally, the PPUs and/or other components within parallel processing subsystem 112 access PP memory 134 through the parallel processing subsystem (PPS) memory controller 132. PPS memory controller 132 sends signals to the memory devices in PP memory 134 to initialize the memory devices, send commands to the memory devices, write data to the memory devices, read data from the memory devices, and the like. One example memory device used in PP memory 134 is synchronous graphics random access memory (SGRAM), a specialized form of SDRAM used for computer graphics applications. A specific type of SGRAM is graphics double data rate SGRAM (GDDR SDRAM, or more simply GDDR). Compared to DDR memory devices, GDDR memory devices are configured with wider data buses in order to transfer more bits of data per memory write and read operation. By employing double data rate technology and a wider data bus, GDDR memory devices are able to achieve the high data transfer rates typically required by PPUs.
It is to be understood that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, can be modified as desired.
For example, in some embodiments, system memory 104 could be connected to CPU 102 directly rather than through memory bridge 105, and other devices would communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 112 may be connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 may be integrated into a single chip instead of existing as one or more discrete devices. Finally, in some embodiments, one or more of the components shown in FIG. 1 may not be present. For example, switch 116 could be eliminated, and network adapter 118 and add-in cards 120, 121 would connect directly to I/O bridge 107.

It is to be understood that the core architecture described herein is illustrative and that variations and modifications are possible. Among other things, the computer system 100 of FIG. 1 may include any number of CPUs 102, parallel processing subsystems 112, or memory systems, such as system memory 104 and parallel processing memory 134, within the scope of embodiments of the present disclosure. Further, as used herein, references to shared memory may include any one or more technically feasible memories, including but not limited to local memory within parallel processing subsystem 112 shared by one or more PPUs, memory shared among multiple parallel processing subsystems 112, cache memory, parallel processing memory 134, and/or system memory 104. Note also that, as used herein, references to cache memory may include any one or more technically feasible memories, including but not limited to an L1 cache, an L1.5 cache, and L2 caches. In view of the foregoing, those of ordinary skill in the art will recognize that the architecture depicted in FIG. 1 in no way limits the scope of the various embodiments of the present disclosure.

Perform write training operations on DRAM

Various embodiments are directed to techniques for efficiently performing write training of DRAM memory devices. DRAM memory devices include one or more linear feedback shift registers (LFSRs) that generate write patterns in the form of pseudo-random bit sequences (PRBS). In some embodiments, each of several input pins of an interface (e.g., a data interface) undergoing write training operations is coupled to a separate LFSR to check the PRBS pattern received on the corresponding input pin. To begin write training, the memory controller associated with the memory device sends a reset command and/or reset signal to the LFSR on the memory device to seed the LFSR. In response, the memory device seeds the LFSR with a predetermined seed value and/or polynomial. Additionally or alternatively, the memory controller seeds the LFSR by sending the seed value and/or polynomial to the memory device through another interface that has already been trained (e.g., a separate command-address interface). In response, the memory device seeds the LFSR according to the seed value and/or polynomial received from the memory controller. In some embodiments, the memory controller includes the reset command, reset signal, seed value, and/or polynomial in a write training command that the memory controller transmits to the memory device through the command address interface.
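As a concrete illustration of the seeding behavior just described, consider the following minimal Python sketch of a Fibonacci-style LFSR. The 16-bit width, tap positions, and seed value are hypothetical stand-ins chosen for demonstration; the disclosure does not fix a particular polynomial or register width.

    # Minimal sketch of a Fibonacci LFSR generating a pseudo-random bit
    # sequence (PRBS). The 16-bit width, tap polynomial, and seed below
    # are illustrative assumptions, not values from any memory device.

    class LFSR:
        def __init__(self, seed, taps, width=16):
            assert seed != 0, "an all-zero seed would lock the LFSR at zero"
            self.state = seed & ((1 << width) - 1)
            self.taps = taps          # bit positions XORed to form feedback
            self.width = width

        def step(self):
            """Advance one bit: output the LSB, shift in the feedback bit."""
            out = self.state & 1
            fb = 0
            for t in self.taps:
                fb ^= (self.state >> t) & 1
            self.state = (self.state >> 1) | (fb << (self.width - 1))
            return out

    # Controller and device seed identical LFSRs, so both sides generate
    # the same PRBS without the pattern ever being stored in the DRAM core.
    controller = LFSR(seed=0xACE1, taps=(0, 2, 3, 5))
    device     = LFSR(seed=0xACE1, taps=(0, 2, 3, 5))
    assert all(controller.step() == device.step() for _ in range(64))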
In some embodiments, when the memory device loads the seed value into the LFSR, the write training result register self-clears to an initial value in order to prepare the write training result register to receive the pass/fail status (also referred to herein as the pass/fail result value) for the current write training operation.

During a write training operation, the memory controller sends a write training pattern to one or more interface pins on the memory device based on the same seed value and/or polynomial that the memory device used to seed the LFSR. As the memory device receives the bit pattern, the write training checker for the one or more interface pins checks the incoming write training pattern on the one or more interface pins against the output of the LFSR in the memory device. In some embodiments, the PRBS checker for the input pins is implemented using exclusive-or (XOR) logic.

If the incoming write data pattern matches the data pattern generated by the LFSR in the memory device, the write training operation passes, and the memory device records a pass status in the write training result register. However, if the incoming write data pattern does not match the data pattern generated by the LFSR in the memory device, then the write training operation fails, and the memory device records a fail status in the write training result register. In some embodiments, the write training result register includes a separate pass/fail status bit for each input pin that undergoes a write training operation.

During a write training operation, the memory controller periodically advances the LFSR on the memory controller by shifting the value in the LFSR on the memory controller. Correspondingly, the memory controller sends a new write training command to the memory device. In response, the memory device advances the LFSR on the memory device by shifting the value in the LFSR on the memory device. In this manner, the LFSR on the memory controller and the LFSR on the memory device maintain the same value during write training operations. Therefore, the LFSR on the memory controller and the LFSR on the memory device generate the same data patterns during write training operations.

When the memory device completes all or part of the write training operation, the memory controller reads the value in the write training result register to determine whether the write training operation passed or failed. In some embodiments, the write training result register self-clears to the initial value when the memory controller reads the value in the write training result register. In some embodiments, the write training result register is initially cleared to indicate a fail status. Thereafter, the write training result register is updated as needed after each write training command to indicate whether the write training operation corresponding to the write training command passed or failed. When the memory controller reads the status register, the status register again self-clears to indicate a fail status.
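The device-side checking and result-register behavior described above might be modeled, under simplifying assumptions, by the following sketch. It reuses the LFSR class from the earlier sketch, assumes a single shared LFSR feeds every pin's checker, and uses hypothetical names throughout.

    # Illustrative model of the per-pin write training check: each
    # received bit is XORed against the local LFSR output, and any
    # mismatch latches a fail bit for that pin. The clear-on-read and
    # clear-on-seed behavior follows the description above.

    class WriteTrainingChecker:
        def __init__(self, lfsr, num_pins):
            self.lfsr = lfsr
            self.fail = [True] * num_pins      # initially indicates failure

        def seed(self, seed_value):
            self.lfsr.state = seed_value
            self.fail = [False] * len(self.fail)   # self-clear on re-seed

        def check(self, received_bits):
            expected = self.lfsr.step()
            for pin, bit in enumerate(received_bits):
                if bit ^ expected:             # XOR output of 1 == mismatch
                    self.fail[pin] = True

        def read_results(self):
            results, self.fail = self.fail, [True] * len(self.fail)
            return results                     # clear-on-read back to "fail"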
FIG. 2 is a block diagram of a training architecture 200 included in the system memory controller 130 and/or the PPS memory controller 132 of the computer system 100 of FIG. 1, according to various embodiments. Training architecture 200 includes a memory controller processor 226 that sends signals to the components of training architecture 200 included in the memory controller and to the memory devices included in system memory 104 and/or PP memory 134, such as the training architecture 300 of FIG. 3. The memory controller processor 226 sends signals to activate the memory device, send commands to the memory device, write data to the memory device, read data from the memory device, and the like. The memory controller processor 226 generates commands directed at the memory device and sends the commands to the transmitter 208. The transmitter 208, in turn, sends the commands to the memory device through the command address (CA) output pins 206.

Additionally, memory controller processor 226 sends read/write command triggers to the read/write linear feedback shift register (R/W LFSR) 220, resulting in synchronization operations. The read/write command triggers may be in the form of commands, signals, and the like sent by the memory controller processor 226 and received by the R/W LFSR 220. A first synchronization operation generated by a read/write command trigger initializes the R/W LFSR 220 to a known state in order to generate an initial sequence value. A second synchronization operation generated by a read/write command trigger causes the R/W LFSR 220 to change from generating the current sequence value to generating the next sequence value. When the R/W LFSR 220 is initialized, the R/W LFSR 220 loads an LFSR seed value from the configuration register 234 in order to generate the initial sequence value. The memory controller processor 226 stores the LFSR seed value in the configuration register 234 before the R/W LFSR 220 is initialized. When the R/W LFSR 220 is advanced, the R/W LFSR 220 proceeds from generating the current sequence value to generating the next sequence value. Memory controller processor 226 initializes and advances R/W LFSR 220 synchronously with the initialization and advancement of the R/W LFSR 320 of FIG. 3 in the memory device, in order to maintain synchronization between R/W LFSR 220 and R/W LFSR 320. In this manner, the training architecture 300 can verify that the data received by the memory device matches the data sent by the training architecture 200 in the system memory controller.

R/W LFSR 220 sends the sequence values to encoder 230. The encoder 230 performs encoding operations on the sequence values. The sequence values sent by the training architecture 200 to the DQ, DQX, and/or EDC pins 216 are typically encoded to optimize signaling on the memory interface. The goal of encoding the data sent at the physical I/O layer between the memory controller and the memory device is to optimize the signaled data. The data is encoded to minimize transitions on the interface, minimize crosstalk, reduce the direct current (DC) power consumed by termination circuits on the interface, and the like. Data may be encoded via a maximum transition avoidance (MTA) operation, which reduces the number of low-to-high and/or high-to-low signal transitions in order to improve the signal-to-noise ratio (SNR) on the memory interface. Additionally or alternatively, data may be encoded via a data bus inversion (DBI) operation, which reduces the number of high signal values on the memory interface, thereby reducing the power consumed by the memory interface. Additionally or alternatively, the data may be encoded by any other technically feasible operation. A sketch of one such encoding appears below.

The encoder 230 generates encoded sequence values to be sent to the memory device and transmits the encoded sequence values to the transmitter 218.
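The following sketch illustrates one generic form of data bus inversion; the 8-bit granularity and majority threshold are common choices but are assumptions here, not the encoding of any particular device.

    # Illustrative data bus inversion (DBI) encoder: if a byte contains
    # more ones than zeros, transmit the inverted byte plus an asserted
    # DBI flag, reducing the number of driven-high signal values.

    def dbi_encode(byte):
        if bin(byte).count("1") > 4:      # majority of the 8 bits are high
            return byte ^ 0xFF, 1         # inverted data, DBI flag set
        return byte, 0

    def dbi_decode(byte, flag):
        return byte ^ 0xFF if flag else byte

    data = 0b11101101
    encoded, flag = dbi_encode(data)
    assert dbi_decode(encoded, flag) == data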
The transmitter 218, in turn, transmits the encoded sequence values to the memory device via one or more data (DQ), extended data (DQX), and/or error detection and correction (EDC) pins 216.

FIG. 3 is a block diagram of a training architecture 300 of the memory devices included in system memory 104 and/or parallel processing memory 134 of computer system 100 of FIG. 1, according to various embodiments. As further described, training architecture 300 includes components for command address interface training, data read interface training, and data write interface training. Via these components, the training architecture 300 performs command address training operations, data read training operations, and data write training operations without storing training data in the DRAM core 326 of the memory device. When operating the memory device at higher speeds, the memory controller performs these training operations periodically in order to meet setup and hold times on all input pins and output pins of the memory device.

In general, the memory controller performs the training operations in a particular order. First, the memory controller performs a training operation on the command address interface. Command address interface training may be performed via any technically feasible technique. By training the command address interface first, the memory device is able to receive commands and write mode registers as needed in order to perform data read interface training and data write interface training. In general, the command address interface requires no further training as long as setup and hold times are met on all command address (CA) input pins 306. The memory controller causes a seed value and/or polynomial to be loaded into the command address linear feedback shift register (CA LFSR) 310. The memory controller applies a data pattern to one or more CA input pins 306. The data pattern on the CA input pins 306 is sent through receiver 308 to CA LFSR 310 and XOR gate 312. The CA LFSR 310 replicates the same pattern as the LFSR on the memory controller. XOR gate 312 compares the data pattern on the CA input pins 306 with the data from CA LFSR 310. If the data pattern on the CA input pins 306 matches the data from the CA LFSR 310, the XOR gate 312 transmits a low value. If the data pattern on the CA input pins 306 does not match the data from the CA LFSR 310, the XOR gate 312 transmits a high value. The mode 304 of the input multiplexer 302 selects the bottom input to send the output of the XOR gate 312 to the transmitter 314, and then to one or more of the data (DQ), extended data (DQX), and/or error detection and correction (EDC) pins 316. The memory controller then reads one or more of the DQ, DQX, and/or EDC pins 316 to determine whether the command address input training was successful. Once command address input training is complete, command addresses received from the memory controller pass through the CA input pins 306 and the receiver 308, and then proceed to the DRAM core 326. In various embodiments, feedback resulting from interface training for various use cases may be sent by the memory device to the memory controller through any one or more of the DQ, DQX, and/or EDC pins 316 in any technically feasible combination.

After completing command address interface training, the memory controller may send commands to the memory device to facilitate data read interface training and data write interface training. The memory device receives these commands through the CA input pins 306. Receiver 308 sends the commands from the CA input pins 306 to the command decoder 332.
Command decoder 332 decodes the commands received from the training architecture 200 in the memory controller. Some commands are used to store values in and/or load values from configuration registers 334. For example, command decoder 332 may receive a command to store an LFSR seed value in configuration register 334; this seed value is loaded into the read/write linear feedback shift register (R/W LFSR) 320 each time the R/W LFSR 320 is initialized.

Other commands perform various operations in the memory device. For example, command decoder 332 may receive a read command, and in response, the memory device performs a read operation to load data from DRAM core 326 and send the data to the memory controller. Similarly, the command decoder 332 may receive a write command, and in response, the memory device performs a write operation to store the data received from the memory controller in the DRAM core 326. Further, if the command decoder 332 receives a read command or a write command during data read interface training or data write interface training, the command decoder 332 sends the trigger derived from the read/write command to the R/W LFSR 320. The read/write command trigger initializes the R/W LFSR 320 to generate the first sequence value and/or advances the R/W LFSR 320 from the current sequence value to the next sequence value.

Second, the memory controller performs training operations on the data read interface. Generally speaking, the training operation on the data read interface is performed before the training operation on the data write interface. This ordering ensures that data is read correctly from the memory device, which in turn allows the memory controller to perform optimal write training operations. The memory controller sends a command to the memory device that causes a seed value and/or polynomial to be loaded into the R/W LFSR 320. R/W LFSR 320 sends a series of sequence values to encoder 330 based on the seed value and/or polynomial.

The encoder 330 performs encoding operations on the sequence values. The sequence values sent by the R/W LFSR 320 to the DQ, DQX, and/or EDC pins 316 are typically encoded to optimize signaling on the memory interface. The goal of encoding the data sent at the physical I/O layer between the memory controller and the memory device is to optimize the signaled data. The data is encoded to minimize transitions on the interface, minimize crosstalk, reduce the direct current (DC) power consumed by termination circuits on the interface, and the like. Data may be encoded via a maximum transition avoidance (MTA) operation to reduce the number of low-to-high and/or high-to-low signal transitions in order to improve the signal-to-noise ratio (SNR) on the memory interface. Additionally or alternatively, the data may be encoded via a data bus inversion (DBI) operation to reduce the number of high signal values on the memory interface in order to reduce the power consumed on the memory interface. Additionally or alternatively, the data may be encoded by any other technically feasible operation.

The mode 304 of the input multiplexer 302 selects the top input to send the output of the encoder 330 to the transmitter 314, and then to one or more of the data (DQ), extended data (DQX), and/or error detection and correction (EDC) pins 316. The memory controller then reads one or more of the DQ, DQX, and/or EDC pins 316 to determine whether the data received from the R/W LFSR 320 matches the expected pattern.
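A conceptual model of the output path selection may help. In the following sketch, the mode names are descriptive stand-ins invented for illustration; FIG. 3 identifies the multiplexer inputs only by their positions.

    # Conceptual model of input multiplexer 302: mode 304 selects which
    # source drives transmitter 314 and the DQ/DQX/EDC pins 316.

    OUTPUT_SOURCES = {
        "ca_training":   "output of XOR gate 312 (bottom input)",
        "read_training": "output of encoder 330 (top input)",
        "write_results": "write training result register 324 (second from top)",
        "normal_read":   "DRAM core 326 (second from bottom)",
    }

    def select_output(mode, sources=OUTPUT_SOURCES):
        try:
            return sources[mode]
        except KeyError:
            raise ValueError(f"unknown multiplexer mode: {mode!r}")

    print(select_output("read_training"))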
Third, the memory controller performs training operations on the data write interface. The memory controller causes a seed value and/or polynomial to be loaded into the R/W LFSR 320. The memory controller applies a data pattern to one or more of the DQ, DQX, and/or EDC pins 316. The data pattern on the DQ, DQX, and/or EDC pins 316 is sent through receiver 318 to R/W LFSR 320 and XOR gate 322. The R/W LFSR 320 replicates the same pattern as the R/W LFSR 220 on the memory controller. The encoder 330 encodes the pattern presented by the R/W LFSR 320 in order to replicate the encoded data received by the receiver 318 from the memory controller. XOR gate 322 compares the data pattern on the DQ, DQX, and/or EDC pins 316 with the data from encoder 330. If the data pattern on the DQ, DQX, and/or EDC pins 316 matches the data from the encoder 330, the XOR gate 322 transmits a low value. If the data pattern on the DQ, DQX, and/or EDC pins 316 does not match the data from the encoder 330, the XOR gate 322 transmits a high value. The output of XOR gate 322 is sent to the write training result register 324, and a pass/fail write training status is stored for each DQ, DQX, and/or EDC pin 316 undergoing write training. The memory controller reads the write training result register 324 to determine the result of the write training operation. When the memory controller reads the write training result register 324, the mode 304 of the input multiplexer 302 selects the second input from the top to send the output of the write training result register 324 through the transmitter 314 to one or more of the DQ, DQX, and/or EDC pins 316. The memory controller then reads one or more of the DQ, DQX, and/or EDC pins 316 to determine whether the data write training was successful. Once data write training is complete, write data received from the memory controller passes through the DQ, DQX, and/or EDC pins 316 and the receiver 318, and then proceeds to the DRAM core 326.

In some embodiments, once a fail status is stored in the write training result register 324, the fail status remains in the write training result register 324 until a reset of the memory device occurs. Even if a subsequent data write interface training operation results in a pass status, the fail status in the write training result register 324 does not change to a pass status. Instead, the write training result register 324 retains the fail status of the earlier failed data write interface training operation. In these embodiments, the fail status indicates that at least one data write interface training operation performed since the last reset of the memory device resulted in a failure. The fail status is cleared when the memory device is reset. The reset of the memory device may be performed in response to a read of a register that triggers the reset, by loading the R/W LFSR 320 according to the seed value, by receiving a signal on a reset pin of the memory device, or the like (a sketch of this sticky-failure behavior appears after this passage).

Once data read training and data write training are complete, the mode 304 of the input multiplexer 302 selects the second input from the bottom to send the output of the DRAM core 326 to the transmitter 314, and then to one or more of the data (DQ), extended data (DQX), and/or error detection and correction (EDC) pins 316.
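The sticky-failure variant might be modeled as follows; the class and method names are hypothetical.

    # Sketch of the sticky-failure variant: once any write training
    # operation fails, the register reports failure until the memory
    # device itself is reset, regardless of later passing runs.

    class StickyResultRegister:
        def __init__(self):
            self.failed = False

        def record(self, passed):
            if not passed:
                self.failed = True    # a failure latches permanently

        def read(self):
            return "FAIL" if self.failed else "PASS"

        def device_reset(self):
            self.failed = False       # only a reset clears the failure

    reg = StickyResultRegister()
    reg.record(passed=False)
    reg.record(passed=True)           # a later pass does not clear the latch
    assert reg.read() == "FAIL"
    reg.device_reset()
    assert reg.read() == "PASS"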
It is to be understood that the system shown herein is illustrative and that variations and modifications are possible. Among other things, training architecture 300 includes components for command address interface training, data read interface training, and data write interface training. However, training architecture 300 may include components for training any other technically feasible input and/or output interface within the scope of the present disclosure. Further, in some examples, a single LFSR generates a source signal, such as a pseudo-random bit sequence (PRBS), for training any combination of one or more I/O pins of the memory device, including all I/O pins of the memory device. Additionally or alternatively, one LFSR may generate a PRBS for training any one or more of the I/O pins of the memory device. Additionally or alternatively, multiple LFSRs may generate PRBSs for one or more I/O pins of the memory device, as described above.

FIG. 4 is a block diagram of a linear feedback shift register (LFSR) subsystem 400 of the memory devices included in system memory 104 and/or parallel processing memory 134 of computer system 100 of FIG. 1, according to various embodiments. As shown, LFSR subsystem 400 includes multiple LFSRs 410 and multiple XOR gates 420, 422, 424, and 426.

LFSR subsystem 400 includes LFSRs 410(0), 410(2), 410(4), 410(6), and 410(8) that directly generate bit sequences, such as pseudo-random bit sequences (PRBS), for particular I/O pins of the memory device. Thus, DQ0 LFSR 410(0) generates DQ0 PRBS 430(0) for bit 0 of the memory device's data pin bus. Likewise, DQ2 LFSR 410(2) generates DQ2 PRBS 430(2) for bit 2 of the memory device's data pin bus. In a similar manner, DQ4 LFSR 410(4) generates DQ4 PRBS 430(4) for bit 4 of the memory device's data pin bus, and DQ6 LFSR 410(6) generates DQ6 PRBS 430(6) for bit 6 of the memory device's data pin bus. An error detection and correction (EDC) LFSR 410(8) generates EDC PRBS 430(8) for the EDC bit of the memory device's EDC pin bus.

LFSR subsystem 400 generates the PRBS for each of the remaining DQ bits based on any technically feasible combination of two or more of the outputs of the LFSRs 410(0)-410(8) in LFSR subsystem 400. In some examples, LFSR subsystem 400 generates DQ1 PRBS 430(1) based on a logical combination of two or more other LFSRs, such as the output of XOR gate 420, which performs an XOR function on the outputs of DQ0 LFSR 410(0) and DQ2 LFSR 410(2). Similarly, LFSR subsystem 400 generates DQ3 PRBS 430(3) based on a logical combination of two or more other LFSRs, such as the output of XOR gate 422, which performs an XOR function on the outputs of DQ2 LFSR 410(2) and DQ4 LFSR 410(4). LFSR subsystem 400 generates DQ5 PRBS 430(5) based on any technically feasible logical combination of the outputs of two or more other LFSRs, such as the output of XOR gate 424, which performs an XOR function on the outputs of DQ4 LFSR 410(4) and DQ6 LFSR 410(6). LFSR subsystem 400 generates DQ7 PRBS 430(7) based on a logical combination of two or more other LFSRs, such as the output of XOR gate 426, which performs an XOR function on the output of DQ6 LFSR 410(6) and the output of EDC LFSR 410(8). By sharing the LFSRs among multiple outputs, the LFSR subsystem 400 generates a unique PRBS for each output of a particular signal bus without requiring a separate LFSR for each signal bus output. In the above example, the LFSR subsystem 400 includes only five LFSRs yet generates a unique PRBS for each of the eight DQ signal bus outputs and the EDC output. A sketch of this sharing scheme appears below.
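Under the assumption of the Fibonacci LFSR class sketched earlier, the sharing scheme of FIG. 4 can be modeled as follows. The seeds and taps are arbitrary illustrative values.

    # Sketch of the LFSR-sharing scheme of FIG. 4: five LFSRs drive the
    # even DQ lanes and the EDC lane directly, and each odd DQ lane is
    # the XOR of its two neighbors, yielding a distinct PRBS per lane.

    lfsrs = {lane: LFSR(seed=0x1 << i, taps=(0, 2, 3, 5))
             for i, lane in enumerate(["DQ0", "DQ2", "DQ4", "DQ6", "EDC"])}

    def lane_bits():
        b = {lane: l.step() for lane, l in lfsrs.items()}
        return {
            "DQ0": b["DQ0"],
            "DQ1": b["DQ0"] ^ b["DQ2"],   # XOR gate 420
            "DQ2": b["DQ2"],
            "DQ3": b["DQ2"] ^ b["DQ4"],   # XOR gate 422
            "DQ4": b["DQ4"],
            "DQ5": b["DQ4"] ^ b["DQ6"],   # XOR gate 424
            "DQ6": b["DQ6"],
            "DQ7": b["DQ6"] ^ b["EDC"],   # XOR gate 426
            "EDC": b["EDC"],
        }

    print(lane_bits())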
It is to be understood that the system shown herein is illustrative and that variations and modifications are possible. In some examples, the techniques described herein may be used in conjunction with the data (DQ) pins, extended data (DQX) pins, error detection and correction (EDC) pins, command address (CA) pins, and/or any other input/output pins of a memory device.

Additionally or alternatively, the patterns generated by the memory device may be subjected to coding schemes that reduce and/or eliminate maximum transitions during training, such as coding schemes based on pulse amplitude modulation (PAM4) signaling parameters. As a result, the patterns generated by the memory device can eliminate the need to add full MTA encoder logic, which can be expensive.

In some examples, when LFSR subsystem 400 sends randomized LFSR data from parallel processing subsystem 112 to DRAM core 326, the training results may be negatively impacted if LFSR subsystem 400 does not perform some type of encoding to avoid maximum transitions. In that case, the training results may be suboptimal, because regular read/write operations avoid maximum transitions by using MTA encoding logic. Accordingly, the LFSR subsystem 400 can perform low-overhead techniques that emulate the benefits of MTA without implementing full MTA encoding and decoding logic. These techniques involve detecting maximum transitions in the random LFSR output and converting those maximum transitions into non-maximum transitions (0<->2, 0<->1, no transition, etc.). More generally, the encoding performed by the LFSR subsystem 400 can manipulate the random data in order to emulate the characteristics of MTA encoding/decoding without adding complete MTA encoder/decoder logic to the LFSR subsystem 400.

FIG. 5 is a flowchart of method steps for performing a write training operation on the memory devices in system memory 104 and/or parallel processing memory 134 of the computer system of FIG. 1, according to various embodiments. Although the method steps are described in conjunction with the systems of FIGS. 1-4, those of ordinary skill in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the present invention.

As shown, method 500 begins at step 502, where the memory device initializes a write training LFSR (e.g., R/W LFSR 320) on the memory device according to a seed value. The memory controller causes the seed value and/or polynomial to be loaded into the R/W LFSR 320. To begin write training, the memory controller associated with the memory device sends a reset command and/or reset signal to the R/W LFSR 320 on the memory device to seed the R/W LFSR 320. In response, the memory device seeds the R/W LFSR 320 with a predetermined seed value and/or polynomial. Additionally or alternatively, the memory controller seeds the R/W LFSR 320 by sending the seed value and/or polynomial to the memory device through another interface that has already been trained, such as a separate command address interface. In response, the memory device seeds the R/W LFSR 320 according to the seed value and/or polynomial received from the memory controller. In some embodiments, the memory controller includes the reset command, reset signal, seed value, and/or polynomial in the write training command sent by the memory controller to the memory device through the command address interface.
In some embodiments, the write training result register self-clears to the initial value when the memory device loads the seed value into the LFSR, preparing the write training result register to receive the pass/fail status of the current write training operation.

At step 504, the memory device receives a data pattern in the form of signals on its input pins. The memory controller applies the data pattern to one or more of the DQ, DQX, and/or EDC pins 316.

At step 506, the memory device compares the signals on the input pins with the value in the write training LFSR, such as R/W LFSR 320. The data pattern on the DQ, DQX, and/or EDC pins 316 is sent through receiver 318 to R/W LFSR 320 and XOR gate 322. The R/W LFSR 320 replicates the same pattern as the LFSR on the memory controller. XOR gate 322 compares the data pattern on the DQ, DQX, and/or EDC pins 316 with the data from R/W LFSR 320. If the data pattern on the DQ, DQX, and/or EDC pins 316 matches the data from the R/W LFSR 320, the XOR gate 322 transmits a low value. XOR gate 322 transmits a high value if the data pattern on the DQ, DQX, and/or EDC pins 316 does not match the data from the R/W LFSR 320.

At step 508, the memory device records the result in a result register, such as the write training result register 324. The output of the XOR gate 322 is sent to the write training result register 324, and a pass/fail write training status is stored for each DQ, DQX, and/or EDC pin 316 undergoing write training. The memory device optionally advances the R/W LFSR 320. During a write training operation, the memory controller periodically advances the LFSR on the memory controller by shifting the value in the LFSR on the memory controller. Correspondingly, the memory controller sends a new write training command to the memory device. In response, the memory device advances the R/W LFSR 320 on the memory device by shifting the value in the R/W LFSR 320 on the memory device. In this manner, the LFSR on the memory controller and the R/W LFSR 320 on the memory device maintain the same value during write training operations. Therefore, the LFSR on the memory controller and the R/W LFSR 320 on the memory device generate the same data patterns during write training operations.

At step 510, the memory device determines whether the write test is complete. The memory device may determine whether the test is complete based on having performed multiple iterations of the write training operation, based on commands received from the memory controller, and the like. If the memory device determines that the write test is not complete, the method 500 returns to step 504, described above.

However, if the memory device determines that the write test is complete, the method 500 proceeds to step 512, where the memory device sends the result to the memory controller. When the memory device completes all or part of the write training operation, the memory controller reads the write training result register 324 to determine the result of the write training operation, that is, whether the write training operation passed or failed. When the memory controller reads the write training result register 324, the mode 304 of the input multiplexer 302 selects the second input from the top to send the output of the write training result register 324 through the transmitter 314, and then to one or more of the DQ, DQX, and/or EDC pins 316. The memory controller then reads one or more of the DQ, DQX, and/or EDC pins 316 to determine whether the data write training was successful.
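Putting the earlier sketches together, a hypothetical controller-side view of method 500 might look like the following; the command transport between controller and device is abstracted away, and an error-free channel is assumed.

    # Hypothetical controller-side view of method 500: seed both LFSRs,
    # drive the pattern while advancing in lockstep, then read the
    # per-pin pass/fail results. Reuses LFSR and WriteTrainingChecker
    # from the earlier sketches.

    def run_write_training(num_pins=8, iterations=256, seed=0xACE1):
        taps = (0, 2, 3, 5)
        controller_lfsr = LFSR(seed, taps)
        device = WriteTrainingChecker(LFSR(seed, taps), num_pins)
        device.seed(seed)                    # e.g., via a reset command
        for _ in range(iterations):
            bit = controller_lfsr.step()     # pattern driven on the bus
            device.check([bit] * num_pins)   # ideal channel: no errors
            # A real controller would advance the device LFSR with a new
            # write training command; here check() advances it implicitly.
        return device.read_results()

    assert not any(run_write_training())     # all pins pass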
At step 514, the memory device clears the result register. In some embodiments, the write training result register self-clears to the initial value when the memory controller reads the value in the write training result register. In some embodiments, the write training result register is initially cleared to indicate a fail status. Thereafter, the write training result register is updated as needed after each write training command to indicate whether the write training operation corresponding to the write training command passed or failed. When the memory controller reads the status register, the status register again self-clears to indicate a fail status.

Method 500 then terminates. Alternatively, method 500 returns to step 502 to perform additional write training operations.

In summary, various embodiments are directed to techniques for efficiently performing write training of DRAM memory devices. DRAM memory devices include one or more linear feedback shift registers (LFSRs) that generate write patterns in the form of pseudo-random bit sequences (PRBS). In some embodiments, each of several input pins of an interface undergoing write training operations, such as a data interface, is coupled to a corresponding LFSR to check the PRBS pattern received on the corresponding input pin. To begin write training, the memory controller associated with the memory device sends a reset command and/or reset signal to the LFSR on the memory device to seed the LFSR. In response, the memory device seeds the LFSR with a predetermined seed value and/or polynomial. Additionally or alternatively, the memory controller seeds the LFSR by sending the seed value and/or polynomial to the memory device through another interface that has already been trained, such as a separate command address interface. In response, the memory device seeds the LFSR with the seed value and/or polynomial received from the memory controller. In some embodiments, the memory controller includes the reset command, reset signal, seed value, and/or polynomial in the write training command sent by the memory controller to the memory device through the command address interface. In some embodiments, the write training result register self-clears to an initial value when the memory device loads the seed value into the LFSR, preparing the write training result register to receive the pass/fail status for the current write training operation.

During a write training operation, the memory controller sends a write training pattern to one or more interface pins on the memory device based on the same seed value and/or polynomial that the memory device used to seed the LFSR. As the memory device receives the bit pattern, the write training checker for the one or more interface pins checks the incoming write training pattern on the one or more interface pins against the output of the LFSR in the memory device. In some embodiments, the PRBS checker for the input pins is implemented using XOR logic.

If the incoming write data pattern matches the data pattern generated by the memory device's LFSR, the write training operation passes, and the memory device records a pass status in the write training result register. However, if the incoming write data pattern does not match the data pattern generated by the memory device's LFSR, the write training operation fails, and the memory device records a fail status in the write training result register.
In some embodiments, the write training result register includes a separate pass/fail status bit for each input pin that undergoes a write training operation.

During a write training operation, the memory controller periodically advances the LFSR on the memory controller by shifting the value in the LFSR on the memory controller. Correspondingly, the memory controller sends a new write training command to the memory device. In response, the memory device advances the LFSR on the memory device by shifting the value in the LFSR on the memory device. In this manner, the LFSR on the memory controller and the LFSR on the memory device maintain the same value during the write training operation. Therefore, the LFSR on the memory controller and the LFSR on the memory device generate the same data patterns during write training operations.

When the memory device completes all or part of the write training operation, the memory controller reads the value in the write training result register to determine whether the write training operation passed or failed. In some embodiments, the write training result register self-clears to the initial value when the memory controller reads the value in the write training result register. In some embodiments, the write training result register is initially cleared to indicate a fail status. Thereafter, the write training result register is updated as needed after each write training command to indicate whether the write training operation corresponding to the write training command passed or failed. When the memory controller reads the status register, the status register again self-clears to indicate a fail status.

At least one technical advantage of the disclosed techniques relative to the prior art is that, with the disclosed techniques, the lengthy write training data patterns sent to the memory device during write training operations do not need to be stored in the memory device or read back from the memory device in order to determine whether the write training operation was successful. Instead, the memory controller only needs to send the write training data pattern and read the pass/fail results to determine whether the write training operation was successful. Thus, the write training operation completes in roughly half the time relative to prior approaches that require reading the write training data pattern back from the memory device.

Another advantage of the disclosed techniques is that all pins of the data interface are trained simultaneously, resulting in shorter training times relative to traditional approaches. By contrast, with the traditional approach of writing data patterns to the DRAM memory core and then reading the data patterns back, only the data input/output pins themselves are trained. After the training of the data pins is complete, the additional pins of the data interface that are not stored to the DRAM memory core are trained in a separate training operation. By using a PRBS pattern checker operating at the input/output pin level, all pins of the data interface are trained in parallel, further reducing training time.
These advantages represent one or more technological improvements over prior art approaches.

Any and all combinations of any of the claim elements recited in any of the claims and/or any elements described in this application, in any fashion, fall within the contemplated scope of the present disclosure and protection.

The descriptions of the various embodiments have been presented for purposes of illustration and are not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments.

Aspects of the present embodiments may be embodied as a system, method, or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.), or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module" or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer-readable media having computer-readable program code embodied thereon.

Any combination of one or more computer-readable media may be utilized. The computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of computer-readable storage media would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

Aspects of the present disclosure are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks.
Such processors may be, without limitation, general-purpose processors, special-purpose processors, application-specific processors, or field-programmable gate arrays.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.
The present disclosure is directed to hardware-enforced access protection. A login agent module (LAM) may be configured to cause a prompt requesting login information to be presented. The LAM may then provide the login information to an operating system login authentication module (OSLAM), which may be configured to authenticate the login information using known user information. If authenticated, the OSLAM may generate and transmit a signed login success message to a secure user authentication module (SUAM) using a private key. The SUAM may be secure/trusted software loaded by firmware, and may be configured to authenticate the signed login success message. If authenticated, the SUAM may transmit an encrypted authentication message to the OSLAM. If the encrypted authentication message is authenticated, the OSLAM may grant access.
1. An apparatus comprising:
a secure user authentication module (112);
memory (204);
one or more processors (202) coupled to the memory (204), the one or more processors (202) to load the secure user authentication module (112) into a trusted execution environment (104) at the apparatus;
a user interface (208);
a login agent module (106) configured to cause a prompt to be presented by the user interface (208) at the apparatus, the prompt requesting login information to be entered into the apparatus;
an operating system login authentication module (108) configured to receive the login information from the login agent module (106) and to transmit a signed login success message to the secure user authentication module (112) based on the login information,
wherein the secure user authentication module (112) is further configured to transmit an encrypted authentication message to the operating system login authentication module (108) based on receipt of the signed login success message and based on authenticating the signed login success message, and
wherein the operating system login authentication module (108) is further configured to grant access to the apparatus based on receipt and authentication of the encrypted authentication message; and
a secure login policy module (300) configured to at least determine a context for the apparatus and to set a login policy for at least one of the login agent module (106) or the operating system login authentication module (108) based on the context,
wherein the secure user authentication module (112) is further configured to cause the apparatus to shut down if a signed login success message is not received, and wherein the secure login policy module (300) is further configured to set an amount of time for the secure user authentication module (112) to wait for receipt of the signed login success message before causing the apparatus to shut down.

2. The apparatus of claim 1, wherein the one or more processors (202) are further to cause the secure user authentication module (112) to be loaded into the trusted execution environment in the apparatus by firmware in the apparatus.

3. The apparatus of claim 1, further comprising a firmware interface module (110) configured to convey the signed login success message from the operating system login authentication module to the secure user authentication module and the encrypted authentication message from the secure user authentication module to the operating system login authentication module.

4. The apparatus of claim 1, wherein the operating system login authentication module (108) is further configured to authenticate the login information against known user information.

5. The apparatus of claim 4, wherein the operating system login authentication module (108) is further configured to use the known user information to secure a private key, the private key being accessible when the login information is authenticated against the known user information; and optionally,
wherein the operating system login authentication module (108) is further configured to generate the signed login success message using the private key, or
wherein the private key is encrypted by the trusted execution environment at the apparatus.

6. The apparatus of claim 1, further comprising an authentication recovery module (302) to reconfigure user-related security at the apparatus by communicating with a remote resource.

7. A method comprising:
causing, via a login agent module (106), a prompt to be presented by a user interface at the apparatus, the prompt requesting login information to be entered into the apparatus;
loading a secure user authentication module (112) into a trusted execution environment (104) at the apparatus;
receiving, via an operating system login authentication module (108), the login information from the login agent module and transmitting, via the operating system login authentication module (108), a signed login success message to the secure user authentication module (112) based on the login information;
transmitting, via the secure user authentication module (112), an encrypted authentication message to the operating system login authentication module (108) based on receipt of the signed login success message and based on authenticating the signed login success message;
granting, via the operating system login authentication module (108), access to the apparatus based on receipt and authentication of the encrypted authentication message;
determining, via a secure login policy module (300), a context for the apparatus and setting a login policy for at least one of the login agent module (106) or the operating system login authentication module (108) based on the context;
causing, via the secure user authentication module (112), the apparatus to shut down if a signed login success message is not received; and
setting, via the secure login policy module (300), an amount of time for the secure user authentication module (112) to wait for receipt of the signed login success message before causing the apparatus to shut down.

8. The method of claim 7, further comprising loading the secure user authentication module (112) into a trusted execution environment in the apparatus by firmware in the apparatus.

9. The method of claim 7, further comprising conveying, via a firmware interface module (110), the signed login success message from the operating system login authentication module to the secure user authentication module and the encrypted authentication message from the secure user authentication module to the operating system login authentication module.

10. The method of claim 7, further comprising authenticating, via the operating system login authentication module (108), the login information against known user information.

11. The method of claim 10, further comprising utilizing, via the operating system login authentication module (108), the known user information to secure a private key, the private key being accessible when the login information is authenticated against the known user information; and optionally,
the method further comprising generating, via the operating system login authentication module (108), the signed login success message using the private key, or
wherein the private key is encrypted by a trusted execution environment at the apparatus.

12. The method of claim 7, further comprising reconfiguring, via an authentication recovery module (302), user-related security at the apparatus by communicating with a remote resource.

13. At least one non-transitory or tangible machine-readable medium comprising instructions which, when executed on a computing device, cause the computing device to implement or perform a method as claimed in any of claims 7-12.

14. A system comprising means to implement or perform a method as claimed in any of claims 7-12.

15. A computing device arranged to implement or perform a method as claimed in any of claims 7-12.
TECHNICAL FIELD

The present disclosure relates to device security, and more particularly, to systems configured to enhance software-based protection schemes with hardware-enforced security.

BACKGROUND

The variety of mobile devices emerging in the marketplace continues to expand due to, for example, increasing functionality that makes them applicable to many everyday situations. For example, the ability of mobile "smart" devices to communicate data in addition to supporting simple voice interaction makes these devices useful for handling tasks that traditionally needed to be handled over a wired connection (e.g., a desktop linked to the Internet), in person, etc. These tasks may be conducted using various applications on the mobile device that are configured to provide functionality such as, for example, personal or professional interactivity (e.g., email, messaging, etc.), financial management and transactions (e.g., banking, electronic purchases, etc.), database functionality such as contact management, entertainment applications, etc.

However, the convenience created by mobile device use comes with some inherent risks. There is substantial resale value in the mobile device itself, which may make it attractive to others who may wish to possess it unlawfully. Then there is the information stored on the mobile device. This information may include identification information (name, address, phone numbers, social security numbers, etc.), financial account information (e.g., bank accounts, credit card numbers, etc.), login information for personal or business-related networks, etc. This information may be worth much more than the actual mobile device itself in that it may allow others to wrongfully access information, make unauthorized transactions, impersonate the device user, etc.

Existing mobile device security may be provided by the device operating system (OS) or by third-party security applications that execute with OS-level privilege. While effective against average users, more advanced users may circumvent these protections by attacking the device at the OS level. For example, dictionary attacks may be utilized to determine passwords, sensitive information may be retrieved by forcibly accessing the device memory using external interfaces, the mobile device may be reconfigured by installing a new OS, etc. As a result, mobile devices continue to be an attractive target for wrongdoers who know how to prey upon their weaknesses.

US200500039013A1 discloses an embodiment providing a method comprising storing user authentication information in a hardware structure of a computer system, the hardware structure including a security mechanism to protect the stored authentication information from unauthorized access, and authenticating a user of the computer system by comparing user input authentication information with the stored authentication information.

The article Bajikar S.: "Trusted Platform Module (TPM) based Security on Notebook PCs", Mobile Platforms Group Intel Corporation, 20 June 2002, XP008141583, discloses a TPM architecture comprising TPM hardware along with its supporting software and firmware that provides a platform root of trust. It is able to extend its trust to other parts of the platform by building a chain of trust, where each link extends its trust to the next one. The TPM is basically a secure micro-controller with added cryptographic functionality. To simplify system integration into the PC platform, the TPM uses a Low Pin Count (LPC) bus interface to attach to the PC chipset.
The TPM provides a set of crypto capabilities that allow certain crypto functions to be executed within the TPM hardware. Hardware and software agents outside of the TPM do not have access to the execution of these crypto functions within the TPM hardware, and as such, can only provide I/O to the TPM. The TPM contains a hardware engine to perform up to 2048-bit RSA encryption/decryption. The TPM uses its built-in RSA engine during digital signing and key wrapping operations. The TPM uses its built-in hash engine to compute hash values of small pieces of data. Large pieces of data (such as an email message) are hashed outside of the TPM, as the TPM hardware may be too slow in performance for such purposes.

The invention is defined in the independent claims. Embodiments of the invention are described in the dependent claims.

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of various embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals designate like parts, and in which:

FIG. 1 illustrates an example device configured for hardware-enforced access protection in accordance with at least one embodiment of the present disclosure;

FIG. 2 illustrates an example device configuration in accordance with at least one embodiment of the present disclosure;

FIG. 3 illustrates an example configuration for an authentication module in accordance with at least one embodiment of the present disclosure;

FIG. 4 illustrates a flowchart of example operations for activation and user authentication recovery in accordance with at least one embodiment of the present disclosure; and

FIG. 5 illustrates a flowchart of example operations for hardware-enforced access protection in accordance with at least one embodiment of the present disclosure.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications and variations thereof will be apparent to those skilled in the art.

DETAILED DESCRIPTION

This disclosure describes systems and methods for hardware-enforced access protection. A device may comprise, for example, a login agent module (LAM), an operating system login authentication module (OSLAM) and a secure user authentication module (SUAM). Initially, the LAM may be configured to cause a prompt to be presented by a user interface in the device. The prompt may request login information to be entered into the device (e.g., by a device user). The LAM may be further configured to provide the login information to the OSLAM, which may be configured to authenticate the login information. If the login information is authenticated, the OSLAM may be further configured to transmit a signed login success message to the SUAM. In one embodiment, the SUAM may be loaded (e.g., by firmware) into a secure memory space in the device such as, for example, a trusted execution environment (TEE). The SUAM may be configured to authenticate the signed login success message, and if authenticated, to transmit an encrypted authentication message to the OSLAM. The OSLAM may be further configured to decrypt and authenticate the encrypted authentication message. If authenticated, the OSLAM may grant access to the device.

In one embodiment, the OSLAM may include a private key and the SUAM may include a public key. The private key may be protected by known user information in the device. When a user logs into the device (e.g., enters login information), the login information may be compared to the known user information, and access may be granted to the private key only when the login information corresponds to the known user information. The private key may then be used to generate the signed login success message, which may be authenticated by the SUAM using the public key. Likewise, the encrypted authentication message may be decrypted by the OSLAM using the private key. In one embodiment, the private key may be encrypted by the TEE root to provide additional protection (e.g., to prevent the private key from being determined as the result of a dictionary attack wherein key combinations are continually guessed until a match is found).
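As an illustration of this key flow, the following Python sketch uses the third-party cryptography package to sign a login success message with an RSA private key, verify it with the corresponding public key, and return an encrypted authentication message. The algorithm, key size, padding choices, and message contents are assumptions made for demonstration only; the disclosure does not mandate any particular cryptographic scheme, and a real design would likely use separate key pairs for signing and encryption.

    # Illustrative sketch only; one RSA key pair is reused here for
    # brevity, which a production design would avoid.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    oslam_private = rsa.generate_private_key(public_exponent=65537,
                                             key_size=2048)
    suam_public = oslam_private.public_key()   # provisioned into the SUAM

    # OSLAM: sign a login success message with the private key.
    message = b"login-success:user42"
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = oslam_private.sign(message, pss, hashes.SHA256())

    # SUAM: verify the signature (raises InvalidSignature on tampering),
    # then answer with an authentication message encrypted to the OSLAM.
    suam_public.verify(signature, message, pss, hashes.SHA256())
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    reply = suam_public.encrypt(b"auth-ok", oaep)

    # OSLAM: decrypt and check the authentication message before
    # granting access to the device.
    assert oslam_private.decrypt(reply, oaep) == b"auth-ok"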
When a user logs into the device (e.g., enters login information), the login information may be compared to the known user information, and access may be granted to the private key only when the login information corresponds to the known user information. The private key may then be used to generate the signed login success message, which may be authenticated by the SUAM using the public key. Likewise, the encrypted authentication message may be decrypted by the OSLAM using the private key. In one embodiment, the private key may be encrypted by the TEE root to provide additional protection (e.g., to prevent the private key from being determined as the result of a dictionary attack where key combinations are continually guessed until a match is found).
In one embodiment, the device may further include a firmware interface module. The firmware interface module may be configured to convey the signed login success message from the OSLAM to the SUAM, and conversely, to convey the encrypted authentication message from the SUAM to the OSLAM. In the same or a different embodiment, the device may further include a secure policy module (SPM) and/or an authentication recovery module (ARM). The SPM may be configured to control the operation of the LAM, OSLAM and/or the SUAM. In the instance of the LAM and OSLAM, the SPM may be configured to determine a context for the device (e.g., device location, device condition, etc.) and may set a login policy that controls the operation of the LAM and/or the OSLAM based on the context. For example, the SPM may determine that the device is at a "home" location, and may require less login information to be entered as opposed to when the device is determined to be at an unknown location. The SPM may be further configured to, for example, define a wait timer and/or a maximum number of login attempts in the SUAM. The wait timer may define how long the SUAM will wait for receipt of the signed login success message before the SUAM causes the device to shut down. In one embodiment, if the wait timer expires without the signed login success message being received, the SUAM may then determine whether the maximum number of login attempts has been exceeded. If the maximum number of login attempts has been exceeded, the SUAM may place the device into a lockout state, wherein access may be denied to the operating system until a user authentication recovery is performed. The ARM may be configured to perform user authentication recovery when the device is connected to a remote resource (e.g., a computing device accessible via a network connection).
In one embodiment, the remote resource may be a server accessible via the Internet, the server being configured to provision private keys to devices. For example, when activated the device may be configured to initially determine whether it is connected to the remote resource. The device may then be configured to determine if the private key is present in the device. If the device is not connected to the remote resource and the private key is present in the device, then hardware-enforced access protection may be started in the device. If the device is not connected to the remote resource and the private key is not present in the device, then the device may present a notification (e.g., to the device user) instructing that a security setup must be performed before using the device, the security setup requiring the device to be connected to the remote resource.
If the device is connected to the remote resource and the device is a new device (e.g., no private key is present), then security software may be downloaded to the device (e.g., some or all of the LAM and OSLAM) from the remote resource and the private key may be provisioned by the remote resource. If the device is connected to the remote resource and user authentication recovery is required, the remote resource may perform user authentication recovery (e.g., may confirm the user's identity through personal inquiries, passwords, keys, etc.). If the user is authenticated by the remote resource, the existing security configuration in the device may be reset and a new private key may be provisioned to the device from the remote resource.
FIG. 1 illustrates example device 100 configured for hardware-enforced access protection in accordance with at least one embodiment of the present disclosure. Examples of device 100 may include, but are not limited to, a mobile communication device such as a cellular handset or a smartphone based on the Android® operating system (OS), iOS®, Blackberry® OS, Palm® OS, Symbian® OS, etc., a mobile computing device such as a tablet computer like an iPad®, Galaxy Tab®, Kindle Fire®, etc., an Ultrabook® including a low-power chipset manufactured by Intel Corp., a netbook, a notebook computer, a laptop computer, a computing device that is typically stationary such as a desktop computer, etc. Device 100 may include, for example, at least two types of execution environment in which various modules in device 100 may operate. Low privilege environment 102 may be, for example, an operating system (OS) in device 100. Modules operating in low privilege environment 102 are not "measured" (e.g., verified based on a hash of their code to determine authenticity), and thus, may be freely written to, executed, changed, etc. High privilege environment 104 may be, for example, a trusted execution environment (TEE) in device 100. High privilege environment 104 may provide a cryptographically protected execution environment in which modules may operate that is separate from possible interference or intervention by outside influences. As a result, hardware management, emulation, debugging, security and other system-critical features generally execute in high privilege environment 104.
In one embodiment, low privilege environment 102 in device 100 (e.g., an OS in device 100) may include at least LAM 106 and OSLAM 108. LAM 106 may be configured to obtain login information (e.g., from a user of device 100). For example, LAM 106 may cause a prompt to be presented via a user interface of device 100 (e.g., a display) requesting the entry of login information into device 100 (e.g., via a touch screen, keypad, etc. in device 100). The login information may then be passed to OSLAM 108, which may be configured to authenticate the login information. For example, OSLAM 108 may determine if the login information received from LAM 106 corresponds to known user information in device 100. If the login information is authenticated, OSLAM 108 may generate a signed login success message. For example, the signed login success message may be signed using a private key in OSLAM 108. In one embodiment, low privilege environment 102 may further comprise firmware interface module (FIM) 110. FIM 110 may be configured to facilitate interaction between OSLAM 108 and SUAM 112 in high privilege execution environment 104.
For example, the signed login success message may be transmitted from OSLAM 108 to FIM 110, and from FIM 110 to SUAM 112. FIM 110 has been shown as optional in FIG. 1 as its functionality may also be incorporated into OSLAM 108.
SUAM 112 may be loaded into high privilege environment 104. For example, SUAM 112 may be loaded into device 100 upon activation from firmware in device 100. SUAM 112 may also be "measured" in that, during loading, the hash value of the program code of SUAM 112 may be compared to a hash value of a known good version of SUAM 112 by hardware in device 100, and if the hash of the program code matches the known good hash value then the code may be allowed to load. SUAM 112 may be configured to control access to device 100 based on receipt of the signed login success message. For example, if the signed login success message is received, SUAM 112 may then authenticate the message (e.g., using a public key). If the signed login success message is authenticated, SUAM 112 may transmit an encrypted authentication message to OSLAM 108 (e.g., via FIM 110), which may cause OSLAM 108 to unlock device 100 (e.g., to grant access to device 100). Otherwise, if the signed login success message is not received, or if a received signed login success message cannot be authenticated, then SUAM 112 may cause the device to shut down. In some instances, SUAM 112 may also cause the device to enter a lockout state requiring user authentication recovery.
FIG. 2 illustrates an example configuration for device 100' in accordance with at least one embodiment of the present disclosure. Device 100' may comprise, for example, system module 200, which may be configured to manage normal operations in device 100'. System module 200 may comprise, for example, processing module 202, memory module 204, power module 206, user interface module 208 and communication interface module 210, which may be configured to interact with communication module 212. Further, authentication module 214 may be configured to interact with at least communication module 212 and user interface module 208. While communication module 212 and authentication module 214 are arranged separately from system module 200 in FIG. 2, this is merely for the sake of explanation herein. Some or all of the functionality associated with modules 212 and 214 may also be included in system module 200.
In device 100', processing module 202 may comprise one or more processors situated in separate components, or alternatively, may comprise one or more processing cores embodied in a single component (e.g., in a System-on-a-Chip (SOC) configuration) and any processor-related support circuitry (e.g., bridging interfaces, etc.). Example processors may include various x86-based microprocessors available from the Intel Corporation including those in the Pentium, Xeon, Itanium, Celeron, Atom, Core i-series product families. Examples of support circuitry may include chipsets (e.g., Northbridge, Southbridge, etc. available from the Intel Corporation) configured to provide an interface through which processing module 202 may interact with other system components that may be operating at different speeds, on different buses, etc. in device 100'. Some or all of the functionality commonly associated with the support circuitry may also be included in the same physical package as the processor (e.g., an SOC package like the Sandy Bridge integrated circuit available from the Intel Corporation).
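The "measured" loading of SUAM 112 described above reduces to a hash comparison. The following Python fragment is a behavioral sketch only; in practice the comparison would be performed by hardware or firmware in device 100, and the image contents and hash source shown here are hypothetical.

    import hashlib

    # Hash of a known good version of the SUAM program code, assumed to be
    # held in tamper-resistant storage (hypothetical value for illustration).
    KNOWN_GOOD_HASH = hashlib.sha256(b"suam program code v1").hexdigest()

    def measured_load(program_code: bytes) -> bool:
        # Allow the code to load only if its hash matches the known good hash.
        return hashlib.sha256(program_code).hexdigest() == KNOWN_GOOD_HASH

    # The load is permitted for the authentic image and refused otherwise.
    assert measured_load(b"suam program code v1")
    assert not measured_load(b"suam program code v1 + tampering")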
In one embodiment, processing module 202 may be equipped with virtualization technology (e.g., VT-x technology available in some processors and chipsets available from the Intel Corporation) allowing for the execution of multiple virtual machines (VM) on a single hardware platform. For example, VT-x technology may also incorporate trusted execution technology (TXT) configured to reinforce software-based protection with a hardware-enforced measured launch environment (MLE) such as previously described with respect to the loading of SUAM 112 from firmware in device 100.
Processing module 202 may be configured to execute instructions in device 100'. Instructions may include program code configured to cause processing module 202 to perform activities related to reading data, writing data, processing data, formulating data, converting data, transforming data, etc. Information (e.g., instructions, data, etc.) may be stored in memory module 204. Memory module 204 may comprise random access memory (RAM) or read-only memory (ROM) in a fixed or removable format. RAM may include memory configured to hold information during the operation of device 100' such as, for example, static RAM (SRAM) or Dynamic RAM (DRAM). ROM may include memories such as BIOS memory configured to provide instructions when device 100' activates, programmable memories such as electronic programmable ROMs (EPROMS), Flash, etc. Other fixed and/or removable memory may include magnetic memories such as, for example, floppy disks, hard drives, etc., electronic memories such as solid state flash memory (e.g., embedded multimedia card (eMMC), etc.), removable memory cards or sticks (e.g., micro storage device (uSD), USB, etc.), optical memories such as compact disc-based ROM (CD-ROM), etc. Power module 206 may include internal power sources (e.g., a battery) and/or external power sources (e.g., electromechanical or solar generator, power grid, etc.), and related circuitry configured to supply device 100' with the power needed to operate.
User interface module 208 may include circuitry configured to allow users to interact with device 100' such as, for example, various input mechanisms (e.g., microphones, switches, buttons, knobs, keyboards, speakers, touch-sensitive surfaces, one or more sensors configured to capture images and/or sense proximity, distance, motion, gestures, etc.) and output mechanisms (e.g., speakers, displays, lighted/flashing indicators, electromechanical components for vibration, motion, etc.). Communication interface module 210 may be configured to handle packet routing and other control functions for communication module 212, which may include resources configured to support wired and/or wireless communications. Wired communications may include serial and parallel wired mediums such as, for example, Ethernet, Universal Serial Bus (USB), Firewire, Digital Visual Interface (DVI), High-Definition Multimedia Interface (HDMI), etc. Wireless communications may include, for example, close-proximity wireless mediums (e.g., radio frequency (RF) such as based on the Near Field Communications (NFC) standard, infrared (IR), optical character recognition (OCR), magnetic character sensing, etc.), short-range wireless mediums (e.g., Bluetooth, WLAN, Wi-Fi, etc.) and long range wireless mediums (e.g., cellular, satellite, etc.). In one embodiment, communication interface module 210 may be configured to prevent wireless communications that are active in communication module 212 from interfering with each other.
In performing this function, communication interface module 210 may schedule activities for communication module 212 based on, for example, the relative priority of messages awaiting transmission.
Authentication module 214 may be configured to handle some or all of the operations disclosed in FIG. 1 with respect to LAM 106, OSLAM 108, FIM 110 and SUAM 112. In this pursuit, authentication module 214 may interact with communication module 212 and/or user interface module 208. An example configuration for authentication module 214 describing an example interaction is disclosed in FIG. 3 (e.g., as authentication module 214'). Authentication module 214' may comprise LAM 106, OSLAM 108 and SUAM 112 as previously disclosed. LAM 106 may interact with user interface module 208 to obtain login information (LI) (e.g., to prompt a user to enter login information and to obtain the login information entered by the user). The LI may then be provided to OSLAM 108, which may authenticate the LI, and if authenticated, may transmit a signed login success message (SLSM) to SUAM 112. SUAM 112 may then authenticate the SLSM, and if authenticated, may transmit an encrypted authentication message (EAM) back to OSLAM 108, which may then unlock device 100 (e.g., grant access to device 100).
In one embodiment, authentication module 214' may also include secure login policy module (SLPM) 300 and authentication recovery module (ARM) 302. SLPM 300 may be configured to control LAM 106, OSLAM 108 and/or SUAM 112. For example, SLPM 300 may be configured to control LAM 106 and/or OSLAM 108 by establishing a login policy for one or both of these modules based on a context. Context, as referenced herein, may pertain to, for example, the status of device 100', the environment in which device 100' is operating, etc. For example, SLPM 300 may be configured to determine when device 100' is in a safe location (e.g., a known user's home, car, workplace, etc.). SLPM 300 may determine location by, for example, obtaining positioning information from communication module 212 (e.g., Global Positioning System (GPS) coordinates, identification of short-range networks sensed by device 100', etc.). In known locations, the login information required to access device 100' may be reduced (e.g., a password or a simple pattern-based touch screen interaction may be required). Alternatively, the positioning information may indicate that the device is in an unknown or public location (e.g., on public transportation, at an airport or train terminal, at a restaurant, etc.). In such instances, the security policy may require more comprehensive login information (e.g., username and password, answers to identity challenge questions, biometrics such as fingerprint or retina scans, etc.). Context may also apply to the condition of device 100'. For example, device 100' may have been shut down and restarted (e.g., by normal shutdown or by removal of the battery), restarted after a security-related configuration was changed (e.g., after user identity information was changed or security software was disabled), etc. In such contexts, more comprehensive login information may be required by the login policy to ensure that device 100' is not compromised.
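A context-driven login policy of the kind described above can be modeled as a simple lookup. The Python sketch below is illustrative only; the context names and credential lists are hypothetical examples rather than values taken from the disclosure.

    # Hypothetical mapping from device context to required login information.
    LOGIN_POLICY = {
        "home": ["pattern"],                       # known, safe location
        "workplace": ["password"],
        "public": ["username", "password", "biometric"],
        "config_changed": ["username", "password", "challenge_questions"],
    }

    def required_login_info(context: str) -> list:
        # Unrecognized contexts fall back to the most comprehensive requirement.
        return LOGIN_POLICY.get(context, LOGIN_POLICY["public"])

    print(required_login_info("home"))     # ['pattern']
    print(required_login_info("airport"))  # falls back to the 'public' policy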
The security policy may control the operation of LAM 106 by changing the prompt requesting login information based on the type and/or amount of login information required, and/or may control OSLAM 108 by changing the manner in which the login information is authenticated against known user information.
SLPM 300 may also be configured to control the operation of SUAM 112. For example, SLPM 300 may determine a wait time and/or may start a wait timer controlling the amount of time SUAM 112 will wait to receive an SLSM before shutting down device 100'. For example, SLPM 300 may set a two-minute wait timer, and may start the wait timer after the prompt is first displayed to the user to login to device 100'. The manner in which SUAM 112 interacts with the wait timer may change based on the particular configuration of device 100'. For example, the wait time may be short, and may be reset every time SUAM 112 receives the SLSM but does not authenticate the SLSM. Alternatively, the wait timer may be longer and may continue until an SLSM is both received and authenticated by SUAM 112. If no SLSM is received by SUAM 112, or if an SLSM is received but not authenticated, SUAM 112 may shut down device 100'. In one embodiment, SLPM 300 may also set a number of login attempts allowed in the device. The limit on the number of login attempts may provide an extra layer of security in that someone who is attempting a dictionary attack may be quickly limited in their ability to guess passwords, etc. The number of login attempts may be checked by SUAM 112 before shutting down device 100', and if the number of login attempts has been exceeded, SUAM 112 may place device 100' in a lockout state requiring user authentication recovery before device 100' can again be accessed.
In one embodiment, ARM 302 may be configured to provide user authentication recovery when security information is lost or forgotten, or alternatively, if device 100' has been placed in a lockout state. In performing user authentication recovery, ARM 302 may use communication module 212 to interact with a remote resource such as, for example, a computing device (e.g., a server) accessible via a network (e.g., the Internet). In one embodiment, the user authentication recovery may be orchestrated by the remote resource wherein user identity may be verified by, for example, username and password, challenge questions, biometric data, etc. Once the user's identity is verified, a security configuration in device 100' may be reset by the remote resource.
FIG. 4 illustrates a flowchart of example operations for activation and user authentication recovery in accordance with at least one embodiment of the present disclosure. At least one way in which security may be maintained in a device is by protecting the security configuration from tampering. In one embodiment, software-based security and hardware-based security may exchange encrypted authentication messages. Further to placing the hardware-based security in firmware, the installation and configuration of the software-based security may be controlled by a secure remote resource. In this manner, traditional ways of circumventing device security may be thwarted. For example, a device secured in a manner consistent with the present disclosure may be obtained, and an attempt may be made to bypass the software-based security features by clearing the device memory and installing a new operating system.
However, since the software-based security (e.g., some or all of LAM 106, OSLAM 108 and a private key used by OSLAM 108) may be installed by the remote resource, the new operating system will not provide the messaging expected by the hardware-based security, and thus, the device will continue to power down and possibly enter a lockout state until user authentication recovery is performed to reset the security.
In operation 400 a device may be started. Starting the device may generally include, for example, switching on, powering on, turning on, booting, etc. the device from a powered-down condition, rebooting or restarting the device from a powered-up condition, etc. Upon starting the device, a determination may then be made in operation 402 as to whether the device is connected to a remote resource. For example, a determination may be made if the device is connected via a wired or wireless connection to the Internet, and further, whether contact can be made with a computing device (e.g., a server) using the wired or wireless connection. If in operation 402 a determination is made that the device is not connected to the remote resource, then in operation 404 a further determination may be made as to whether a private key has already been established in the device. There may be no private key in the device if, for example, the device has not been used (e.g., was recently purchased), if the OS has been overwritten (e.g., someone is trying to reuse the hardware), etc. If in operation 404 it is determined that no private key has been established in the device, then in operation 406 a notification may be presented by the device (e.g., may be displayed on the user interface of the device), the notification informing that the device must be connected to the remote resource to set up security in the device. The device may then shut down in operation 408 and may further optionally return to operation 400 if, for example, the device is again activated. Otherwise, if a determination is made that a private key exists, then in operation 410 hardware-enforced access protection may be activated in the device (e.g., an example of which is disclosed in FIG. 5).
If in operation 402 a determination is made that the device is connected to the remote resource, then in operation 412 a further determination may be made as to whether the device is being activated (e.g., requesting security to be set up) for the first time. The determination may be based on various factors including, for example, whether a private key has been established in the device, whether a serial number for the device has been recorded as registered in the remote resource, etc. If it is determined in operation 412 that the device is being activated for the first time, then in operation 414 a user information inquiry may be performed to obtain information about the owner of the device (e.g., for device owner registration, warranty and later use in user authentication recovery) and security software may be downloaded to the device from the remote resource. In one embodiment, the security software may include at least some or all of the LAM and the OSLAM. Then in operation 416 the private key may be provisioned to the device from the remote resource and hardware-enforced access protection may be activated in operation 410.
If in operation 412 it is determined that the device is not being activated for the first time, then user authentication recovery may be activated in operation 418.
For example, the device user may be asked to provide username and password information, may be required to respond to challenge questions based on information provided during first time activation, may be required to provide biometric information, etc. A determination may then be made in operation 420 as to whether user authentication was recovered (e.g., whether the user was identified as a known user for the device). If it is determined in operation 420 that the user authentication is not recovered, then in operation 410 hardware-enforced access protection may be started to protect the device from being compromised. If the user authentication is recovered, then in operation 422 security in the device may be reset (e.g., new passwords may be set and the existing private key may be deleted), and in operation 416 a new private key may be provisioned to the device by the remote resource. In operation 410 hardware-enforced access protection may be activated in the device.
FIG. 5 illustrates a flowchart of example operations for hardware-enforced access protection in accordance with at least one embodiment of the present disclosure. The operations in FIG. 5 are from the perspective of a SUAM in a device. In operation 500 a scenario may be sensed requiring a user to log into the device. For example, the device may have been powered down and reactivated, a security configuration may have changed in the device, a context of the device and/or environment may have changed and login may be dictated by the login policy, etc. A determination may then be made in operation 502 as to whether the device is in a lockout state. If it is determined in operation 502 that the device is in the lockout state, then in operation 504 a notification may be presented (e.g., may be displayed on a user interface in the device), the notification instructing that user authentication recovery must be performed before the device may again be accessed. Access may then be denied (e.g., the device may be shut down) and the device may optionally return to operation 500 wherein scenarios requiring login may be sensed.
If it is determined in operation 502 that the device is not in a lockout state, a wait timer may be started in operation 508. A determination may be made in operation 510 as to whether a SLSM is received. If in operation 510 it is determined that a SLSM has been received, then in operation 512 a further determination may be made as to whether the received SLSM can be authenticated. If in operation 512 it is determined that the SLSM has been authenticated, then in operation 514 an encrypted authentication message may be transmitted. The device may then optionally return to sensing for scenarios requiring login in operation 500. If the signed login success message cannot be authenticated in operation 512, the wait timer may be restarted in operation 508 and the SUAM may continue to wait for receipt of the SLSM in operations 510 and 516. If a determination is made in operation 516 that the wait timer has expired without receiving a SLSM, then a further determination may be made in operation 518 as to whether a certain number of login attempts has been met or exceeded in the device. If it is determined in operation 518 that the number of login attempts has been met or exceeded, then in operation 520 the device may be placed into a lockout state and notification of the lockout state may again be presented in operation 504.
Otherwise, if it is determined in operation 518 that the number of login attempts has not been exceeded, then in operation 506 access may be denied (e.g., the device may be shut down) followed by an optional return to operation 500 to again sense login scenarios.
While FIGS. 4 and 5 illustrate various operations according to different embodiments, it is to be understood that not all of the operations depicted in FIGS. 4 and 5 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIGS. 4 and 5, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.
As used in any embodiment herein, the term "module" may refer to software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage mediums. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices. "Circuitry", as used in any embodiment herein, may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smart phones, etc.
Any of the operations described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), embedded multimedia cards (eMMCs), secure digital input/output (SDIO) cards, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device.
Thus, the present disclosure is directed to systems and methods related to hardware-enforced access protection.
An example device may comprise a login agent module (LAM), an operating system login authentication module (OSLAM) and a secure user authentication module (SUAM). The LAM may be configured to cause a prompt requesting login information to be presented by the device. The LAM may then provide the login information to the OSLAM, which may be configured to authenticate the login information using known user information. If authenticated, the OSLAM may generate and transmit a signed login success message to the SUAM using a private key. The SUAM may be secure/trusted software loaded by device firmware, and may be configured to authenticate the signed login success message. If authenticated, the SUAM may transmit an encrypted authentication message to the OSLAM. If the encrypted authentication message is authenticated, the OSLAM may grant access to the device.
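The overall exchange summarized above can be modeled end to end. In the Python sketch below, an HMAC over a shared secret stands in for the private/public key signing, verification, and encryption performed by the OSLAM and SUAM; that substitution, and every name in the fragment, is an illustrative assumption rather than the disclosed implementation.

    import hashlib, hmac, os
    from typing import Optional

    SHARED = os.urandom(32)  # stand-in for the provisioned private/public key pair

    def oslam_sign_login_success(session: bytes) -> bytes:
        # OSLAM: after authenticating the login information, emit the SLSM.
        return session + hmac.new(SHARED, session, hashlib.sha256).digest()

    def suam_handle_slsm(slsm: bytes) -> Optional[bytes]:
        # SUAM: authenticate the SLSM; return the EAM on success, else None
        # (None would lead to shutdown or, after repeated failures, lockout).
        session, tag = slsm[:-32], slsm[-32:]
        expected = hmac.new(SHARED, session, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None
        return hmac.new(SHARED, b"EAM" + session, hashlib.sha256).digest()

    def oslam_verify_eam(session: bytes, eam: bytes) -> bool:
        # OSLAM: authenticate the EAM before granting access to the device.
        expected = hmac.new(SHARED, b"EAM" + session, hashlib.sha256).digest()
        return hmac.compare_digest(eam, expected)

    session = os.urandom(16)
    eam = suam_handle_slsm(oslam_sign_login_success(session))
    assert eam is not None and oslam_verify_eam(session, eam)  # access granted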
A processing device handles two or more operating threads. A non-volatile logic controller (1806) stores first program data from a first program in a first set of non-volatile logic element arrays (1812) and second program data from a second program in a second set of non-volatile logic element arrays (1814). The first program and the second program can correspond to distinct executing threads, and the storage can be completed in response to receiving a stimulus regarding an interrupt for the computing device apparatus or in response to a power supply quality problem for the computing device apparatus. When the device needs to switch between processing threads, the non-volatile logic controller (1806) restores the first program data or the second program data from the non-volatile logic element arrays (1810) in response to receiving a stimulus regarding whether the first program or the second program is to be executed by the computing device apparatus.
CLAIMS
What is Claimed is:
1. A computing device apparatus providing non-volatile logic based computing, the apparatus comprising: a plurality of non-volatile logic element arrays; a plurality of volatile storage elements; at least one non-volatile logic controller configured to control the plurality of non-volatile logic element arrays to store a machine state represented by the plurality of volatile storage elements and to read out a stored machine state from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements; wherein the at least one non-volatile logic controller is further configured to store first program data from a first program executed by the computing device apparatus in a first set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays; wherein the at least one non-volatile logic controller is further configured to store second program data from a second program executed by the computing device apparatus in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays; wherein the at least one non-volatile logic controller is further configured to restore the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first program or the second program is to be executed by the computing device apparatus.
2. The computing device apparatus of claim 1 wherein the at least one non-volatile logic controller is further configured to store the first program data or the second program data to the plurality of non-volatile logic element arrays in response to receiving a stimulus regarding an interrupt for the computing device apparatus.
3. The computing device apparatus of claim 2 wherein the at least one non-volatile logic controller is further configured to restore either the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements based upon a type of the interrupt for the computing device apparatus.
4. The computing device apparatus of claim 1 wherein the first program and the second program correspond to distinct executing threads or virtual machines for the computing device apparatus.
5. The computing device apparatus of claim 1 further comprising a multiplexer connected to variably connect individual ones of the volatile storage elements to one or more corresponding individual ones of the non-volatile logic element arrays.
6. The computing device apparatus of claim 5 wherein the at least one non-volatile logic controller is further configured to store the first program data or the second program data to the plurality of non-volatile logic element arrays by controlling the multiplexer to connect individual ones of the plurality of volatile storage elements to either the first set of non-volatile logic element arrays or the second set of non-volatile logic element arrays based on whether the first program or the second program is executing in the computing device apparatus.
7. The computing device apparatus of claim 1 further comprising a multiplexer connected to variably connect outputs of individual ones of the non-volatile logic element arrays to inputs of one or more corresponding individual ones of the volatile storage elements.
8. The computing device apparatus of claim 7 wherein the at least one non-volatile logic controller is further configured to restore the first program data or the second program data to the plurality of volatile storage elements by controlling the multiplexer to connect inputs of individual ones of the plurality of volatile storage elements to outputs of either the first set of non-volatile logic element arrays or the second set of non-volatile logic element arrays based on whether the first program or the second program is to be executed in the computing device apparatus.
9. The computing device apparatus of claim 1 wherein the at least one non-volatile logic controller is further configured to store the first program data or the second program data to the plurality of non-volatile logic element arrays in response to a power supply quality problem for the computing device apparatus.
10. A method comprising: operating a processing device having at least a first processing thread and a second processing thread using a plurality of volatile storage elements; storing first program data stored in the plurality of volatile storage elements during execution of the first processing thread in a first set of non-volatile logic element arrays of a plurality of non-volatile logic element arrays; storing second program data stored in the plurality of volatile storage elements during execution of the second processing thread in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays; restoring the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first processing thread or the second processing thread is to be executed.
11. The method of claim 10 further comprising storing the first program data or the second program data to the plurality of non-volatile logic element arrays in response to an interrupt for the processing device.
12. The method of claim 11 further comprising restoring either the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements based upon a type of the interrupt for the processing device.
13. The method of claim 10 further comprising storing the first program data or the second program data to the plurality of non-volatile logic element arrays by controlling a multiplexer to connect individual ones of the plurality of volatile storage elements to either the first set of non-volatile logic element arrays or the second set of non-volatile logic element arrays based on whether the first processing thread or the second processing thread is executing in the processing device.
14. The method of claim 13 further comprising storing the first program data or the second program data to the plurality of non-volatile logic element arrays in response to a power supply quality problem for the processing device.
15. A method comprising: operating a processing device having at least a first processing thread and a second processing thread using a plurality of volatile storage elements; in response to an interrupt for the processing device, executing one of: storing first program data stored in the plurality of volatile storage elements during execution of the first processing thread in a first set of non-volatile logic element arrays of a plurality of non-volatile logic element arrays, or storing second program data stored in the plurality of volatile storage elements during execution of the second processing thread in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays; restoring the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first processing thread or the second processing thread is to be executed and based upon a type of the interrupt for the processing device; storing the first program data or the second program data to the plurality of non-volatile logic element arrays in response to a power supply quality problem for the processing device.
PROCESSING DEVICE WITH RESTRICTED POWER DOMAIN WAKEUP RESTORE FROM NONVOLATILE LOGIC ARRAY
[0001] This generally relates to nonvolatile memory cells and their use in a system, and in particular, in combination with logic arrays to provide nonvolatile logic modules.
BACKGROUND
[0002] Many portable electronic devices such as cellular phones, digital cameras/camcorders, personal digital assistants, laptop computers and video games operate on batteries. During periods of inactivity the device may not perform processing operations and may be placed in a power-down or standby power mode to conserve power. Power provided to a portion of the logic within the electronic device may be turned off in a low power standby power mode. However, the presence of leakage current during the standby power mode represents a challenge for designing portable, battery operated devices. Data retention circuits such as flip-flops and/or latches within the device may be used to store state information for later use prior to the device entering the standby power mode. The data retention latch, which may also be referred to as a shadow latch or a balloon latch, is typically powered by a separate 'always on' power supply.
[0003] A known technique for reducing leakage current during periods of inactivity utilizes multi-threshold CMOS (MTCMOS) technology to implement the shadow latch. In this approach, the shadow latch utilizes thick gate oxide transistors and/or high threshold voltage (Vt) transistors to reduce the leakage current in standby power mode. The shadow latch is typically detached from the rest of the circuit during normal operation (e.g., during an active power mode) to maintain system performance. To retain data in a "master-slave" flip-flop topology, a third latch, e.g., the shadow latch, may be added to the master latch and the slave latch for the data retention. In other cases, the slave latch may be configured to operate as the retention latch during low power operation. However, some power is still required to retain the saved state. For example, see US Patent 7,639,056, "Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications", which is incorporated by reference herein.
[0004] System on Chip (SoC) is a concept that has been around for a long time; the basic approach is to integrate more and more functionality into a given device. This integration can take the form of either hardware or solution software. Performance gains are traditionally achieved by increased clock rates and more advanced process nodes. Many SoC designs pair a microprocessor core, or multiple cores, with various peripheral devices and memory circuits.
[0005] Energy harvesting, also known as power harvesting or energy scavenging, is the process by which energy is derived from external sources, captured, and stored for small, wireless autonomous devices, such as those used in wearable electronics and wireless sensor networks. Harvested energy may be derived from various sources, such as solar power, thermal energy, wind energy, salinity gradients, kinetic energy, etc. However, typical energy harvesters provide a very small amount of power for low-energy electronics. The energy source for energy harvesters is present as ambient background and is available for use. For example, temperature gradients exist from the operation of a combustion engine, and in urban areas, there is a large amount of electromagnetic energy in the environment because of radio and television broadcasting, etc.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] FIG. 1 is a functional block diagram of a portion of an example system on chip (SoC) as configured in accordance with various embodiments of the invention;
[0007] FIG. 2 is a more detailed block diagram of one flip-flop cloud used in the SoC of FIG. 1;
[0008] FIG. 3 is a plot illustrating polarization hysteresis exhibited by a ferroelectric capacitor;
[0009] FIGS. 4-7 are schematic and timing diagrams illustrating an example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention;
[0010] FIGS. 8-9 are schematic and timing diagrams illustrating another example ferroelectric nonvolatile bit cell as configured in accordance with various embodiments of the invention;
[0011] FIG. 10 is a block diagram illustrating an example NVL array used in the SoC of FIG. 1;
[0012] FIGS. 11A and 11B are more detailed schematics of input/output circuits used in the NVL array of FIG. 10;
[0013] FIG. 12A is a timing diagram illustrating an example offset voltage test during a read cycle as configured in accordance with various embodiments of the invention;
[0014] FIG. 12B illustrates a histogram generated during an example sweep of offset voltage as configured in accordance with various embodiments of the invention;
[0015] FIG. 13 is a schematic illustrating parity generation in the NVL array of FIG. 10;
[0016] FIG. 14 is a block diagram illustrating example power domains within an NVL array as configured in accordance with various embodiments of the invention;
[0017] FIG. 15 is a schematic of an example level converter for use in the NVL array as configured in accordance with various embodiments of the invention;
[0018] FIG. 16 is a timing diagram illustrating an example operation of level shifting using a sense amp within a ferroelectric bitcell as configured in accordance with various embodiments of the invention;
[0019] FIG. 17 is a block diagram of an example power detection arrangement as configured in accordance with various embodiments of the invention;
[0020] FIG. 18 is a functional block diagram of a portion of an example system on chip (SoC) and flip flop design with more than one NVL array per flip flop cloud as configured in accordance with various embodiments of the invention;
[0021] FIG. 19 is a flow chart illustrating an example operation of a processing device operating two or more processing threads as configured in accordance with various embodiments of the invention; and
[0022] FIG. 20 is a block diagram of another example SoC that includes NVL arrays as configured in accordance with various embodiments of the invention.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0023] While prior art systems made use of retention latches to retain the state of flip-flops in logic modules during low power operation, some power is still required to retain state. In contrast, nonvolatile elements can retain the state of flip flops in logic modules while power is completely removed. Such logic elements will be referred to herein as Non-Volatile Logic (NVL). A micro-control unit (MCU) implemented with NVL within an SoC (system on a chip) may have the ability to stop, power down, and power up with no loss in functionality. A system reset/reboot is not required to resume operation after power has been completely removed.
This capability is ideal for emerging energy harvesting applications, such as Near Field Communication (NFC), radio frequency identification (RFID) applications, and embedded control and monitoring systems, for example, where the time and power cost of the reset/reboot process can consume much of the available energy, leaving little or no energy for useful computation, sensing, or control functions. Though this description discusses an SOC containing a programmable MCU for sequencing the SOC state machines, one of ordinary skill in the art can see that NVL can be applied to state machines hard coded into ordinary logic gates or ROM, PLA, or PLD based control systems.
[0024] In one approach, an SoC includes one or more blocks of nonvolatile logic. For example, a non-volatile logic (NVL) based SoC may back up its working state (all flip-flops) upon receiving a power interrupt, have zero leakage in sleep mode, and need less than 400 ns to restore the system state upon power-up.
[0025] Without NVL, a chip would either have to keep all flip-flops powered in at least a low power retention state that requires a continual power source even in standby mode or waste energy and time rebooting after power-up. For energy harvesting applications, NVL is useful because there is no constant power source required to preserve the state of flip-flops (FFs), and even when the intermittent power source is available, boot-up code alone may consume all the harvested energy. For handheld devices with limited cooling and battery capacity, zero-leakage ICs (integrated circuits) with "instant-on" capability are ideal.
[0026] Ferroelectric random access memory (FRAM) is a non-volatile memory technology with similar behavior to DRAM (dynamic random access memory). Each individual bit can be accessed, but unlike EEPROM (electrically erasable programmable read only memory) or Flash, FRAM does not require a special sequence to write data nor does it require a charge pump to achieve required higher programming voltages. Each ferroelectric memory cell contains one or more ferroelectric capacitors (FeCap). Individual ferroelectric capacitors may be used as non-volatile elements in the NVL circuits described herein.
[0027] FIG. 1 is a functional block diagram illustrating a portion of a computing device, in this case, an example system on chip (SoC) 100 providing non-volatile logic based computing features. While the term SoC is used herein to refer to an integrated circuit that contains one or more system elements, the teachings of this disclosure can be applied to various types of integrated circuits that contain functional logic modules such as latches, integrated clock gating cells, and flip-flop circuit elements (FF) that provide non-volatile state retention. Embedding non-volatile storage elements outside the controlled environment of a large array presents reliability and fabrication challenges. An NVL bitcell based NVL array is typically designed for maximum read signal margin and in-situ margin testability as is needed for any NV-memory technology. However, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead.
[0028] To amortize the test feature costs and improve manufacturability, and with reference to the example of FIGS. 1 and 2, a plurality of non-volatile logic element arrays or NVL arrays 110 are disposed with a plurality of volatile storage elements 220.
At least one non-volatile logic controller 106 is configured to control the plurality of NVL arrays 110 to store a machine state represented by the plurality of volatile storage elements 220 and to read out a stored machine state from the plurality of NVL arrays 110 to the plurality of volatile storage elements 220. For instance, the at least one non-volatile logic controller 106 is configured to generate a control sequence for saving the machine state to or retrieving the machine state from the plurality of NVL arrays 110. A multiplexer 212 is connected to variably connect individual ones of the volatile storage elements 220 to one or more corresponding individual ones of the NVL arrays 110.
[0029] In the illustrated example, the computing device apparatus is arranged on a single chip, here an SoC 100 implemented using 256-bit mini-arrays 110, which will be referred to herein as NVL arrays, of FeCap (ferroelectric capacitor) based bitcells dispersed throughout the logic cloud to save the state of the various flip flops 120 when power is removed. Each cloud 102-104 of FFs 120 includes an associated NVL array 110. Such dispersal results in individual ones of the NVL arrays 110 being arranged physically close to and connected to receive data from corresponding individual ones of the volatile storage elements 220. A central NVL controller 106 controls all the arrays and their communication with FFs 120. While three FF clouds 102-104 are illustrated here, SoC 100 may have additional, or fewer, FF clouds all controlled by NVL controller 106. The SOC 100 can be partitioned into more than one NVL domain in which there is a dedicated NVL controller for managing the NVL arrays 110 and FFs 120 in each of the separate NVL domains. The existing NVL array embodiment uses 256-bit mini-arrays, but the arrays may have a greater or lesser number of bits as needed.
[0030] SoC 100 is implemented using modified retention flip flops 120 including circuitry configured to enable write back of data from individual ones of the plurality of non-volatile logic element arrays to individual ones of the plurality of flip flop circuits. There are various known ways to implement a retention flip flop. For example, a data input may be latched by a first latch. A second latch coupled to the first latch may receive the data input for retention while the first latch is inoperative in a standby power mode. The first latch receives power from a first power line that is switched off during the standby power mode. The second latch receives power from a second power line that remains on during the standby mode. A controller receives a clock input and a retention signal and provides a clock output to the first latch and the second latch. A change in the retention signal is indicative of a transition to the standby power mode. The controller continues to hold the clock output at a predefined voltage level and the second latch continues to receive power from the second power line in the standby power mode, thereby retaining the data input. Such a retention latch is described in more detail in US Patent 7,639,056, "Ultra Low Area Overhead Retention Flip-Flop for Power-Down Applications".
[0031] FIG. 2 illustrates an example retention flop architecture that does not require that the clock be held in a particular state during retention. In such a "clock free" NVL flop design, the clock value is a "don't care" during retention.
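The save and restore sequencing performed by NVL controller 106 through multiplexer 212, as described above, can be mimicked behaviorally. The Python sketch below models FF clouds and NVL arrays as bit lists and illustrates only the data movement, not the FeCap circuit behavior; the sizes and names are illustrative.

    class NvlController:
        """Behavioral model: save volatile FF state to NVL arrays, restore it."""
        def __init__(self, clouds):
            self.clouds = clouds                       # {name: list of FF bits}
            self.arrays = {name: None for name in clouds}

        def backup(self):
            # On a power interrupt, capture every FF cloud into its NVL array.
            for name, ffs in self.clouds.items():
                self.arrays[name] = list(ffs)

        def restore(self):
            # On wakeup, drive the stored state back into the slave latches.
            for name, bits in self.arrays.items():
                if bits is not None:
                    self.clouds[name][:] = bits

    ctrl = NvlController({"cloud_102": [0] * 248, "cloud_103": [0] * 248})
    ctrl.clouds["cloud_102"][7] = 1
    ctrl.backup()                      # state survives complete power removal
    ctrl.clouds["cloud_102"][7] = 0    # power-off: volatile state is lost
    ctrl.restore()
    assert ctrl.clouds["cloud_102"][7] == 1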
[0032] In SoC 100, modified retention FFs 120 include simple input and control modifications to allow the state of each FF to be saved in an associated FeCap bit cell in NVL array 110, for example, when the system is being transitioned to a power-off state. When the system is restored, then the saved state is transferred from NVL array 110 back to each FF 120. Power savings and data integrity can be improved through implementation of particular power configurations. In one such approach, individual retention flip flop circuits include a primary logic circuit portion (master stage or latch) powered by a first power domain (such as VDDL in the below described example) and a slave stage circuit portion powered by a second power domain (such as VDDR in the below described example). In this approach, the first power domain is configured to be powered down and the second power domain is active during write back of data from the plurality of NVL arrays to the plurality of volatile storage elements. The plurality of non-volatile logic elements are configured to be powered by a third power domain (such as VDDN in the below described example) that is configured to be powered down during regular operation of the computing device apparatus.
[0033] With this configuration, a plurality of power domains can be implemented that are independently powered up or powered down in a manner that can be specifically designed to fit a given implementation. Thus, in another aspect, the computing apparatus includes a first power domain configured to supply power to switched logic elements of the computing device apparatus and a second power domain configured to supply power to logic elements configured to control signals for storing data to or reading data from the plurality of non-volatile logic element arrays. Where the plurality of volatile storage elements comprise retention flip flops, the second power domain is configured to provide power to a slave stage of individual ones of the retention flip flops. A third power domain supplies power for the plurality of non-volatile logic element arrays. In addition to the power domains, NVL arrays can be defined as domains relating to particular functions. For example, a first set of at least one of the plurality of non-volatile logic element arrays can be associated with a first function of the computing device apparatus and a second set of at least one of the plurality of non-volatile logic element arrays can be associated with a second function of the computing device apparatus. Operation of the first set of at least one of the plurality of non-volatile logic element arrays is independent of operation of the second set of at least one of the plurality of non-volatile logic element arrays. So configured, flexibility in the control and handling of the separate NVL array domains or sets allows more granular control of the computing device's overall function.
[0034] This more specific control can be applied to the power domains as well. In one example, the first power domain is divided into a first portion configured to supply power to switched logic elements associated with the first function and a second portion configured to supply power to switched logic elements associated with the second function. The first portion and the second portion of the first power domain are individually configured to be powered up or down independently of other portions of the first power domain.
Similarly, the third power domain can be divided into a first portion configured to supply power to non-volatile logic element arrays associated with the first function and a second portion configured to supply power to non-volatile logic element arrays associated with the second function. As with the first power domain, the first portion and the second portion of the third power domain are individually configured to be powered up or down independently of other portions of the third power domain. [0035] So configured, if individual functions are not used for a given device, flip flops and NVL arrays associated with the unused functions can be respectively powered down and operated separately from the other flip flops and NVL arrays. Such flexibility in power and operation management allows one to tailor the functionality of a computing device with respect to power usage and function. This can be further illustrated in the following example design having a CPU, three SPI interfaces, three UART interfaces, three I2C interfaces, and only one logic power domain (VDDL). The logic power domain is distinguished from the retention or NVL power domains (VDDR and VDDN respectively), although these teachings can be applied to those power domains as well. Although this example device has only one logic power domain, a given application for the device might only use one of the three SPI units, one of the three UARTs, and one of the three I2C peripherals. To allow applications to optimize the NVL application wake-up and sleep times and energy costs, the VDDL power domain can be partitioned into 10 separate NVL domains (one CPU, three SPI, three UART, and three I2C, totaling 10 NVL domains), each of which can be enabled/disabled independently of the others. So, the customer could enable NVL capability for the CPU, one SPI, one UART, and one I2C for their specific application while disabling the others. In addition, this partitioning also allows flexibility in time as well as energy, and the different NVL domains can save and restore state at different points in time. [0036] To add further flexibility, NVL domains can overlap with power domains. Referring to the above example, four power domains can be defined: one each for CPU, SPI, UART, and I2C (each peripheral power domain has three functional units) while defining three NVL domains within each peripheral domain and one for the CPU (total of 10 NVL domains again). In this case, individual power domains turn on or off in addition to controlling the NVL domains inside each power domain for added flexibility in power savings and wakeup/sleep timing. [0037] Moreover, individual ones of the first power domain, the second power domain, and the third power domain are configured to be powered down or up independently of other ones of the first power domain, the second power domain, and the third power domain. For instance, integral power gates can be configured to be controlled to power down the individual ones of the first power domain, the second power domain, and the third power domain. As described in Table 1 below, the third power domain is configured to be powered down during regular operation of the computing device apparatus, and the first power domain is configured to be powered down during a write back of data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements. A fourth power domain can be configured to supply power to real time clocks and wake-up interrupt logic.
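Table 1 is referenced here but its contents are not reproduced in this text. The following sketch is therefore an inference from the surrounding description of the three main domains and the system power modes, not a copy of the table; the mode names and the dictionary layout are informal.

# Inferred (not verbatim) summary of power domain states per mode.
# VDDL: switched logic and FF master stages; VDDR: FF slave stages
# (retention); VDDN: NVL arrays and NVL controller. A fourth, always-on
# domain powers real time clocks and wake-up interrupt logic.
power_modes = {
    #  mode                 VDDL   VDDR   VDDN
    "normal_operation":    ("on",  "on",  "off"),
    "backup_to_nvl":       ("on",  "on",  "on"),
    "sleep":               ("off", "off", "off"),
    "restore_from_nvl":    ("off", "on",  "on"),
    "standby_idle":        ("off", "on",  "off"),  # volatile retention only
}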
[0038] Such approaches can be further understood in reference to the illustrated example SoC 100 where NVL arrays 110 and controller 106 are operated on an NVL power domain referred to as VDDN and are switched off during regular operation. All logic, memory blocks 107 such as ROM (read only memory) and SRAM (static random access memory), and the master stages of FFs are on a logic power domain referred to as VDDL. FRAM (ferroelectric random access memory) arrays are directly connected to a dedicated global supply rail (VDDZ) maintained at a higher fixed voltage needed for FRAM (i.e., VDDL <= VDDZ, where VDDZ is a fixed supply and VDDL can be varied as long as VDDL remains at a lower potential than VDDZ). Note that FRAM arrays as shown in 103 typically contain integrated power switches that allow the FRAM arrays to be powered down as needed, though it can easily be seen that FRAM arrays without internal power switches can be utilized in conjunction with power switches that are external to the FRAM array. The slave stages of retention FFs are on a retention power domain referred to as the VDDR domain to enable regular retention in a stand-by mode of operation. Table 1 summarizes power domain operation during normal operation, system backup to NVL arrays, sleep mode, system restoration from NVL arrays, and back to normal operation. Table 1 also specifies domains used during a standby idle mode that may be initiated under control of system software in order to enter a reduced power state using the volatile retention function of the retention flip flops. A set of switches, indicated at 108, is used to control the various power domains. There may be multiple switches 108 that may be distributed throughout SoC 100 and controlled by software executed by a processor on SoC 100 and/or by a hardware controller (not shown) within SoC 100. There may be additional domains in addition to the three illustrated here, as will be described later. [0039] State info could be saved in a large centralized FRAM array, but doing so would require more time to enter sleep mode, a longer wakeup time, excessive routing, and additional power costs caused by the lack of parallel access to system FFs.

Table 1 - system power modes

[0040] FIG. 2 is a more detailed block diagram of one FF cloud 102 used in SoC 100. In this embodiment, each FF cloud includes up to 248 flip flops and each NVL array is organized as an 8 x 32 bit array, with one bit used for parity in this embodiment. However, in other embodiments, the number of flip flops and the organization of the NVL array may have a different configuration, such as 4 x m, 16 x m, etc., where m is chosen to match the size of the FF cloud. In some embodiments, all of the NVL arrays in the various clouds may be the same size, while in other approaches there may be different size NVL arrays in the same SoC. [0041] Block 220 is a more detailed schematic of each retention FF 120. Several of the signals have an inverted version indicated by suffix "B" (referring to "bar" or /), such as RET and RETB, CLK and CLKB, etc. Each retention FF includes a master latch 221 and a slave latch 222. Slave latch 222 is formed by inverter 223 and inverter 224. Inverter 224 includes a set of transistors controlled by the retention signal (RET, RETB) that are used to retain the FF state during low power sleep periods, during which power domain VDDR remains on while power domain VDDL is turned off, as described above and in Table 1. [0042] NVL array 110 is logically connected with the 248 FFs it serves in cloud 102.
Generally speaking, to enable data transfer from an NVL array to the FFs, individual FFs include circuitry configured to enable write back of data from individual ones of the plurality of NVL arrays 110. In the illustrated example, two additional ports are provided on the slave latch 222 of each FF as shown in block 220. A data input port (gate 225) is configured to insert data ND from one of the NVL arrays 110 to an associated volatile storage element 220. The data input port is configured to insert the data ND by allowing passage of a stored data related signal from the one of the NVL arrays to a slave stage of the associated flip flop circuit in response to receiving an update signal NU from the at least one non-volatile logic controller 106 on a data input enable port to trigger the data input port. Inverter 223 is configured to be disabled in response to receiving the inverted NVL update signal NUZ to avoid an electrical conflict between the tri-state inverter 223 and the NVL data port input tri-state inverter 225. [0043] More specifically, in the illustrated example, the inv-inv feedback pair (223 and 224) form the latch itself. These inverters make a very stable configuration for holding the data state and will fight any attempts to change the latch state unless at least one of the inverters is disabled to prevent electrical conflict when trying to overwrite the current state with the next state via one of the data ports. The illustrated NVL FF 220 includes two data ports that access the slave latch 222, as compared to one data port for a regular flop. One port transfers data from the master stage 221 to the slave stage 222 via the CMOS pass gate controlled by the clock. When using this port to update the slave state, the inverter 224 driving onto the output node of the pass gate controlled by CLK is disabled to avoid an electrical conflict, while the inverter 223 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state in preparation for holding the data when the clock goes low (for a posedge FF). [0044] For the same reason, the inverter 223 is disabled when the ND data port is activated by NU transitioning to the active high state, to avoid an electrical conflict on the ND port. The second inverter 224 is enabled to transfer the next state onto the opposite side of the latch so that both sides of the latch have the next state to be latched when NU goes low. In this example, the NU port does not in any way impact the other data port controlled by the clock. On a dual port FF, having both ports active at the same time is an illegal control condition, and the resulting port conflict means that the next state will be indeterminate. To avoid a port conflict, the system holds the clock in the inactive state if the slave state is updated while in functional mode. In retention mode, the RET signal along with supporting circuits inside the FF are used to prevent electrical conflicts independent of the state of CLK (see the inverter controlled by RETB in the master stage). [0045] As illustrated, these additional elements are disposed in the slave stage 222 of the associated FF. The additional transistors, however, are not on the critical path of the FF and have only 1.8% and 6.9% impact on normal FF performance and power, respectively (simulation data), in this particular implementation. When data from the NVL array is valid on the ND (NVL-Data) port, the NU (NVL-Update) control input is pulsed high for a cycle to write to the FF. The thirty-one bit data output of an NVL array fans out to the ND ports of eight thirty-one bit FF groups.
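The port discipline described in paragraphs [0043] and [0044], in which only one driver of the slave latch node may be active at a time, can be restated as a small behavioral sketch. The function below is hypothetical and ignores the inverter-level details:

# Behavioral sketch of the slave latch with its two data ports.
def slave_latch_next(q, master_q, nd, nu, clk):
    # nd/nu: NVL data and update signals; clk gates the master-to-slave
    # pass gate. Both ports active at once is the illegal condition
    # described above.
    if nu and clk:
        raise ValueError("illegal control condition: both ports active")
    if nu:                 # ND port: write back data from the NVL array
        return nd
    if clk:                # normal path: slave follows the master stage
        return master_q
    return q               # inv-inv pair (223/224) holds the state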
[0046] To save flip-flop state, a multiplexer is configured to pass states from a plurality of the individual ones of the plurality of volatile storage elements 220 for essentially simultaneous storage in an individual one of the plurality of NVL arrays 110. For instance, the multiplexer may be configured to connect to N groups of M volatile storage elements of the plurality of volatile storage elements per group and to an N by M size NVL array of the plurality of NVL arrays. In this configuration, the multiplexer connects one of the N groups to the N by M size NVL array to store data from the M volatile storage elements into a row of the N by M size NVL array at one time. In the illustrated example, the Q outputs of 248 FFs are connected to the 31b parallel data input of NVL array 110 through a 31b wide 8-1 mux 212. To minimize FF loading, the mux may be broken down into smaller muxes based on the layout of the FF cloud and placed close to the FFs they serve. Again, the NVL controller synchronizes writing to the NVL array and generates the select signals MUX_SEL<2:0> of 8-1 mux 212.
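In software terms, this multiplexing amounts to a simple index mapping from a flip-flop's position in the cloud to a (row, column) location in the 8 x 31 array. The helper below is illustrative only, since the physical assignment of flops to mux inputs is a layout decision:

# Hypothetical mapping of a flip-flop index (0..247) in one cloud to its
# location in the 8 x 31 NVL mini-array.
def ff_to_nvl_location(ff_index, rows=8, cols=31):
    # MUX_SEL<2:0> selects one of the 8 groups of 31 FFs; the column is
    # the bit position on the 31b parallel data bus.
    assert 0 <= ff_index < rows * cols      # 248 FFs per cloud
    return ff_index // cols, ff_index % cols

# A backup pass stores one 31-FF group per NVL row write:
for mux_sel in range(8):
    group = list(range(mux_sel * 31, (mux_sel + 1) * 31))
    # conceptually: nvl_array.write_row(mux_sel, [ff[i].q for i in group])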
[0047] When the FFs are operating in a retention mode, a clock CLK of the computing device is a "don't care" such that it is irrelevant for the volatile storage elements with respect to updating the slave stage state whenever the NU signal is active, whereby the non-volatile logic controller is configured to control and effect storage of data from individual ones of the volatile storage elements into individual ones of the non-volatile storage elements. In other words, the clock CLK control is not needed during NVL data recovery in retention mode, but the clock CLK should be controlled at the system level once the system state is restored, right before the transition between retention mode and functional mode. In another approach, the NVL state can be recovered to the volatile storage elements when the system is in a functional mode. In this situation, where the VDDL power is active, the clock CLK is held in the inactive state for the volatile storage elements during the data restoration from the NVL array, whereby the nonvolatile logic controller is configured to control and effect transfer of data from individual ones of the non-volatile storage elements into individual ones of the volatile storage elements. For example, a system clock CLK is typically held low for positive edge FF based logic and held high for negative edge FF based logic. [0048] Generally speaking, to move from regular operation into system backup mode, the first step is to stop the system clock(s) in an inactive state to freeze the machine state so that it does not change while the backup is in progress. The clocks are held in the inactive state until backup is complete. After backup is complete, all power domains are powered down and the state of the clock becomes a don't care in sleep mode by definition. [0049] When restoring the state from NVL arrays, the FFs are placed in a retention state (see Table 2 below) in which the clock continues to be a don't care as long as the RET signal is active (the clock can be a don't care by virtue of special transistors added to each retention FF, controlled by the RET signal). While restoring NVL state, the flops remain in retention mode so the clock remains a don't care. Once the NVL state is recovered, the machine logic that controls the state of the system clocks will also be restored to the state it was in at the time of the state backup, which also means that for this example all the controls (including the volatile storage elements or FFs) that placed the system clock into inactive states have now been restored such that the system clocks will remain in the inactive state upon completion of NVL data recovery. Now the RET signal can be deactivated, and the system will sit quiescent with clocks deactivated until the NVL controller signals to the power management controller that the restoration is complete, in response to which the power management controller will enable the clocks again. [0050] To restore flip-flop state during restoration, NVL controller 106 reads an NVL row in NVL array 110 and then pulses the NU signal for the appropriate flip-flop group. During system restore, retention signal RET is held high and the slave latch is written from ND with power domain VDDL unpowered; at this point the state of the system clock CLK is a don't care. FFs are placed in the retention state with VDDL = 0V and VDDR = VDD in order to suppress excess power consumption related to spurious data switching that occurs as each group of 31 FFs is updated during NVL array read operations. Suitably modified non-retention flops can be used in NVL based SoCs at the expense of higher power consumption during NVL data recovery operations. [0051] System clock CLK should start from low once VDDL comes up, and thereafter normal synchronous operation continues with updated information in the FFs. Data transfer between the NVL arrays and their respective FFs can be done in serial or parallel or any combination thereof to trade off peak current and backup/restore time. Because direct access is provided to FFs controlled by at least one non-volatile logic controller that is separate from a central processing unit for the computing device apparatus, intervention from a microcontroller processing unit (CPU) is not required for NVL operations; therefore the implementation is SoC/CPU architecture agnostic. Table 2 summarizes operation of the NVL flip flops.

Table 2 - NVL Flip Flop truth table

[0052] Because the at least one non-volatile logic controller is configured to variably control data transfer to or reading from the plurality of non-volatile arrays in parallel, sequentially, or in any combination thereof based on input signals, system designers have additional options with respect to tailoring system operation specifications to particular needs. For instance, because no computation can occur on an MCU SOC while the system enters a low power state or wakes up from a low power state, minimizing the wakeup or go-to-sleep time is advantageous. On the other hand, non-volatile state retention is power intensive because significant energy is needed to save and restore state to or from non-volatile elements such as ferroelectric capacitors. The power required to save and restore system state can exceed the capacity of the power delivery system and cause problems such as electromigration induced power grid degradation, battery life reduction due to excessive peak current draw, or generation of high levels of noise on the power supply system that can degrade signal integrity on die. Thus, allowing a system designer to be able to balance between these two concerns is desirable.
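The restore flow of paragraph [0050] can be sketched as the loop below; the helper names (read_row, drive_nd, pulse_nu) are hypothetical stand-ins for the NVL controller's control sequencing, and the sketch omits parity checking for brevity.

# Illustrative restore loop (hypothetical helper names).
def restore_machine_state(nvl_array, ff_groups, rows=8):
    # RET is held high and VDDL is off, so CLK is a don't care here.
    for row in range(rows):
        data = nvl_array.read_row(row)   # one 31b entry (plus parity)
        ff_groups[row].drive_nd(data)    # present data on the ND ports
        ff_groups[row].pulse_nu()        # one-cycle NU pulse updates the
                                         # slave latches of that group
    # RET is deasserted only after all rows are restored; the power
    # management controller then re-enables the system clocks.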
[0053] In one such approach, the at least one non-volatile logic controller 106 is configured to receive the input signals through a user interface 125, such as those known to those of skill in the art. In another approach, the at least one non-volatile logic controller is configured to receive the input signals from a separate computing element 130 that may be executing an application. In one such approach, the separate computing element is configured to execute the application to determine a reading sequence for the plurality of non-volatile arrays based at least in part on a determination of power and computing resource requirements for the computing device apparatus. So configured, a system user can manipulate the system state store and retrieve procedure to fit a given design. [0054] FIG. 3 is a plot illustrating polarization hysteresis exhibited by a ferroelectric capacitor. The general operation of ferroelectric bit cells is known. When most materials are polarized, the polarization induced, P, is almost exactly proportional to the applied external electric field E, so the polarization is a linear function, referred to as dielectric polarization. In addition to being nonlinear, ferroelectric materials demonstrate a spontaneous nonzero polarization, as illustrated in FIG. 3, when the applied field E is zero. The distinguishing feature of ferroelectrics is that the spontaneous polarization can be reversed by an applied electric field; the polarization is dependent not only on the current electric field but also on its history, yielding a hysteresis loop. The term "ferroelectric" is used to indicate the analogy to ferromagnetic materials, which have spontaneous magnetization and also exhibit hysteresis loops. [0055] The dielectric constant of a ferroelectric capacitor is typically much higher than that of a linear dielectric because of the effects of semi-permanent electric dipoles formed in the crystal structure of the ferroelectric material. When an external electric field is applied across a ferroelectric dielectric, the dipoles tend to align themselves with the field direction, produced by small shifts in the positions of atoms that result in shifts in the distributions of electronic charge in the crystal structure. After the charge is removed, the dipoles retain their polarization state. Binary "0"s and "1"s are stored as one of two possible electric polarizations in each data storage cell. For example, in the figure a "1" may be encoded using the negative remnant polarization 302, and a "0" may be encoded using the positive remnant polarization 304, or vice versa. [0056] Ferroelectric random access memories have been implemented in several configurations. A one transistor, one capacitor (1T-1C) storage cell design in an FeRAM array is similar in construction to the storage cell in widely used DRAM in that both cell types include one capacitor and one access transistor. In a DRAM cell capacitor, a linear dielectric is used, whereas in an FeRAM cell capacitor the dielectric structure includes ferroelectric material, typically lead zirconate titanate (PZT). Due to the overhead of accessing a DRAM type array, a 1T-1C cell is less desirable for use in small arrays such as NVL array 110. [0057] A four capacitor, six transistor (4C-6T) cell is a common type of cell that is easier to use in small arrays. An improved four capacitor cell will now be described.
[0058] FIG. 4 is a schematic illustrating one embodiment of a ferroelectric nonvolatile bitcell 400 that includes four capacitors and twelve transistors (4C-12T). The four FeCaps are arranged as two pairs in a differential arrangement. FeCaps C1 and C2 are connected in series to form node Q 404, while FeCaps C1' and C2' are connected in series to form node QB 405, where a data bit is written into node Q and stored in FeCaps C1 and C2 via bit line BL, and an inverse of the data bit is written into node QB and stored in FeCaps C1' and C2' via inverse bitline BLB. Sense amp 410 is coupled to node Q and to node QB and is configured to sense a difference in voltage appearing on nodes Q, QB when the bitcell is read. The four transistors in sense amp 410 are configured as two cross coupled inverters to form a latch. Pass gate 402 is configured to couple node Q to bitline BL and pass gate 403 is configured to couple node QB to bit line BLB. Each pass gate 402, 403 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces voltage drop across the pass gate during a write operation so that nodes Q, QB are presented with a higher voltage during writes and thereby a higher polarization is imparted to the FeCaps. Plate line 1 (PL1) is coupled to FeCaps C1 and C1' and plate line 2 (PL2) is coupled to FeCaps C2 and C2'. The plate lines are used to provide biasing to the FeCaps during reading and writing operations. Alternatively, in another embodiment the CMOS pass gates can be replaced with NMOS pass gates that use a pass gate enable signal that has a voltage higher than VDDL. The magnitude of the higher voltage must be larger than the usual NMOS Vt in order to pass an undegraded signal from the bitcell Q/QB nodes to/from the bitlines BL/BLB (i.e., the pass gate control voltage must be > VDDL + Vt). [0059] Typically, there will be an array of bit cells 400. There may then be multiple columns of similar bitcells to form an n row by m column array. For example, in SoC 100, the NVL arrays are 8 x 32; however, as discussed earlier, different configurations may be implemented. [0060] FIGS. 5 and 6 are timing diagrams illustrating read and write waveforms for reading a data value of logical 0 and writing a data value of logical 0, respectively. Reading and writing to the NVL array is a multi-cycle procedure that may be controlled by the NVL controller and synchronized by the NVL clock. In another embodiment, the waveforms may be sequenced by fixed or programmable delays starting from a trigger signal, for example. During regular operation, a typical 4C-6T bitcell is susceptible to time dependent dielectric breakdown (TDDB) due to a constant DC bias across the FeCaps on the side storing a "1". In a differential bitcell, since an inverted version of the data value is also stored, one side or the other will always be storing a "1". [0061] To avoid TDDB, plate line PL1, plate line PL2, node Q and node QB are held at a quiescent low value when the cell is not being accessed, as indicated during time periods s0 in FIGS. 5, 6. Power disconnect transistors MP 411 and MN 412 allow sense amp 410 to be disconnected from power during time periods s0 in response to sense amp enable signals SAEN and SAENB. Clamp transistor MC 406 is coupled to node Q and clamp transistor MC 407 is coupled to node QB.
Clamp transistors 406, 407 are configured to clamp the Q and QB nodes to a voltage that is approximately equal to the low logic voltage on the plate lines in response to clear signal CLR during non-access time periods s0, which in this embodiment equals 0 volts (the ground potential). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps and therefore TDDB is essentially eliminated. The clamp transistors also serve to prevent any stray charge buildup on nodes Q and QB due to parasitic leakage currents. Buildup of stray charge may cause the voltage on Q or QB to rise above 0V, leading to a voltage differential across the FeCaps between Q or QB and PL1 and PL2. This can lead to unintended depolarization of the FeCap remnant polarization and could potentially corrupt the logic values stored in the FeCaps. [0062] In this embodiment, Vdd is 1.5 volts and the ground reference plane has a value of 0 volts. A logic high has a value of approximately 1.5 volts, while a logic low has a value of approximately 0 volts. Other embodiments that use logic levels that are different from ground for logic 0 (low) and Vdd for logic 1 (high) would clamp nodes Q, QB to a voltage corresponding to the quiescent plate line voltage so that there is effectively no voltage across the FeCaps when the bitcell is not being accessed. [0063] In another embodiment, two clamp transistors may be used. Each of these two transistors is used to clamp the voltage across one FeCap to be no greater than one transistor Vt (threshold voltage); that is, each transistor is used to short out one of the FeCaps. In this case, for the first transistor, one terminal connects to Q and the other connects to PL1, while for the second transistor, one terminal connects to Q and the other connects to PL2. The transistors can be either NMOS or PMOS, but NMOS is more likely to be used. [0064] Typically, a bit cell in which the two transistor solution is used does not consume significantly more area than the one transistor solution. The single transistor solution assumes that PL1 and PL2 will remain at the same ground potential as the local VSS connection to the single clamp transistor, which is normally a good assumption. However, noise or other problems may occur (especially during power up) that might cause PL1 or PL2 to glitch or have a DC offset between the PL1/PL2 driver output and VSS for brief periods; therefore, the two transistor design may provide a more robust solution. [0065] To read bitcell 400, plate line PL1 is switched from low to high while keeping plate line PL2 low, as indicated in time period s2. This induces voltages on nodes Q, QB whose values depend on the capacitor ratios between C1-C2 and C1'-C2', respectively. The induced voltage in turn depends on the remnant polarization of each FeCap that was formed during the last data write operation to the FeCaps in the bit cell. The remnant polarization in effect "changes" the effective capacitance value of each FeCap, which is how FeCaps provide nonvolatile storage. For example, when a logic 0 was written to bitcell 400, the remnant polarization of C2 causes it to have a lower effective capacitance value, while the remnant polarization of C1 causes it to have a higher effective capacitance value. Thus, when a voltage is applied across C1-C2 by switching plate line PL1 high while holding plate line PL2 low, the resultant voltage on node Q conforms to equation (1).
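Equation (1) itself does not survive in this text. From the structure described, with C1 between PL1 and node Q and C2 between node Q and PL2, PL1 driven to VDD and PL2 held at ground, the series capacitive divider presumably takes the form below, where the capacitances are the effective, polarization-dependent values; this reconstruction is inferred from the surrounding description rather than copied from the original:

$$V_Q \approx V_{DD} \cdot \frac{C_{1,\mathrm{eff}}}{C_{1,\mathrm{eff}} + C_{2,\mathrm{eff}}} \qquad (1)$$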
A similar equation holds for node QB, but the order of the remnant polarization of C1' and C2' is reversed, so that the resultant voltages on nodes Q and QB provide a differential representation of the data value stored in bit cell 400, as illustrated at 502, 503 in FIG. 5. [0066] The local sense amp 410 is then enabled during time period s3. After sensing the differential values 502, 503, sense amp 410 produces a full rail signal 504, 505. The resulting full rail signal is transferred to the bit lines BL, BLB during time period s4 by asserting the transfer gate enable signals PASS, PASSB to enable transfer gates 402, 403 and thereby transfer the full rail signals to an output latch, responsive to latch enable signal LAT_EN, that is located in the periphery of NVL array 110, for example. [0067] FIG. 6 is a timing diagram illustrating writing a logic 0 to bit cell 400. The write operation begins by raising both plate lines to Vdd during time period s1. This is called the primary store method. The signal transitions on PL1 and PL2 are capacitively coupled onto nodes Q and QB, effectively pulling both storage nodes almost all the way to VDD (1.5v). Data is provided on the bit lines BL, BLB and the transfer gates 402, 403 are enabled by the pass signal PASS during time periods s2-s4 to transfer the data bit and its inverse value from the bit lines to nodes Q, QB. Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time periods s3, s4 to provide additional drive after the write data drivers have forced adequate differential on Q/QB during time period s2. However, to avoid a short from the sense amp to the 1.2v driver supply, the write data drivers are turned off at the end of time period s2 before the sense amp is turned on during time periods s3, s4. In an alternative embodiment called the secondary store method, write operations hold PL2 at 0V or ground throughout the data write operation. This can save power during data write operations, but reduces the resulting read signal margin by 50% as C2 and C2' no longer hold data via remnant polarization and only provide a linear capacitive load to the C1 and C1' FeCaps. [0068] Key states, such as the PL1 high to SAEN high interval during s2 and the SAEN high pulse during s3 during a read, and the FeCap DC bias states s3-s4 during a write, can selectively be made multi-cycle to provide higher robustness without slowing down the NVL clock. [0069] For FeCap based circuits, reading data from the FeCaps may partially depolarize the capacitors. For this reason, reading data from FeCaps is considered destructive in nature; i.e., reading the data may destroy the contents of the FeCaps, or at a minimum reduce the integrity of the data. Consequently, if the data contained in the FeCaps is expected to remain valid after a read operation has occurred, the data must be written back into the FeCaps. [0070] In certain applications, specific NVL arrays may be designated to store specific information that will not change over a period of time. For example, certain system states can be saved as a default return state where returning to that state is preferable to a full reboot of the device. The reboot and configuration process for a state of the art ultra low power SoC can take 1000-10000 clock cycles or more to reach the point where control is handed over to the main application code thread. This boot time becomes critical for energy harvesting applications in which power is intermittent, unreliable, and limited in quantity.
The time and energy cost of rebooting can consume most or all of the energy available for computation, preventing programmable devices such as MCUs from being used in energy harvesting applications. An example application would be energy harvesting light switches. The energy harvested from the press of the button on the light switch represents the entire energy available to complete the following tasks: 1) determine the desired function (on/off or dimming level), 2) format the request into a command packet, and 3) wake up a radio and transmit the packet over an RF link to the lighting system. Known custom ASIC chips with hard coded state machines are often used for this application due to the tight energy constraints, which makes the system inflexible and expensive to change because new ASIC chips have to be designed and fabricated whenever any change is desired. A programmable MCU SOC would be a much better fit, except that the power cost of the boot process consumes most of the available energy, leaving no budget for executing the required application code. [0071] To address this concern, in one approach, at least one of the plurality of nonvolatile logic element arrays is configured to store a boot state representing a state of the computing device apparatus after a given amount of a boot process is completed. The at least one non-volatile logic controller in this approach is configured to control restoration of data representing the boot state from the at least one of the plurality of non-volatile logic element arrays to corresponding ones of the plurality of volatile storage elements in response to detecting a previous system reset or power loss event for the computing device apparatus. To conserve power over a typical read/write operation for the NVL arrays, the at least one non-volatile logic controller can be configured to execute a round-trip data restoration operation that automatically writes back data to an individual non-volatile logic element after reading data from the individual non-volatile logic element, without completing separate read and write operations. [0072] An example execution of a round-trip data restoration is illustrated in FIG. 7, which illustrates a writeback operation on bitcell 400, where the bitcell is read and then written to the same value. As illustrated, reading of data from the individual non-volatile logic element is initiated at a first time S1 by switching a first plate line PL1 high to induce a voltage on a node of a corresponding ferroelectric capacitor bit cell based on a capacitance ratio for ferroelectric capacitors of the corresponding ferroelectric capacitor bit cell. If clamp switches are used to ground the nodes of the ferroelectric capacitors, a clear signal CLR is switched from high to low at the first time S1 to unclamp those aspects of the individual non-volatile logic element from electrical ground. At a second time S2, a sense amplifier enable signal SAEN is switched high to enable a sense amplifier to detect the voltage induced on the node and to provide an output signal corresponding to data stored in the individual non-volatile logic element. At a third time S3, a pass line PASS is switched high to open transfer gates to provide an output signal corresponding to data stored in the individual non-volatile logic element.
At a fourth time S4, a second plate line PL2 is switched high to induce a polarizing signal across the ferroelectric capacitors to write data back to the corresponding ferroelectric capacitor bit cell, corresponding to the data stored in the individual non-volatile logic element. To return the individual non-volatile logic element to a non-volatile storage state having the same data stored therein, at a fifth time S5 the first plate line PL1 and the second plate line PL2 are switched low, the pass line PASS is switched low at the sixth time S6, and the sense amplifier enable signal SAEN is switched low at the seventh time S7. If clamp switches are used to ground the nodes of the ferroelectric capacitors, at the seventh time a clear signal CLR is switched from low to high to clamp the aspects of the individual non-volatile logic element to the electrical ground to help maintain data integrity as discussed herein. This process includes a lower total number of transitions than what is needed for distinct and separate read and write operations (read, then write). This lowers the overall energy consumption.
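Restated as an ordered control sequence, the round-trip restore of FIG. 7 proceeds as listed below. The encoding is a sketch: the signal names follow the description, while the data structure and the drive callback are hypothetical.

# Round-trip restore (read plus automatic writeback) for bitcell 400,
# restated from the description of FIG. 7.
writeback_sequence = [
    ("S1", [("CLR", 0), ("PL1", 1)]),   # unclamp Q/QB and start the read
    ("S2", [("SAEN", 1)]),              # sense amp resolves Q/QB
    ("S3", [("PASS", 1)]),              # sensed data driven onto BL/BLB
    ("S4", [("PL2", 1)]),               # re-polarize the FeCaps (writeback)
    ("S5", [("PL1", 0), ("PL2", 0)]),
    ("S6", [("PASS", 0)]),
    ("S7", [("SAEN", 0), ("CLR", 1)]),  # reclamp Q/QB to ground
]

def apply_sequence(sequence, drive):
    # drive(signal, value) is a hypothetical callback into the row drivers.
    for _step, transitions in sequence:
        for signal, value in transitions:
            drive(signal, value)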
[0073] Bitcell 400 is designed to maximize read differential across Q/QB in order to provide a highly reliable first generation of NVL products. Two FeCaps are used on each side rather than using one FeCap and a constant BL capacitance as a load because this doubles the differential voltage that is available to the sense amp. A sense amp is placed inside the bitcell to prevent loss of differential due to charge sharing between node Q and the BL capacitance and to avoid voltage drop across the transfer gate. The sensed voltages are around VDD/2, and an HVT transfer gate takes a long time to pass them to the BL. Bitcell 400 helps achieve twice the signal margin of a regular FRAM bitcell known in the art, while not allowing any DC stress across the FeCaps. [0074] The timing of the signals shown in FIGS. 5 and 6 is for illustrative purposes. Various embodiments may use signal sequences that vary depending on the clock rate, process parameters, device sizes, etc. For example, in another embodiment, the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 and PL1/PL2 go from 0 to 1. During time period S2: SAEN goes from 0 to 1, during which time the sense amp may perform level shifting as will be described later, or provides additional drive strength for a non-level shifted design. During time period S3: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same, but are moved up one clock cycle. This sequence is one clock cycle shorter than that illustrated in FIG. 6. [0075] In another alternative, the timing of the control signals may operate as follows. During time period S1: PASS goes from 0 to 1 (BL/BLB, Q/QB are 0V and VDDL respectively). During time period S2: SAEN goes from 0 to 1 (BL/BLB, Q/QB are 0V and VDDN respectively). During time period S3: PL1/PL2 go from 0 to 1 (BL/Q is coupled above ground by PL1/PL2 and is driven back low by the SA and BL drivers). During time period S4: PL1/PL2 go from 1 to 0 and the remainder of the waveforms remain the same. [0076] FIGS. 8-9 are a schematic and timing diagram illustrating another embodiment of a ferroelectric nonvolatile bit cell 800, a 2C-3T self-referencing based NVL bitcell. The previously described 4-FeCap based bitcell 400 uses two FeCaps on each side of a sense amp to get a differential read with double the margin as compared to a standard 1T-1C FRAM bitcell. However, a 4-FeCap based bitcell has a larger area and may have a higher variation because it uses more FeCaps. [0077] Bitcell 800 helps achieve a differential 4-FeCap like margin in lower area by using itself as a reference, referred to herein as self-referencing. By using fewer FeCaps, it also has lower variation than a 4-FeCap bitcell. Typically, a single sided cell needs to use a reference voltage that is in the middle of the operating range of the bitcell. This in turn reduces the read margin by half as compared to a two sided cell. Moreover, as the circuit fabrication process shifts, the reference value may become skewed, further reducing the read margin. A self-reference scheme allows comparison of a single sided cell against itself, thereby providing a higher margin. Tests of the self-referencing cell described herein have provided at least double the margin over a fixed reference cell. [0078] Bitcell 800 has two FeCaps C1, C2 that are connected in series to form node Q 804. Plate line 1 (PL1) is coupled to FeCap C1 and plate line 2 (PL2) is coupled to FeCap C2. The plate lines are used to provide biasing to the FeCaps during reading and writing operations. Pass gate 802 is configured to couple node Q to bitline BL. Pass gate 802 is implemented using a PMOS device and an NMOS device connected in parallel. This arrangement reduces voltage drop across the pass gate during a write operation so that node Q is presented with a higher voltage during writes and thereby a higher polarization is imparted to the FeCaps. Alternatively, an NMOS pass gate may be used with a boosted word line voltage. In this case, the PASS signal would be boosted by one NFET Vt (threshold voltage). However, this may lead to reliability problems and excess power consumption. Using a CMOS pass gate adds additional area to the bit cell but improves speed and power consumption. Clamp transistor MC 806 is coupled to node Q. Clamp transistor 806 is configured to clamp the Q node to a voltage that is approximately equal to the low logic voltage on the plate lines in response to clear signal CLR during non-access time periods s0, which in this embodiment equals 0 volts (ground). In this manner, during times when the bit cell is not being accessed for reading or writing, no voltage is applied across the FeCaps and therefore TDDB and unintended partial depolarization are essentially eliminated. [0079] The initial states of node Q and plate lines PL1 and PL2 are all 0, as shown in FIG. 9 at time period s0, so there is no DC bias across the FeCaps when the bitcell is not being accessed. To begin a read operation, PL1 is toggled high while PL2 is kept low, as shown during time period s1. A signal 902 develops on node Q from a capacitance ratio based on the retained polarization of the FeCaps from a last data value previously written into the cell, as described above with regard to equation (1). This voltage is stored on a read capacitor 820 external to the bitcell by passing the voltage through transfer gate 802 onto bit line BL and then through transfer gate 822 in response to an enable signal EN1. Note: BL and the read capacitors are precharged to VDD/2 before the pass gates 802, 822, and 823 are enabled in order to minimize signal loss via charge sharing when the recovered signals on Q are transferred via BL to the read storage capacitors 820 and 821. Then, PL1 is toggled back low and node Q is discharged using clamp transistor 806 during time period s2. Next, PL2 is toggled high keeping PL1 low during time period s3.
A new voltage 904 develops on node Q, but this time with the opposite capacitor ratio. This voltage is then stored on another external read capacitor 821 via transfer gate 823. Thus, the same two FeCaps are used to read a high as well as a low signal. Sense amplifier 810 can then determine the state of the bitcell by using the voltages stored on the external read capacitors 820, 821.
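For comparison with the timing of FIG. 9, the self-referenced read can be restated as the phase list below. EN2 is assumed, by analogy with EN1, to be the enable for transfer gate 823; the list itself is only a restatement of the described waveform, not a definitive control encoding.

# Self-referenced read of 2C-3T bitcell 800, restated from FIG. 9.
# The same FeCap pair is sampled twice with opposite plate biasing, and
# the two samples are compared against each other by sense amp 810.
self_referenced_read = [
    ("s1", "PL1=1, PL2=0; sample node Q onto read capacitor 820 via EN1"),
    ("s2", "PL1=0; discharge node Q through clamp transistor 806 (CLR)"),
    ("s3", "PL2=1, PL1=0; sample node Q onto read capacitor 821 via EN2"),
    ("s4", "sense amp 810 compares capacitors 820 and 821 to resolve the bit"),
]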
[0080] Typically, there will be an array of bit cells 800. One column of bit cells 800-800n is illustrated in FIG. 8 coupled via bit line 801 to read transfer gates 822, 823. There may then be multiple columns of similar bitcells to form an n row by m column array. For example, in SoC 100, the NVL arrays are 8 x 32; however, as discussed earlier, different configurations may be implemented. The read capacitors and sense amps may be located in the periphery of the memory array, for example. [0081] FIG. 10 is a block diagram illustrating NVL array 110 in more detail. Embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. As discussed earlier with reference to FIG. 1, adding testability features to individual NVL FFs may be prohibitive in terms of area overhead. To amortize the test feature costs and improve manufacturability, SoC 100 is implemented using 256b mini-NVL arrays 110 of FeCap based bitcells dispersed throughout the logic cloud to save state of the various flip flops 120 when power is removed. Each cloud 102-104 of FFs 120 includes an associated NVL array 110. A central NVL controller 106 controls all the arrays and their communication with FFs 120. [0082] While an NVL array may be implemented in any number of n row by m column configurations, in this example, NVL array 110 is implemented with an array 1040 of eight rows and thirty-two bit columns of bitcells. Each individual bit cell, such as bitcell 1041, is coupled to a set of control lines provided by row drivers 1042. The control signals described earlier, including plate lines (PL1, PL2), sense amp enable (SAEN), transfer gate enable (PASS), and clear (CLR), are all driven by the row drivers. There is a set of row drivers for each row of bitcells. [0083] Each individual bit cell, such as bitcell 1041, is also coupled via the bitlines to a set of input/output (IO) drivers 1044. In this implementation, there are thirty-two sets of IO drivers, such as IO driver set 1045. Each driver set produces an output signal 1047 that provides a data value when a row of bit lines is read. Each bitline runs the length of a column of bitcells and couples to an IO driver for that column. Each bitcell may be implemented as 2C-3T bitcell 800, for example. In this case, a single bitline will be used for each column, and the sense amps and read capacitors will be located in IO driver block 1044. In another implementation of NVL array 110, each bitcell may be implemented as 4C-12T bit cell 400. In this case, the bitlines will be a differential pair with two IO drivers for each column. A comparator receives the differential pair of bitlines and produces a final single bit line that is provided to the output latch. Other implementations of NVL array 110 may use other known or later developed bitcells in conjunction with the row drivers and IO drivers that will be described in more detail below. [0084] Timing logic 1046 generates timing signals that are used to control the read drivers to generate the sequence of control signals for each read and write operation. Timing logic 1046 may be implemented using synchronous or asynchronous state machines, or other known or later developed logic techniques. One potential alternative embodiment utilizes a delay chain with multiple outputs that "tap" the delay chain at desired intervals to generate control signals. Multiplexers can be used to provide multiple timing options for each control signal. Another potential embodiment uses a programmable delay generator that produces edges at the desired intervals using dedicated outputs that are connected to the appropriate control signals. [0085] FIG. 11 is a more detailed schematic of a set of input/output circuits 1150 used in the NVL array of FIG. 10. Referring back to FIG. 10, each IO set 1045 of the thirty-two drivers in IO block 1044 is similar to IO circuits 1150. I/O block 1044 provides several features to aid testability of NVL bits. [0086] Referring now to FIG. 11, a first latch (L1) 1151 serves as an output latch during a read and also combines with a second latch (L2) 1152 to form a scan flip flop. The scan output (SO) signal is routed to multiplexer 1153 in the write driver block 1158 to allow writing scanned data into the array during debug. Scan output (SO) is also coupled to the scan input (SI) of the next set of IO drivers to form a thirty-two bit scan chain that can be used to read or write a complete row of bits from NVL array 110. Within SoC 100, the scan latch of each NVL array is connected in a serial manner to form a scan chain to allow all of the NVL arrays to be accessed using the scan chain. Alternatively, the scan chain within each NVL array may be operated in a parallel fashion (N arrays will generate N chains) to reduce the number of internal scan flop bits on each chain in order to speed up scan testing. The number of chains and the number of NVL arrays per chain may be varied as needed. Typically, all of the storage latches and flipflops within SoC 100 include scan chains to allow complete testing of SoC 100. Scan testing is well known and does not need to be described in more detail herein. In this embodiment, the NVL chains are segregated from the logic chains on a chip so that the chains can be exercised independently and NVL arrays can be tested without any dependencies on logic chain organization, implementation, or control. The maximum total length of NVL scan chains will always be less than the total length of logic chains, since the NVL chain length is reduced by a divisor equal to the number of rows in the NVL arrays. In the current embodiment, there are 8 entries per NVL array, so the total length of NVL scan chains is 1/8th the total length of the logic scan chains. This reduces the time required to access and test NVL arrays and thus reduces test cost. Also, it eliminates the need to determine the mapping between logic flops, their position on logic scan chains, and their corresponding NVL array bit location (identifying the array, row, and column location), greatly simplifying NVL test, debug, and failure analysis. [0087] While scan testing is useful, it does not provide a good mechanism for production testing of SoC 100 since it may take a significant amount of time to scan in hundreds or thousands of bits for testing the various NVL arrays within SoC 100. This is because there is no direct access to bits within the NVL array. Each NVL bitcell is coupled to an associated flip-flop and is only written to by saving the state of the flip flop.
Thus, in order to load a test pattern into an NVL array from the associated flipflops, the corresponding flipflops must be set up using a scan chain. Determining which bits on a scan chain have to be set or cleared in order to control the contents of a particular row in an NVL array is a complex task, as the connections are made based on the physical location of arbitrary groups of flops on a silicon die and not based on any regular algorithm. As such, the mapping of flops to NVL locations need not be controlled and is typically somewhat random. [0088] An improved testing technique is provided within IO drivers 1150. NVL controller 106, referring back to FIG. 1, has state machine(s) to perform fast pass/fail tests for all NVL arrays on the chip to screen out bad dies. In one such approach, at least one non-volatile logic controller is configured to control a built-in-self-test mode where all zeros or all ones are written to at least a portion of an NVL array of the plurality of NVL arrays, and it is then determined whether the data read from that portion of the NVL array is all ones or all zeros. This is done by first writing all 0's or 1's to a row using the all 0/1 write driver 1180, applying an offset disturb voltage (V_OFF), and then reading the same row using parallel read test logic 1170. Signal corr_1 from AND gate G1 goes high if the data output signal (OUT) from data latch 1151 is high and the corr_1 signal from the adjacent column's IO driver's parallel read test logic AND gate G1 is high. In this manner, the G1 AND gates of the thirty-two sets of I/O blocks 1150 in NVL array 110 implement a large 32-input AND gate that tells the NVL controller whether all outputs are high for the selected row of NVL array 110. OR gate G0 does the same for reading 0's. In this manner, the NVL controller may instruct all of the NVL arrays within SoC 100 to simultaneously perform an all ones write to a selected row, and then instruct all of the NVL arrays to simultaneously read the selected row and provide a pass/fail indication using only a few control signals, without transferring any explicit test data from the NVL controller to the NVL arrays. In typical memory array BIST (Built In Self Test) implementations, the BIST controller must have access to all memory output values so that each output bit can be compared with the expected value. Given there are many thousands of logic flops on typical silicon SOC chips, the total number of NVL array outputs can also measure in the thousands. It would be impractical to test these arrays using normal BIST logic circuits due to the large number of data connections and data comparators required. The NVL test method can then be repeated eight times, for NVL arrays having eight rows (the number of repetitions will vary according to the array organization; in one example, a 10 entry NVL array implementation would repeat the test method 10 times), so that all of the NVL arrays in SoC 100 can be tested for correct all ones operation in only eight write cycles and eight read cycles. Similarly, all of the NVL arrays in SoC 100 can be tested for correct all zeros operation in only eight write cycles and eight read cycles. The results of all of the NVL arrays may be condensed into a single signal indicating pass or fail by an additional AND gate and OR gate that receive the corr_0 and corr_1 signals from each of the NVL arrays and produce a single corr_0 and corr_1 signal, or the NVL controller may look at each individual corr_0 and corr_1 signal.
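In software terms, the chained G0/G1 gates perform the reduction below; this is a restatement of the logic, not the gate-level netlist, and the function name is hypothetical.

# Software restatement of the parallel read test reduction.
def parallel_read_test(outputs):
    # outputs: the 32 OUT values of one row, one per IO section.
    corr_1 = all(outputs)        # chained G1 AND gates: row reads all 1's
    corr_0 = not any(outputs)    # chained G0 OR gates: row reads all 0's
    return corr_0, corr_1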
[0089] All 0/1 write driver 1180 includes PMOS devices M1, M3 and NMOS devices M2, M4. Devices M1 and M2 are connected in series to form a node that is coupled to the bitline BL, while devices M3 and M4 are connected in series to form a node that is coupled to the inverse bitline BLB. Control signal "all_1_A" and its inverse "all_1_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M1 and M4 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 1. Similarly, control signal "all_0_A" and its inverse "all_0_B" are generated by NVL controller 106. When asserted during a write cycle, they activate devices M2 and M3 to cause the bit lines BL and BLB to be pulled to represent a data value of logic 0. In this manner, the thirty-two drivers are operable to write all ones into a row of bit cells in response to a control signal and to write all zeros into a row of bit cells in response to another control signal. One skilled in the art can easily design other circuit topologies to accomplish the same task. The current embodiment is preferred as it only requires 4 transistors to accomplish the required data writes. [0090] During a normal write operation, write driver block 1158 receives a data bit value to be stored on the data input signal. Write drivers 1156, 1157 couple complementary data signals to bitlines BL, BLB and thereby to the selected bit cell. Write drivers 1156, 1157 are enabled by the write enable signal STORE. [0091] FIG. 12A is a timing diagram illustrating an offset voltage test during a read cycle. To apply a disturb voltage to a bitcell, state s1 is modified during a read. This figure illustrates a voltage disturb test for reading a data value of "0" (node Q); a voltage disturb test for a data value of "1" is similar, but injects the disturb voltage onto the opposite side of the sense amp (node QB). Thus, the disturb voltage in this embodiment is injected onto the low voltage side of the sense amp based on the logic value being read. Transfer gates 1154, 1155 are coupled to the bit lines BL, BLB. A digital to analog converter, not shown (it may be on-chip, or off-chip in an external tester, for example), is programmed by NVL controller 106, by an off-chip test controller, or via an external production tester to produce a desired amount of offset voltage V_OFF. NVL controller 106 may assert the Vcon control signal for the bitline side storing a "0" during the s1 time period to thereby enable Vcon transfer gate 1154, 1155, discharge the other bitline using M2/M4 during s1, and assert control signal PASS during s1 to turn on transfer gates 402, 403. This initializes the voltage on node Q/QB of the "0" storing side to offset voltage V_OFF, as shown at 1202. This pre-charged voltage lowers the differential available to the sense amp during s3, as indicated at 1204, and thereby pushes the bitcell closer to failure. For fast production testing, V_OFF may be set to a required margin value, and the pass/fail test using G0 and G1 may then be used to screen out any failing die. [0092] FIG. 12B illustrates a histogram generated during a sweep of offset voltage. Bit level failure margins can be studied by sweeping V_OFF and scanning out the read data bits using a sequence of read cycles, as described above. In this example, the worst case read margin is 550 mV, the mean value is 597 mV, and the standard deviation is 22 mV. In this manner, the operating characteristics of all bit cells in each NVL array on an SoC may be easily determined.
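Bit-level margin characterization therefore amounts to sweeping V_OFF and recording the disturb voltage at which each bit first fails. The sketch below shows the idea; read_bit_with_offset is a hypothetical tester callback that returns True while the bit still reads correctly under the applied disturb voltage.

# Illustrative margin sweep (hypothetical tester callback).
def measure_read_margins(read_bit_with_offset, n_bits, v_max=1.5, step=0.01):
    margins = []
    for bit in range(n_bits):
        v_off = 0.0
        while v_off <= v_max and read_bit_with_offset(bit, v_off):
            v_off += step
        margins.append(v_off)    # first failing disturb voltage, in volts
    return margins               # e.g. worst case around 0.55 V in FIG. 12B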
[0093] As discussed above, embedding non-volatile elements outside the controlled environment of a large array presents reliability and fabrication challenges. The NVL bitcell should be designed for maximum read signal margin and in-situ testability, as is needed for any NV-memory technology. However, the NVL implementation cannot rely on SRAM-like built in self test (BIST) because NVL arrays are distributed inside the logic cloud. The NVL implementation described above includes NVL arrays controlled by a central NVL controller 106. While screening a die for satisfactory behavior, NVL controller 106 runs a sequence of steps that are performed on-chip without any external tester interference. The tester only needs to issue a start signal and apply an analog voltage that corresponds to the desired signal margin. The controller first writes all 0s or 1s to all bits in the NVL array. It then starts reading an array one row at a time. The NVL array read operations do not necessarily immediately follow NVL array write operations. Often, high temperature bake cycles are inserted between data write operations and data read operations in order to accelerate time and temperature dependent failure mechanisms so that defects that would impact long term data retention can be screened out during manufacturing related testing. As described above in more detail, the array contains logic that ANDs and ORs all outputs of the array. These two signals are sent to the controller. Upon reading each row, the controller looks at the two signals from the array and, based on knowledge of what it previously wrote, decides whether the data read was correct in the presence of the disturb voltage. If the data is incorrect, it issues a fail signal to the tester, at which point the tester can eliminate the die. If the row passes, the controller moves on to the next row in the array. All arrays can be tested in parallel at the normal NVL clock frequency. This enables high speed on-chip testing of the NVL arrays with the tester only issuing a start signal and providing the desired read signal margin voltage, while the NVL controller reports pass at the end of the built in testing procedure or generates a fail signal whenever the first failing row is detected. Fails are reported immediately so the tester can abort the test procedure at the point of first failure rather than waste additional test time testing the remaining rows. This is important, as test time, and thus test cost, for all non-volatile memories (NVM) often dominates the overall test cost for an SoC with embedded NVM. If the NVL controller activates the "done" signal and the fail signal has not been activated at any time during the test procedure, the die undergoing testing has passed the required tests. [0094] For further failure analysis, the controller may also have a debug mode. In this mode, the tester can specify an array and row number, and the NVL controller can then read or write to just that row. The read contents can be scanned out using the NVL scan chain. This method provides read or write access to any NVL bit on the die without CPU intervention and without requiring the use of long, complicated SoC scan chains in which the mapping of NVL array bits to individual flops is random. Further, this can be done in concert with applying an analog voltage for read signal margin determination, so exact margins for individual bits can be measured.
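The screening sequence of paragraph [0093] reduces to the loop below, with an early abort on the first failure. The helper names are hypothetical, and the per-row pass/fail decision uses the corr_0/corr_1 reduction described earlier.

# Restatement of the on-chip screening flow (hypothetical helpers).
def screen_die(arrays, wrote_ones, rows=8):
    # All arrays may also be exercised in parallel at the NVL clock rate.
    for array in arrays:
        for row in range(rows):
            corr_0, corr_1 = array.read_row_test(row)  # under V_OFF disturb
            if not (corr_1 if wrote_ones else corr_0):
                return "FAIL"     # reported immediately; tester may abort
    return "PASS"                 # controller raises "done" with no fail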
[0095] These capabilities help make NVL practical, because without testability features it would be risky to use non-volatile logic elements in a product. Further, pass/fail testing on-die with minimal tester interaction reduces test time and thereby cost. [0096] NVL implementation using mini-arrays distributed in the logic cloud means that a sophisticated error detection method like ECC would require a significant amount of additional memory columns and control logic on a per-array basis, which could be prohibitive from an area standpoint. However, in order to provide an enhanced level of reliability, the NVL arrays of SoC 100 may include parity protection as a low cost error detection method, as will now be described in more detail. [0097] FIG. 13 is a schematic illustrating parity generation in NVL array 110. It illustrates an example NVL array having thirty-two columns of bits (0:31) that exclusive-ORs the input data value DATA IN 1151 with the output of a similar XOR gate of the previous column's IO driver. Each IO driver section of the NVL array, such as section 1350, may contain an XOR gate 1160, referring again to FIG. 11A. During a row write, the output of the XOR gate 1160 that is in column 30 is the overall parity value of the row of data that is being written in bit columns 0:30 and is used to write parity values into the last column by feeding its output to the data input of column 31 of the NVL mini-array, shown as XOR IN in FIG. 11B. [0098] In a similar manner, during a read, XOR gate 1160 exclusive-ORs the data value DATA OUT from read latch 1151 via mux 1161 (see FIG. 11) with the output of a similar XOR gate of the previous column's IO driver. The output of the XOR gate 1160 that is in bit column 30 is the overall parity value for the row of data that was read from bit columns 0:30 and is compared to a parity value read from bit column 31 in parity error detector 1370. If the overall parity value determined from the read data does not match the parity bit read from column 31, then a parity error is declared. [0099] When a parity error is detected, it indicates that the stored FF state values are not trustworthy. Since the NVL array is typically being read when the SoC is restarting operation after being in a power off state, detection of a parity error indicates that a full boot operation needs to be performed in order to regenerate the correct FF state values. [00100] However, if the FF state was not properly stored prior to turning off the power, or this is a brand new device, for example, then an indeterminate condition may exist. For example, if the NVL array is empty, then typically all of the bits may have a value of zero, or they may all have a value of one. In the case of all zeros, the parity value generated for all zeros would be zero, which would match a parity bit value of zero. Therefore, the parity test would incorrectly indicate that the FF state was correct and that a boot operation is not required, when in fact it would be required. In order to prevent this occurrence, an inverted version of the parity bit may be written to column 31 by bit line driver 1365, for example. Referring again to FIG. 11A, note that while bit line driver 1156 for columns 0-30 also inverts the input data bits, mux 1153 inverts the data in bits when they are received, so the result is that the data in columns 0-30 is stored un-inverted. In another embodiment, the data bits may be inverted and the parity bit not inverted, for example.
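The parity scheme of paragraphs [0097]-[00100] reduces to a few lines of logic. A minimal C sketch, assuming a 32-bit row image with data in bit columns 0:30 and the inverted parity stored in column 31 (function names illustrative, not part of the disclosed hardware):

    #include <stdint.h>
    #include <stdbool.h>

    /* Write path: compute the XOR-chain parity of columns 0:30 and store the
     * INVERTED parity in column 31, so that an erased/never-written row of all
     * zeros fails the check (parity of all zeros is 0, but column 31 holds 1). */
    static uint32_t make_row(uint32_t data31)
    {
        uint32_t p = 0;
        for (int b = 0; b < 31; b++)                   /* XOR chain, cols 0:30 */
            p ^= (data31 >> b) & 1u;
        return (data31 & 0x7FFFFFFFu) | ((p ^ 1u) << 31);
    }

    /* Read path: recompute parity over columns 0:30 and compare against the
     * inverted parity read from column 31; a mismatch declares a parity error. */
    static bool row_parity_ok(uint32_t row)
    {
        uint32_t p = 0;
        for (int b = 0; b < 31; b++)
            p ^= (row >> b) & 1u;
        return ((row >> 31) & 1u) == (p ^ 1u);
    }

Note that with thirty-one (odd) data columns, a raw all-ones row also fails: the computed parity of thirty-one ones is 1, so column 31 is expected to hold 0, which an all-ones row does not.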
[00101] In the case of all ones, if there is an even number of columns, then the calculated parity would equal zero, and an inverted value of one would be stored in the parity column. Therefore, an NVL array with an even number of data columns storing all ones would not detect a parity error. In order to prevent this occurrence, NVL array 110 is constrained to have an odd number of data columns. For example, in this embodiment, there are thirty-one data columns and one parity column, for a total of thirty-two bitcell columns. [00102] In some embodiments, when an NVL read operation occurs, control logic for the NVL array causes the parity bit to be read, inverted, and written back. This allows the NVL array to detect when prior NVL array writes were incomplete or invalid/damaged. Remnant polarization is not completely wiped out by a single read cycle. Typically, it takes 5-15 read cycles to fully depolarize the FeCaps or to corrupt the data enough to reliably trigger an NVL read parity error. For example, if only four out of eight NVL array rows were written during the last NVL store operation due to loss of power, this would most likely result in an incomplete capture of the prior machine state. However, because of remnant polarization, the four rows that were not written in the most recent state storage sequence will likely still contain stale data from back in time, such as two NVL store events ago, rather than data from the most recent NVL data store event. The parity and stale data from the four rows will likely be read as valid data rather than invalid data. This is highly likely to cause the machine to lock up or crash when the machine state is restored from the NVL arrays during the next wakeup/power up event. Therefore, by writing back the parity bit inverted after every entry is read, each row of stale data is essentially forcibly invalidated. [00103] Writing data back to NVL entries is power intensive, so it is preferable to write back only the parity bit rather than all bits. The current embodiment of the array disables the PL1, PL2, and sense amp enable signals for all non-parity bits (i.e., data bits) to minimize the parasitic power consumption of this feature. [00104] In this manner, each time the SoC transitions from a no-power state to a power-on state, a valid determination can be made as to whether the data being read from the NVL arrays contains valid FF state information. If a parity error is detected, then a boot operation can be performed in place of restoring FF state from the NVL arrays.
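The read-and-invalidate behavior just described can be sketched as a wakeup-time restore loop, reusing row_parity_ok() from the sketch above; read_row, write_parity_bit, and request_full_boot are hypothetical helpers standing in for the array control logic:

    #include <stdint.h>
    #include <stdbool.h>

    bool row_parity_ok(uint32_t row);                  /* from the sketch above   */
    extern uint32_t read_row(int array, int row);
    extern void     write_parity_bit(int array, int row, unsigned bit);
    extern void     request_full_boot(void);

    /* After each row is read, only the parity bit is written back inverted,
     * forcibly invalidating the row so stale remnant-polarization data cannot
     * be mistaken for valid state on a later wakeup.  Data bitcells stay
     * unwritten: their PL1/PL2/SAEN signals are disabled to save power. */
    bool restore_all(int num_arrays, int rows)
    {
        for (int a = 0; a < num_arrays; a++)
            for (int r = 0; r < rows; r++) {
                uint32_t row = read_row(a, r);
                if (!row_parity_ok(row)) {             /* stale or damaged data   */
                    request_full_boot();
                    return false;
                }
                write_parity_bit(a, r, ((row >> 31) & 1u) ^ 1u);
            }
        return true;                                   /* FF state restored OK    */
    }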
[00105] Referring back to FIG. 1, low power SoC 100 has multiple voltage and power domains, such as VDDN_FV and VDDN_CV for the NVL arrays, VDDR for the sleep mode retention latches and well supplies, and VDDL for the bulk of the logic blocks that form the system microcontroller, various peripheral devices, SRAM, ROM, etc., as described earlier with regard to Table 1 and Table 2. FRAM has internal power switches and is connected to the always-on supply VDDZ. In addition, the VDDN_FV domain may be designed to operate at one voltage, such as the 1.5 volts needed by the FeCap bit cells, while the VDDL and VDDN_CV domains may be designed to operate at a lower voltage to conserve power, such as 0.9-1.5 volts, for example. Such an implementation requires using power switches 108, level conversion, and isolation in appropriate areas. The circuits are designed such that VDDL/VDDN_CV can be any valid voltage less than or equal to VDDN_FV and the circuit will function correctly. Aspects of the isolation and level conversion needed with respect to NVL blocks 110 will now be described in more detail. [00106] FIG. 14 is a block diagram illustrating power domains within NVL array 110. Various blocks of logic and memory may be arranged as illustrated in Table 3.

Table 3 - example full chip power domains

VDD (0.9-1.5V): Always ON supply for the VDDL, VDDR, and VDDN_CV power switches, and for always ON logic (if any).
VDDZ (1.5V): Always on 1.5V supply for FRAM and for the VDDN_FV power switches. FRAM has internal power switches.
VDDL (0.9-1.5V): All logic, the master stage of all flops, SRAM, ROM, the write multiplexor, buffers on FF outputs, and mux outputs. Variable logic voltage, e.g. 0.9 to 1.5V. Derived from the output of the VDDL power switches.
VDDN_CV (0.9-1.5V): NVL array control and timing logic, IO circuits, and the NVL controller. Derived from the VDDN_CV power switches.
VDDN_FV (1.5V): NVL array wordline driver circuits 1042 and NVL bitcell array 1040. Same voltage as FRAM. Derived from the VDDN_FV power switches.
VDDR (0.9-1.5V): The data retention domain, including the slave stage of retention flops, buffers on the NVL clock, flop retention enable signal buffers, NVL control outputs such as flop update control signal buffers, and buffers on NVL data outputs. Derived from the VDDR power switches.

[00107] Power domains VDDL, VDDN_CV, VDDN_FV, and VDDR described in Table 3 are each controlled using a separate set of power switches, such as switches 108 described earlier. However, isolation may be needed for some conditions. Data output buffers within IO buffer block 1044 are in the NVL logic power domain VDDN_CV and therefore may remain off while domain VDDR (or VDDL, depending on the specific implementation) is ON during normal operation of the chip. ISO-Low isolation is implemented to tie all such signals to ground during such a situation. While VDDN_CV is off, logic connected to data outputs in the VDDR (or VDDL) domain in the random logic area may generate short circuit current between power and ground in internal circuits if any signals from the VDDN_CV domain are floating (not driven when the VDDN_CV domain is powered down) and are not isolated. The same is applicable for the correct_0/1 outputs and the scan out output of the NVL arrays. The general idea here is that any outputs of the NVL array will be isolated when the NVL array has no power given to it. In case there is always ON logic present in the chip, all signals going from VDDL or VDDN_CV to VDD must be isolated using input isolation at the VDD domain periphery. Additional built-in isolation exists in NVL flops at the ND input. Here, the input goes to a transmission gate whose control signal NU is driven by an always-on signal. When the input is expected to be indeterminate, NU is made low, thereby disabling the ND input port. Similar built-in isolation exists on the data inputs and scan-in of the NVL array. This isolation would be needed during NVL restore when VDDL is OFF. Additionally, signals NU and the NVL data input multiplexor enable signals (mux sel) must be buffered only in the VDDR domain. The same applies for the retention enable signal. [00108] To enable the various power saving modes of operation, the VDDL and VDDN* domains are shut off at various times, and isolation makes that possible without burning short circuit current.
[00109] Level conversion from the lower voltage VDDL domain to the higher voltage VDDN domain is needed on control inputs of the NVL arrays that go to the NVL bitcells, such as row enables, PL1, PL2, restore, recall, and clear, for example. This enables a reduction in system power dissipation by allowing blocks of SoC logic and NVL logic gates that can operate at a lower voltage to do so. For each row of bitcells in bitcell array 1040, there is a set of word line drivers 1042 that drive the signals for each row of bitcells, including plate lines PL1, PL2, transfer gate enable PASS, sense amp enable SAEN, clear enable CLR, and voltage margin test enable VCON, for example. The bitcell array 1040 and the wordline circuit block 1042 are supplied by VDDN. Level shifting on input signals to 1042 is handled by dedicated level shifters (see FIG. 15), while level shifting on inputs to the bitcell array 1040 is handled by special sequencing of the circuits within the NVL bitcells, without adding any additional dedicated circuits to the array datapath or bitcells. [00110] FIG. 15 is a schematic of a level converter 1500 for use in NVL array 110. FIG. 15 illustrates one wordline driver that may be part of the set of wordline drivers 1042. Level converter 1500 includes PMOS transistors P1, P2 and NMOS transistors N1, N2 that are formed in region 1502 in the 1.5 volt VDDN domain for wordline drivers 1042. However, the control logic in timing and control module 1046 is located in region 1503 in the 1.2v VDDL domain (1.2v is used to represent the variable VDDL core supply that can range from 0.9v to 1.5v). The 1.2 volt signal 1506 is representative of any of the row control signals that are generated by control module 1046 for use in accessing NVL bitcell array 1040. Inverter 1510 forms a complementary pair of control signals 1511, 1512 in region 1503 that are then routed to transistors N1 and N2 in level converter 1500. In operation, when 1.2 volt signal 1506 goes high, NMOS device N1 pulls the gate of PMOS device P2 low, which causes P2 to pull signal 1504 up to 1.5 volts. Similarly, when 1.2 volt signal 1506 goes low, complementary signal 1512 causes NMOS device N2 to pull the gate of PMOS device P1 low, which pulls up the gate of PMOS device P2 and allows signal 1504 to go low, to approximately zero volts. The NMOS devices must be stronger than the PMOS devices so that the converter does not get stuck. In this manner, level shifting may be done across the voltage domains, and power may be saved by placing the control logic, including inverter 1510, in the lower voltage domain 1503. For each signal, the controller is coupled to each level converter 1500 by two complementary control signals 1511, 1512. [00111] FIG. 16 is a timing diagram illustrating operation of level shifting using a sense amp within a ferroelectric bitcell. Input data that is provided to NVL array 110 from multiplexor 212, referring again to FIG. 2, also needs to be level shifted from the 1.2v VDDL domain to the 1.5 volts needed for best operation of the FeCaps in the 1.5 volt VDDN domain during write operations. This may be done using the sense amp of bit cell 400, for example. Referring again to FIG. 4 and to FIG. 13, note that each bit line BL, such as BL 1352, which comes from the 1.2 volt VDDL domain, is coupled to transfer gate 402 or 403 within bitcell 400. Sense amp 410 operates in the 1.5v VDDN power domain. Referring now to FIG. 16, note that during time period s2, data is provided on the bit lines BL, BLB, and the transfer gates 402, 403 are enabled by the pass signal PASS during time period s2 to transfer the data bit and its inverse value from the bit lines to differential nodes Q, QB. However, as shown at 1602, the voltage level transferred is limited to less than the 1.5 volt level because the bit line drivers are located in the 1.2v VDDL domain. [00112] Sense amp 410 is enabled by sense amp enable signals SAEN, SAENB during time periods s3, s4 to provide additional drive, as illustrated at 1604, after the write data drivers, such as write drivers 1156, 1157, have forced adequate differential 1602 on Q/QB during time period s2. Since the sense amp is supplied by a higher voltage (VDDN), the sense amp will respond to the differential established across it by the write data drivers and will clamp the logic 0 side of the sense amp to VSS (Q or QB) while the other side containing the logic 1 is pulled up to the VDDN voltage level. In this manner, the existing NVL array hardware is reused to provide a voltage level shifting function during NVL store operations. [00113] However, to avoid a short from the sense amp to the 1.2v driver supply, the write data drivers are isolated from the sense amp at the end of time period s2, before the sense amp is turned on during time periods s3, s4. This may be done by turning off the bit line drivers by de-asserting the STORE signal after time period s2 and/or by disabling the transfer gates by de-asserting PASS after time period s2. [00114] Using the above described arrangements, various configurations are possible to maximize power savings or usability at various points in a processing or computing device's operation cycle. In one such approach, a computing device can be configured to operate continuously across a series of power interruptions without loss of data or reboot. With reference to the example illustrated in FIG. 17, a processing device 1700 as described above includes a plurality of non-volatile logic element arrays 1710, a plurality of volatile storage elements 1720, and at least one non-volatile logic controller 1730 configured to control the plurality of non-volatile logic element arrays 1710 to store a machine state represented by the plurality of volatile storage elements 1720 and to read out a stored machine state from the plurality of non-volatile logic element arrays 1710 to the plurality of volatile storage elements 1720. A voltage or current detector 1740 is configured to sense a power quality from an input power supply 1750. [00115] A power management controller 1760 is in communication with the voltage or current detector 1740 to receive information regarding the power quality from the voltage or current detector 1740. The power management controller 1760 is also configured to be in communication with the at least one non-volatile logic controller 1730 to provide information effecting storing the machine state to and restoration of the machine state from the plurality of non-volatile logic element arrays 1710. [00116] A voltage regulator 1770 is connected to receive power from the input power supply 1750 and provide power to an output power supply rail 1755 configured to provide power to the processing device 1700.
The voltage regulator 1770 is further configured to be in communication with the power management controller 1760 and to disconnect the output power supply rail 1755 from the input power supply 1750, such as through control of a switch 1780, in response to a determination that the power quality is below a threshold. [00117] The power management controller 1760 and the voltage or current detector 1740 work together with the at least one non-volatile logic controller 1730 and voltage regulator 1770 to manage the data backup and restoration processes independently of the primary computing path. In one such example, the power management controller 1760 is configured to send a signal to effect stoppage of clocks for the processing device 1700 in response to the determination that the power quality is below the threshold. The voltage regulator 1770 can then send a disconnect signal to the power management controller 1760 in response to disconnecting the output power supply rail 1755 from the input power supply 1750. The power management controller 1760 sends a backup signal to the at least one non-volatile logic controller 1730 in response to receiving the disconnect signal. Upon completion of the backup of system state into the NVL arrays, the power can be removed from the SoC, or can continue to degrade, without further concern for loss of machine state. [00118] The individual elements that make the determination of power quality can vary in different approaches. For instance, the voltage regulator 1770 can be configured to detect the power quality rising above the threshold and, in response, to send a good power signal to the power management controller 1760. In response, the power management controller 1760 is configured to send a signal to provide power to the plurality of non-volatile logic element arrays 1710 and the at least one non-volatile logic controller 1730 to facilitate restoration of the machine state. The power management controller 1760 is configured to determine that power up is complete and, in response, send a signal to effect release of clocks for the processing device 1700, wherein the processing device 1700 resumes operation from the machine state prior to the determination that the power quality was below the threshold. [00119] To assure that the processing device 1700 has enough power to complete a backup process, a charge storage element 1790 is configured to provide temporary power to the processing device 1700 sufficient to power it long enough to store the machine state in the plurality of non-volatile logic element arrays 1710 after the output power supply rail 1755 is disconnected from the input power supply 1750. The charge storage element 1790 may be at least one dedicated on-die (or off-die) capacitor designed to store such emergency power. In another approach, the charge storage element 1790 may be circuitry in which naturally occurring parasitic charge builds up in the die, where the dissipation of the charge from the circuitry to ground provides sufficient power to complete a backup operation.
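The backup/restore handshake just described can be sketched as two event handlers. This is illustrative C for what are hardware blocks in the actual device; all function names are assumptions:

    /* Event-driven sketch of the handshake among the detector, power management
     * controller (PMC), regulator, and NVL controller described above. */
    extern void stop_clocks(void);
    extern void release_clocks(void);
    extern void disconnect_rail(void);      /* open switch 1780                  */
    extern void nvl_store_state(void);      /* runs from charge storage 1790     */
    extern void power_nvl_domains(void);
    extern void nvl_restore_state(void);

    void pmc_on_power_fail(void)            /* power quality fell below threshold */
    {
        stop_clocks();                      /* freeze machine state first        */
        disconnect_rail();                  /* regulator isolates the SoC rail   */
        nvl_store_state();                  /* backup powered by stored charge   */
        /* power may now be removed, or decay, without loss of machine state    */
    }

    void pmc_on_power_good(void)            /* power quality rose above threshold */
    {
        power_nvl_domains();                /* power NVL arrays + NVL controller */
        nvl_restore_state();                /* copy FF state back from FeCaps    */
        release_clocks();                   /* resume from the saved state       */
    }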
[00120] The architecture described above can facilitate a number of operating configurations that improve overall processing device function over previous designs. In one such example, known ULP applications sometimes require different tasks to be performed for each interrupt that is triggered, such that based on the specific interrupt that is triggered, different operations and code execution are desired. Also, Real Time Operating Systems (RTOS) often switch back and forth between multiple operating threads. Today, these applications copy the current machine context (program counter, stack pointer, register file contents, and the like) into temporary storage before switching to a different thread or interrupt, to be restored later upon returning to the current thread of code execution. This storage and restoration requires a lot of time and power. Time is the enemy of an RTOS, since the goal of an RTOS is to service operating requests "in real time" (almost instantly). [00121] To address this concern, a version of the processing or computing device described above can be configured to handle two or more operating threads or virtual machines. In one approach, the at least one non-volatile logic controller is configured to store first program data from a first program executed by the computing device apparatus in a first set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays. Similarly, the at least one non-volatile logic controller is further configured to store second program data from a second program executed by the computing device apparatus in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays. The first program and the second program can correspond to distinct executing threads or virtual machines for the computing device apparatus, and the storage can be completed in response to receiving a stimulus regarding an interrupt for the computing device apparatus or in response to a power supply quality problem for the computing device apparatus. When the device needs to switch between processing threads or virtual machines, the at least one non-volatile logic controller is further configured to restore the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first program or the second program is to be executed by the computing device apparatus. The stimuli described above could be an actual instruction that triggers the context switch, an interrupt signal, an event from an internal timer, an event coming from outside of the chip, or the like.
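A minimal sketch of this per-thread context selection, elaborated with FIG. 18 below; the mux-control helper names are assumptions standing in for the select inputs of the two multiplexers:

    /* One NVL array set per operating thread/virtual machine.  set_write_mux()
     * and set_read_mux() stand in for the select controls of the write-path and
     * read-path multiplexers (212 and 1822 in FIG. 18); names are illustrative. */
    enum nvl_set { NVL_SET_THREAD0 = 0, NVL_SET_THREAD1 = 1 };

    extern void set_write_mux(enum nvl_set s);  /* FFs -> selected NVL array set */
    extern void set_read_mux(enum nvl_set s);   /* selected NVL array set -> FFs */
    extern void nvl_store_state(void);
    extern void nvl_restore_state(void);

    /* Save the running thread's context and bring in the next thread's context. */
    void context_switch(enum nvl_set running, enum nvl_set next)
    {
        set_write_mux(running);
        nvl_store_state();     /* a few parallel row writes, not thousands of
                                  data-path-width transfers                     */
        set_read_mux(next);
        nvl_restore_state();
    }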
[00122] An example arrangement used to effect the storage and restoration of the different processing threads or virtual machines is illustrated in FIG. 18, which represents a modification of the example systems of FIGS. 1 and 2. In FIG. 18, a given cloud 1805 of volatile storage elements 230 and 237 includes a plurality 1810 of NVL arrays 1812 and 1814 associated with the volatile storage elements 230 and 237. In one approach, a multiplexer 212 is connected to variably connect individual ones of the volatile storage elements 230 and 237 to one or more corresponding individual ones of the non-volatile logic element arrays 1812 and 1814. In this approach, the at least one non-volatile logic controller 1806 is further configured to store the first program data or the second program data to the plurality of non-volatile logic element arrays 1812 and 1814 by controlling the multiplexer 212 to connect individual ones of the plurality of volatile storage elements 230 and 237 to either the first set 1812 of non-volatile logic element arrays or the second set 1814 of non-volatile logic element arrays based on whether the first program or the second program is executing in the computing device apparatus. A second multiplexer 1822 is connected to variably connect outputs of individual ones of the non-volatile logic element arrays 1812 and 1814 to inputs of one or more corresponding individual ones of the volatile storage elements 230 and 237. Here, the at least one non-volatile logic controller 1806 is further configured to restore the first program data or the second program data to the plurality of volatile storage elements 230 and 237 by controlling the multiplexer 1822 to connect inputs of individual ones of the plurality of volatile storage elements 230 and 237 to outputs of either the first set 1812 of non-volatile logic element arrays or the second set 1814 of non-volatile logic element arrays based on whether the first program or the second program is to be executed in the computing device apparatus. Generally speaking, in this example, the NVL arrays receive signals from the associated NVL controller during both read and write, whereas the first multiplexer 212 receives signals during the write-to-NVL-array process and the second multiplexer 1822 receives signals during the read-from-NVL-array process. [00123] FIG. 19 is a flow chart illustrating operation of a processing device operating two or more processing threads as described above. The method includes operating 1902 a processing device having at least a first processing thread and a second processing thread using a plurality of volatile storage elements. First program data stored in the plurality of volatile storage elements during execution of the first processing thread is stored 1904 in a first set of non-volatile logic element arrays of a plurality of non-volatile logic element arrays. Similarly, second program data stored in the plurality of volatile storage elements during execution of the second processing thread is stored 1906 in a second set of non-volatile logic element arrays of the plurality of non-volatile logic element arrays. The storage in the NVL arrays can be done in response to a program based or power supply quality problem based interrupt, and the choice of which set of data to back up in the NVL arrays can be made based on the type of interrupt received. By one approach, the method can include controlling a multiplexer to connect individual ones of the plurality of volatile storage elements to either the first set of non-volatile logic element arrays or the second set of non-volatile logic element arrays based on whether the first processing thread or the second processing thread is executing in the processing device. To allow further processing of the respective threads, the method includes restoring 1908 the first program data or the second program data from the plurality of non-volatile logic element arrays to the plurality of volatile storage elements in response to receiving a stimulus regarding whether the first processing thread or the second processing thread is to be executed. [00124] So configured, with reference to the example discussed above, by using NVL mini-arrays to save the key machine context, any number of distinct executing threads or virtual machines can be supported (limited only by the die area needed for the required NVL arrays). Switching to a different code stream based on the nature of the interrupt that needs to be serviced is simply a matter of saving the current machine context (program counter, registers, stack pointer, and the like) to the NVL mini-arrays dedicated to that operating thread and recovering the desired operating context from another set of NVL arrays.
Switching between two operating contexts is controlled in hardware by using muxes on the NVL mini-array read and write data ports and control inputs to select the desired set of mini-arrays for the required operation. The multiple machine contexts are saved in NVL mini-arrays and are thus not sensitive to interruptions in the power supply. Machine execution can continue uninterrupted across supply disruptions, independent of the operating context being executed when the power is lost. [00125] Moreover, time and power savings in switching between operating threads or machine states can be realized. For example, in known systems, the existing machine context must be saved before operations are switched to another machine context. This is typically done by moving the machine context in chunks equal in size to the normal machine data path width (8-bit, 16-bit, 32-bit, 64-bit, etc.). Because the entire data bandwidth to memory in a typical machine is limited by the size of the machine's data path, it takes more than one machine clock cycle to store the machine context. For example, if a context must save a 32-bit program counter, a 32-bit stack pointer, a 64 entry x 32 bit register file, and a 32 entry x 32 bit register file, then the total machine context is 98 thirty-two-bit machine "words". A full context save would take 98 clock cycles, assuming the memory can accept one 32-bit word per clock cycle. A full machine context can contain 1K-500K FFs depending on system complexity. For a system with a 32-bit data word and 500K FFs, it could take 500,000/32 bits per cycle = 15,625 clock cycles to save the entire virtual machine state. By contrast, NVL arrays arranged as described herein have parallel access to all FFs. In an example with 8 entries per NVL array and all NVL arrays operating in parallel, it would only take 8 clock cycles to store 500K FFs' worth of machine state.

System Example

[00126] FIG. 20 is a block diagram of another SoC 2000 that includes NVL arrays, as described above. SoC 2000 features a Cortex-M0 processor core 2002, universal asynchronous receiver/transmitter (UART) 2004 and SPI (serial peripheral interface) 2006 interfaces, and 10KB ROM 2010, 8KB SRAM 2012, and 64KB FRAM (ferroelectric RAM) 2014 memory blocks, characteristic of a commercial ultra low power (ULP) microcontroller. The 130nm FRAM-process-based SoC uses a single 1.5V supply, an 8MHz system clock, and a 125MHz clock for NVL operation. The SoC consumes 75uA/MHz and 170uA/MHz while running code from SRAM and FRAM, respectively. The energy and time cost of backing up and restoring the entire system state of 2537 FFs is only 4.72nJ and 320ns, and 1.34nJ and 384ns, respectively, which sets the industry benchmark for this class of device. SoC 2000 provides test capability for each NVL bit, as described in more detail above, and an in-situ read signal margin of 550mV. [00127] SoC 2000 has 2537 FFs and latches served by 10 NVL arrays. A central NVL controller controls all the arrays and their communication with FFs, as described in more detail above. The distributed NVL mini-array system architecture helps amortize test feature costs, achieving an SoC area overhead of only 3.6% with an exceptionally low system level sleep/wakeup energy cost of 2.2pJ/0.66pJ per bit. [00128] Although the invention finds particular application to microcontrollers (MCU) implemented, for example, in a System on a Chip (SoC), it also finds application to other forms of processors.
A SoC may contain one or more modules which each include custom designed functional circuits combined with pre-designed functional circuits provided by a design library. [00129] While the invention has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various other embodiments of the invention will be apparent to persons skilled in the art upon reference to this description. For example, other portable or mobile systems such as remote controls, access badges and fobs, smart credit/debit cards and emulators, smart phones, digital assistants, and any other now known or later developed portable or embedded system may embody NVL arrays as described herein to allow nearly immediate recovery to a full operating state from a completely powered down state. [00130] While embodiments of retention latches coupled to a nonvolatile FeCap bitcell are described herein, in another embodiment, a nonvolatile FeCap bitcell from an NVL array may be coupled to a flip-flop or latch that does not include a low power retention latch. In this case, the system would transition between a full power state, or an otherwise reduced power state based on reduced voltage or clock rate, and a totally off power state, for example. As described above, before turning off the power, the state of the flip-flops and latches would be saved in distributed NVL arrays. When power is restored, the flip-flops would be initialized via an input provided by the associated NVL array bitcell. [00131] The techniques described in this disclosure may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the software may be executed in one or more processors, such as a microprocessor, application specific integrated circuit (ASIC), field programmable gate array (FPGA), or digital signal processor (DSP). The software that executes the techniques may be initially stored in a computer-readable medium such as a compact disc (CD), a diskette, a tape, a file, memory, or any other computer readable storage device, and loaded and executed in the processor. In some cases, the software may also be sold in a computer program product, which includes the computer-readable medium and packaging materials for the computer-readable medium. In some cases, the software instructions may be distributed via removable computer readable media (e.g., floppy disk, optical disk, flash memory, USB key), via a transmission path from computer readable media on another digital system, etc. [00132] Although method steps may be presented and described herein in a sequential fashion, one or more of the steps shown and described may be omitted, repeated, performed concurrently, and/or performed in a different order than the order shown in the figures and/or described herein. Accordingly, embodiments of the invention should not be considered limited to the specific ordering of steps shown in the figures and/or described herein. [00133] It is therefore contemplated that the claims will cover any such modifications of the embodiments as fall within the true scope of the invention.
A system and method for optional upgrading of a software application on a wireless device during the execution of the software application. The system includes receiving a request to replace a resident executable application with a different version of the application. The system further includes detecting the active execution of the resident executable application. The system also includes receiving, via a network, the different version of the application. Also, the system includes storing the different version of the application in a temporary location in response to detecting the active execution of the resident executable application. In addition, the system includes terminating the active execution of the resident executable application. The system also includes overwriting the resident executable application with the different version of the application stored in the temporary location. Further, the system includes initiating active execution of the different version of the application.
1. A method for replacing a resident executable application on a wireless device with a different version of the application, comprising: receiving a request to replace a resident executable application with a different version of the application; detecting the active execution of the resident executable application; receiving, via a network, the different version of the application; storing the different version of the application in a temporary location in response to detecting the active execution of the resident executable application; terminating the active execution of the resident executable application; overwriting the resident executable application with the different version of the application stored in the temporary location; and initiating active execution of the different version of the application.

2. The method of claim 1, wherein the request is initiated by the wireless device.

3. The method of claim 1, wherein the request is initiated by detection of a user input.

4. The method of claim 1, wherein the different version of the application is a previous version.

5. The method of claim 1, wherein the application is at least one of an extension, a script, and content data.

6. The method of claim 1, wherein the temporary location is on at least one of a peripheral device and a remote network location.

7. A wireless device containing a resident executable application, comprising: logic configured to receive a request to replace the resident executable application with a different version of the application; logic configured to detect the active execution of the resident executable application; logic configured to receive, via a network, the different version of the application; logic configured to store the different version of the application in a temporary location in response to detecting the active execution of the resident executable application; logic configured to terminate the active execution of the resident executable application; logic configured to overwrite the resident executable application with the different version of the application stored in the temporary location; and logic configured to initiate active execution of the different version of the application.

8. The wireless device of claim 7, wherein the request is initiated by the wireless device.

9. The wireless device of claim 7, wherein the request is initiated by detection of a user input.

10. The wireless device of claim 7, wherein the different version of the application is a previous version.

11. The wireless device of claim 7, wherein the application is at least one of an extension, a script, and content data.

12. The wireless device of claim 7, wherein the temporary location is on at least one of a peripheral device and a remote network location.

13. A computer program embodied on a computer-readable medium and operable to replace a resident executable application on a wireless device with a different version of the application, the computer program comprising: code operable to receive a request to replace the resident executable application with a different version of the application; code operable to detect the active execution of the resident executable application; code operable to receive, via a network, the different version of the application; code operable to store the different version of the application in a temporary location in response to detecting the active execution of the resident executable application; code operable to terminate the active execution of the resident executable application; code operable to overwrite the resident executable application with the different version of the application stored in the temporary location; and code operable to initiate active execution of the different version of the application.

14. The computer program of claim 13, wherein the application is at least one of an extension, a script, and content data.

15. A wireless device containing a resident executable application, comprising: means for receiving a request to replace the resident executable application with a different version of the application; means for detecting the active execution of the resident executable application; means for receiving, via a network, the different version of the application; means for storing the different version of the application in a temporary location in response to detecting the active execution of the resident executable application; means for terminating the active execution of the resident executable application; means for overwriting the resident executable application with the different version of the application stored in the temporary location; and means for initiating active execution of the different version of the application.

16. The wireless device of claim 15, wherein the application is at least one of an extension, a script, and content data.

17. A method for replacing a resident executable application on a wireless device with a different version of the application, comprising: receiving a request to replace a resident executable application with a different version of the application; receiving, via a network, the different version of the application; storing the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version of the application; detecting the active execution of the resident executable application; terminating the active execution of the resident executable application; overwriting the resident executable application with the different version of the application stored in the temporary location; and initiating active execution of the different version of the application.

18. The method of claim 17, wherein the request is initiated by the wireless device.

19. The method of claim 17, wherein the request is initiated by detection of a user input.

20. The method of claim 17, wherein the different version of the application is a previous version.

21. The method of claim 17, wherein the resident executable application is at least one of an extension, a script, and content data.

22. The method of claim 17, wherein the temporary location is at least one of a peripheral device and a remote network location.

23. A wireless device containing a resident executable application, comprising: logic configured to receive a request to replace the resident executable application with a different version of the application; logic configured to receive, via a network, the different version of the application; logic configured to store the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version of the application; logic configured to detect the active execution of the resident executable application; logic configured to terminate the active execution of the resident executable application; logic configured to overwrite the resident executable application with the different version of the application stored in the temporary location; and logic configured to initiate active execution of the different version of the application.

24. The wireless device of claim 23, wherein the request is initiated by the wireless device.

25. The wireless device of claim 23, wherein the request is initiated by detection of a user input.

26. The wireless device of claim 23, wherein the different version of the application is a previous version.

27. The wireless device of claim 23, wherein the resident executable application is at least one of an extension, a script, and content data.

28. The wireless device of claim 23, wherein the temporary location is at least one of a peripheral device and a remote network location.

29. A computer program embodied on a computer-readable medium and operable to replace a resident executable application on a wireless device with a different version of the application, the computer program comprising: code operable to receive a request to replace the resident executable application with a different version of the application; code operable to receive, via a network, the different version of the application; code operable to store the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version of the application; code operable to detect the active execution of the resident executable application; code operable to terminate the active execution of the resident executable application; code operable to overwrite the resident executable application with the different version of the application stored in the temporary location; and code operable to initiate active execution of the different version of the application.

30. The computer program of claim 29, wherein the application is at least one of an extension, a script, and content data.

31. A wireless device containing a resident executable application, comprising: means for receiving a request to replace the resident executable application with a different version of the application; means for receiving, via a network, the different version of the application; means for storing the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version of the application; means for detecting the active execution of the resident executable application; means for terminating the active execution of the resident executable application; means for overwriting the resident executable application with the different version of the application stored in the temporary location; and means for initiating active execution of the different version of the application.

32. The wireless device of claim 31, wherein the application is at least one of an extension, a script, and content data.

33. A method for replacing a resident executable application on a wireless device with a different version of the application, comprising: receiving a request to replace a resident executable application with a different version of the application; receiving, via a network, the different version of the application; storing the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version of the application; detecting the active execution of the resident executable application in an active application location; terminating the active execution of the resident executable application; and initiating execution of the first application detected in a sequential search of the upgrade location and the active application location.

34. The method of claim 33, wherein the request is initiated by the wireless device.

35. The method of claim 33, wherein the request is initiated by detection of a user input.

36. The method of claim 33, wherein the different version of the application is a previous version.

37. The method of claim 33, wherein the resident executable application is at least one of an extension, a script, and content data.

38. The method of claim 33, wherein the upgrade location is at least one of a peripheral device and a remote network location.

39. A wireless device containing a resident executable application, comprising: logic configured to receive a request to replace the resident executable application with a different version of the application; logic configured to receive, via a network, the different version of the application; logic configured to store the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version of the application; logic configured to detect the active execution of the resident executable application in an active application location; logic configured to terminate the active execution of the resident executable application; and logic configured to initiate execution of the first application detected in a sequential search of the upgrade location and the active application location.

40. The wireless device of claim 39, wherein the request is initiated by the wireless device.

41. The wireless device of claim 39, wherein the request is initiated by detection of a user input.

42. The wireless device of claim 39, wherein the different version of the application is a previous version.

43. The wireless device of claim 39, wherein the resident executable application is at least one of an extension, a script, and content data.

44. The wireless device of claim 39, wherein the upgrade location is at least one of a peripheral device and a remote network location.

45. A computer program embodied on a computer-readable medium and operable to replace a resident executable application on a wireless device with a different version of the application, the computer program comprising: code operable to receive a request to replace the resident executable application with a different version of the application; code operable to receive, via a network, the different version of the application; code operable to store the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version of the application; code operable to detect the active execution of the resident executable application in an active application location; code operable to terminate the active execution of the resident executable application; and code operable to initiate execution of the first application detected in a sequential search of the upgrade location and the active application location.

46. The computer program of claim 45, wherein the application is at least one of an extension, a script, and content data.

47. A wireless device containing a resident executable application, comprising: means for receiving a request to replace the resident executable application with a different version of the application; means for receiving, via a network, the different version of the application; means for storing the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version of the application; means for detecting the active execution of the resident executable application in an active application location; means for terminating the active execution of the resident executable application; and means for initiating execution of the first application detected in a sequential search of the upgrade location and the active application location.

48. The wireless device of claim 47, wherein the application is at least one of an extension, a script, and content data.
Method, software and device for application upgrade during execution

Cross-reference to related applications

This application claims priority from US Provisional Application No. 60/515,802, filed on October 29, 2003, which is incorporated herein by reference.

Technical field

The present invention relates generally to data networks and computer communications across such data networks. More specifically, the present invention relates in part to the installation and removal of software applications and their components on wireless devices that selectively communicate with one or more application download servers across a wireless data network. More specifically still, the invention relates in part to the optional upgrade of a software application on a wireless device during the execution of the software application.

Background

A wireless device, such as a cellular phone, transmits data packets including voice and data via a wireless network. Cellular phones are being manufactured with ever increasing computing power and are becoming equivalent to personal computers and handheld personal digital assistants ("PDAs"). These "smart" cellular phones have application programming interfaces ("APIs") installed on their native computer platforms that allow software developers to create software applications, usually called "programs", that typically execute entirely on the cellular phone. The API sits between the wireless device system software and the software application so that the application can utilize the capabilities of the cellular phone computer without requiring the software developer to have the specific cellular phone system source code.

A software application may be pre-loaded when the wireless device is manufactured, or the user may later request to download another application via the cellular telecommunication carrier network, where the downloaded application can then be executed on the wireless telephone. Thus, users of wireless phones can customize their wireless phones by selectively downloading applications such as games, print media, stock updates, news, or any other type of information or application available for download over a wireless network. To manage cellular phone resources, users of wireless devices purposefully delete applications and data from the wireless phone platform to clear storage space so that new applications can be loaded into the cleared space.

Compared to the larger computer platforms of personal computers and PDAs, wireless devices have limited resources (such as storage and processing) for non-essential applications. In general, telecommunications applications have priority in the use of system resources, and other applications are allocated resources as they are used. Therefore, the wireless device has a limited capacity for storing all of an application's files, and management of resources is left to the user of the phone, who deletes applications to make room for a new application to be downloaded to the wireless device. Wireless devices otherwise cannot download applications that they lack the resources to store and execute.

When trying to free up resources on a wireless device, users are generally unable to remove certain components of a resident application without deactivating the entire resident application. If the user manages to remove a specific component, this action does not produce the expected release of resources, because the deactivated resident application cannot be restored without completely reinstalling the application. Undeleted application components thus uselessly occupy storage space even though the main application is no longer executable. This all-or-nothing deletion of resident software applications on the wireless device greatly limits the number of applications that can reside on the wireless device and be operated by the user.

Therefore, it would be advantageous to provide a wireless device that can remove certain components of an application while maintaining important data for the application, such as licenses and user-specified data, thereby maximizing the degree of use of computer resources on the wireless device. When the wireless device needs the deleted software component in order to execute the application again, the wireless device can obtain the software component through the wireless network. The present invention is therefore directed to systems and methods that provide control over the removal and reloading of selected software application components at a wireless device.

Summary of the Invention

Embodiments disclosed herein include systems and methods for upgrading software applications on wireless devices such as cellular phones, personal digital assistants, pagers, or other computer platforms, where the upgrade is optionally performed during execution of the software application. At least one embodiment includes receiving a request to replace a resident executable application with a different version of the application. This embodiment further includes detecting the active execution of the resident executable application. This embodiment further includes receiving the different version of the application via a network. This embodiment further includes storing the different version of the application in a temporary location in response to detecting the active execution of the resident executable application. This embodiment further includes terminating the active execution of the resident executable application. This embodiment further includes overwriting the resident executable application with the different version of the application stored in the temporary location. And, this embodiment further includes initiating the active execution of the different version of the application.
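A minimal C sketch of this upgrade-during-execution flow follows; every helper name and the TEMP_LOCATION path are hypothetical placeholders for illustration, not an actual handset API:

    #include <stdbool.h>

    /* Assumed device-platform helpers -- illustrative stand-ins only. */
    extern bool app_is_executing(const char *app);
    extern bool download_to(const char *url, const char *path);
    extern void terminate_app(const char *app);
    extern bool overwrite_file(const char *dst, const char *src);
    extern void launch_app(const char *app);

    #define TEMP_LOCATION "/tmp/app.new"   /* temporary (or "upgrade") location */

    bool upgrade_in_place(const char *app, const char *version_url)
    {
        bool was_running = app_is_executing(app); /* detect active execution    */
        if (!download_to(version_url, TEMP_LOCATION))
            return false;                         /* receive via network, stage
                                                     in the temporary location  */
        if (was_running)
            terminate_app(app);                   /* terminate active execution */
        if (!overwrite_file(app, TEMP_LOCATION))  /* overwrite resident copy    */
            return false;
        launch_app(app);                          /* start the different version */
        return true;
    }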
And, this embodiment further includes logic configured to initiate the actual execution of the different version of the application.
At least one embodiment includes code operable to receive a request to replace a resident executable application with a different version of the application. This embodiment further includes code operable to detect the actual execution of the resident executable application. This embodiment further includes code operable to receive the different version of the application via a network. This embodiment further includes code operable to store the different version of the application in a temporary location in response to detecting the actual execution of the resident executable application. This embodiment further includes code operable to terminate the actual execution of the resident executable application. This embodiment further includes code operable to overwrite the resident executable application with the different version of the application stored in the temporary location. And, this embodiment further includes code operable to initiate actual execution of the different version of the application.
At least one embodiment includes means for receiving a request to replace a resident executable application with a different version of the application. This embodiment includes means for detecting the actual execution of the resident executable application. This embodiment includes means for receiving the different version of the application via a network. This embodiment includes means for storing the different version of the application in a temporary location in response to detecting the actual execution of the resident executable application. This embodiment includes means for terminating the actual execution of the resident executable application. This embodiment includes means for overwriting the resident executable application with the different version of the application stored in the temporary location. And, this embodiment includes means for starting the actual execution of the different version of the application.
At least one embodiment includes receiving a request to replace a resident executable application with a different version of the application. This embodiment further includes receiving the different version of the application via a network. This embodiment further includes storing the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version. This embodiment further includes detecting the actual execution of the resident executable application. This embodiment further includes terminating the actual execution of the resident executable application. This embodiment further includes overwriting the resident executable application with the different version of the application stored in the temporary location. And, this embodiment further includes starting the actual execution of the different version of the application.
At least one embodiment includes logic configured to receive a request to replace a resident executable application with a different version of the application. This embodiment also includes logic configured to receive the different version of the application via a network.
This embodiment further includes logic configured to store the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version. This embodiment further includes logic configured to detect the actual execution of the resident executable application. This embodiment further includes logic configured for terminating the actual execution of the resident executable application. This embodiment further includes logic configured to overwrite the resident executable application with the different version of the application stored in the temporary location. And, this embodiment further includes logic configured to initiate the actual execution of the different version of the application.
At least one embodiment includes code operable to receive a request to replace a resident executable application with a different version of the application. This embodiment further includes code operable to receive the different version of the application via a network. This embodiment further includes code operable to store the different version of the application in a temporary location responsive to receiving the request to replace the resident executable application with the different version. This embodiment further includes code operable to detect the actual execution of the resident executable application. This embodiment further includes code operable to terminate the actual execution of the resident executable application. This embodiment further includes code operable to overwrite the resident executable application with the different version of the application stored in the temporary location. And, this embodiment further includes code operable to initiate actual execution of the different version of the application.
At least one embodiment includes means for receiving a request to replace a resident executable application with a different version of the application. This embodiment further includes means for receiving the different version of the application via a network. This embodiment further includes means for storing the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version. This embodiment further includes means for detecting the actual execution of the resident executable application. This embodiment further includes means for terminating the actual execution of the resident executable application. This embodiment further includes means for overwriting the resident executable application with the different version of the application stored in the temporary location. And, this embodiment further includes means for initiating actual execution of the different version of the application.
At least one embodiment includes receiving a request to replace a resident executable application with a different version of the application. This embodiment further includes receiving the different version of the application via a network. This embodiment further includes storing the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version. This embodiment further includes detecting the actual execution of the resident executable application in an actual application location.
This embodiment further includes terminating the actual execution of the resident executable application. Moreover, this embodiment further includes starting execution of the first application detected in a continuous search of the upgrade location and the actual application location.
At least one embodiment includes logic configured to receive a request to replace a resident executable application with a different version of the application. This embodiment further includes logic configured for receiving the different version of the application via a network. This embodiment further includes logic configured to store the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version. This embodiment further includes logic configured to detect the actual execution of the resident executable application in an actual application location. This embodiment further includes logic configured for terminating the actual execution of the resident executable application. And, this embodiment further includes logic configured to initiate execution of the first application detected in a continuous search of the upgrade location and the actual application location.
At least one embodiment includes code operable to receive a request to replace a resident executable application with a different version of the application. This embodiment includes code operable to receive the different version of the application via a network. This embodiment includes code operable to store the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version. This embodiment includes code operable to detect the actual execution of the resident executable application in an actual application location. This embodiment includes code operable to terminate the actual execution of the resident executable application. And, this embodiment includes code operable to initiate execution of the first application detected in a continuous search of the upgrade location and the actual application location.
At least one embodiment includes means for receiving a request to replace a resident executable application with a different version of the application. This embodiment also includes means for receiving the different version of the application via a network. This embodiment further includes means for storing the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version. This embodiment also includes means for detecting the actual execution of the resident executable application in an actual application location. This embodiment also includes means for terminating the actual execution of the resident executable application. And, this embodiment includes means for initiating execution of the first application detected in a continuous search of the upgrade location and the actual application location.
Other objects, advantages and features of the present invention will become apparent from the description of the drawings set out below, the specific embodiments of the invention, and the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a representative diagram of a system of the present invention that manages software applications on wireless devices that selectively communicate with one or more application download servers via a wireless network.
FIG. 2 is a block diagram of hardware components of an exemplary wireless network that provides communication between different wireless devices and an application download server and database.
FIG. 3 is a file table resident on a wireless device platform, illustrating an application program and its constituent components.
FIG. 4 is a flowchart illustrating the selective removal of application components on a wireless device.
FIG. 5 is a flowchart illustrating a wireless device retrieving an application component from an application download server to restore an application on the wireless device so that the application can be executed again.
FIG. 6 is a flowchart illustrating one embodiment of a system including replacing a resident executable application with a different version of the application.
FIG. 7 is a flowchart illustrating another embodiment of a system including replacing a resident executable application with a different version of the application.
FIG. 8 is a block diagram of one embodiment of a wireless device used in a system for replacing a resident executable application with a different version of the application.
FIG. 9 is a block diagram of one embodiment of a wireless device used in a system for replacing a resident executable application with a different version of the application.
FIG. 10 is a block diagram of one embodiment of a wireless device used in a system for replacing a resident executable application with a different version of the application.
DETAILED DESCRIPTION
Referring to FIG. 1, a system 10 of the present invention is shown for deleting and reloading software application components on a wireless device (e.g., a cellular phone 12) that communicates across a wireless network 14 with at least one application download server 16, which selectively transmits software applications and components to the wireless device across the wireless communication portal or other data portals of the wireless network 14. As shown here, the wireless device may be a cellular phone 12, a personal digital assistant 18, a pager 20 (shown here as a two-way text pager), or even a separate computer platform 22 that has a wireless communication portal and may additionally have a wired connection 24 to a network or the Internet. The system of the present invention may therefore be implemented on any form of remote module including a wireless communication portal, including, without limitation, a wireless modem, a PCMCIA card, an access terminal, a personal computer, a telephone without a display or keypad, or any combination or subcombination thereof.
The application download server 16 and other computer elements in communication with the wireless network 14 are shown here on a network 26. There is a second server 30 and a stand-alone server 32, and each server can provide separate services and processing to the wireless devices 12, 18, 20, 22 across the wireless network 14. Preferably, there is also at least one stored-application database 28 that holds applications that can be downloaded by the wireless devices 12, 18, 20, 22.
A block diagram illustrating the interrelationships of the components of the wireless network 14 and the elements of the present invention is shown more fully in FIG. 2.
The wireless network 14 is merely exemplary and may include any system whereby remote modules, such as the wireless devices 12, 18, 20, 22, communicate by radio with each other and/or with components of the wireless network 14, including (but not limited to) wireless network carriers and/or servers. The application download server 16 and the stored-application database 28, along with any other servers required to provide cellular telecommunication services, such as the server 30, communicate with the carrier network 40 via data links such as the Internet, a secure LAN, a WAN, or another network. The carrier network 40 controls messages (sent as data packets) destined for a message service controller ("MSC") 42. The carrier network 40 communicates with the MSC 42 via a network, the Internet, and/or POTS ("plain old telephone service"). Generally, the network or Internet connection between the carrier network 40 and the MSC 42 transmits data, and the POTS transmits voice information. The MSC 42 is connected to a plurality of base stations ("BTS") 44. In a manner similar to the carrier network, the MSC 42 is typically connected to the BTS 44 by a network and/or the Internet for data transmission and by POTS for voice information. The BTS 44 ultimately broadcasts messages wirelessly to a wireless device, such as the cell phone 12, via short message service ("SMS") or another radio method known in the art.
A wireless device such as the cellular phone 12 has a computer platform 50 that can receive and execute software applications transmitted from the application download server 16. The computer platform 50 includes an application-specific integrated circuit ("ASIC") 52 or other processor, microprocessor, logic circuit, or other data processing device. The ASIC 52 is installed when the wireless device is manufactured and is generally not upgradeable. The ASIC 52 or other processor executes an application programming interface ("API") layer that interfaces with resident programs in the memory 56 of the wireless device. The memory may consist of read-only or random-access memory (RAM and ROM), EPROM, flash memory cards, or any memory commonly used in computer platforms. The computer platform 50 also includes a local database 58 that can hold applications not currently in use in the memory 56. The local database 58 is typically a flash memory unit, but may be any secondary storage device known in the art, such as magnetic media, EPROM, optical media, magnetic tape, or a floppy or hard disk.
A wireless device, such as the cell phone 12, therefore downloads one or more software applications, such as games, news, stock monitors, and the like, saves the applications on the local database 58 when they are not in use, and uploads resident applications stored on the local database 58 to the memory 56 as needed by the user for execution on the API 54. However, significant cost and size limitations of wireless devices restrict the storage capacity available in the local database 58 and the memory 56, so only a limited number of resident software applications can be stored on the wireless device. The system and method of the present invention manage this storage capacity limitation by selectively removing and reloading individual software application components, as described further below.
Referring to FIG. 3, an illustrative file structure or data management structure stored in the API 54 is shown.
The top-level domain is a "file" 60 containing all discrete software files on the computer platform 50. The file structure of FIG. 3 is merely illustrative; it may not appear in this form on the computer platform 50, and may even exist entirely in machine code on the wireless devices 12, 18, 20, 22 without a discernible file structure. Within file 60 is an API, shown here as the Binary Runtime Environment for Wireless ("BREW") 62, an API from QUALCOMM(R) for interacting with software applications on the wireless device computer platform 50. The BREW 62 file includes an application file 64, one file being a chess game 66 that has been downloaded from the application download server 16 and now resides on the local database 58 of the computer platform 50 of the wireless device. For illustrative purposes, the chess 66 application is a resident software application for a wireless device.
The chess 66 application includes several software components 68, such as the files chess.mod and chess.bar. The application components 68 are the modules necessary to execute the chess application on the computer platform 50. Chess 66 also includes data associated with the particular application, shown here as scores.sig 70, which holds the stored scores of a user playing the chess game on the computer platform 50. A license, for example as a hidden file, can also be included in the chess 66 application. Thus, the application components 68 that allow execution of the chess game can easily be replaced with a copy transmitted from the application download server 16, whereas related application data, such as the scores 70 and the license, would be lost if their files or modules were deleted. The present invention therefore exploits the ability to obtain another copy of a deleted application component from the application download server 16 while maintaining application-related data that cannot be retrieved again, such as a license, user-specific data such as personal information and addresses, or even purely entertainment-related data, such as the previous scores 70 of a chess game.
When the user wishes to download another software application to a computer platform 50 that lacks sufficient resources, especially storage on the local database 58, the BREW API 62 or another space-management component can prompt the user to ask whether components of the chess application may be removed so that the requested application can be placed on the computer platform 50. Alternatively, the BREW API 62 can identify which components are to be removed and manage system resources automatically. When the chess.mod and chess.bar files have been deleted from the chess 66 file, the chess game will not be executable on the computer platform 50. By separating essential and non-essential files on the computer platform 50, the wireless device can selectively delete one or more application components 68 of resident software applications without losing application-related data, such as the score file 70.
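The separation can be made concrete with a minimal sketch, assuming a simple per-application manifest that marks which files are re-downloadable; the manifest format and the helper names (free_resources, is_executable) are illustrative assumptions, not part of the disclosed platform.

# Minimal sketch of selective component removal. The manifest distinguishes
# components that can be re-fetched from the application download server 16
# from data that cannot be retrieved again and so must be kept.
import os

CHESS_MANIFEST = {
    "removable": ["chess.mod", "chess.bar"],    # re-downloadable modules
    "retained": ["scores.sig", "license.sig"],  # user data and license stay put
}

def free_resources(app_dir: str, manifest: dict) -> int:
    # Delete only the re-downloadable components, returning the bytes freed.
    freed = 0
    for name in manifest["removable"]:
        path = os.path.join(app_dir, name)
        if os.path.exists(path):
            freed += os.path.getsize(path)
            os.remove(path)  # the app is now non-executable until reloaded
    return freed

def is_executable(app_dir: str, manifest: dict) -> bool:
    # The app can run only if every removable component is still present.
    return all(os.path.exists(os.path.join(app_dir, n))
               for n in manifest["removable"])

Under this scheme, deleting chess.mod and chess.bar frees their space while scores.sig and the license remain untouched, matching the behavior described above.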
When an application has had one or more application components removed, such as the chess.mod and chess.bar application components 68 of the chess game 66, and the user wishes to use the application again, the wireless device selectively prompts the application download server 16 across the wireless network 14 to transmit the one or more application components 68. Once the wireless device receives the application components 68, it installs the transmitted components back onto the computer platform 50 so that the resident application, here chess 66, can be executed again. It should be noted that not all application components 68 need be removed; components may be removed based on the size of the application or other criteria. In addition, a file containing application-related data, such as scores.sig 70, may also contain application components required to execute the application and need not be a pure data container.
FIGS. 4 and 5 are flowcharts illustrating the method of the present invention for managing the loading and removal of application components 68 of one or more software applications residing on the computer platform 50 of the wireless devices 12, 18, 20, 22. As shown in FIG. 4, the computer platform 50 receives an instruction to download a software application, as shown in step 80, and a determination is then made on the ASIC 52 or other processor of the wireless device as to whether sufficient resources exist to download the application, as shown in step 82. If sufficient resources are available, the application is downloaded and stored, as shown in step 84, and the download process is terminated. If there are insufficient resources at decision 82, the user is prompted to clean up system resources to download the application, as shown in step 86, which requires deleting one or more application components. A decision is then made as to whether the user permits the clean-up of resources, as shown at decision 88, and if not, the user is notified that there are insufficient resources available for download, as shown in step 90, and the download process is terminated. If the user grants permission to clean up resources at decision 88, then one or more application components, such as components 68, are selectively deleted to free the necessary resources, and the deletion occurs without loss of application-related data, such as the scores.sig file 70, or any license to use the application. The application is then downloaded and stored on the computer platform 50, as shown in step 94, and the download process is terminated.
The process of reinstalling deleted components is shown in FIG. 5, and begins when a request is received to execute an application having one or more deleted components 68, as shown in step 100; in this example, the user attempts to play the chess game again. The user is then preferably prompted to establish a communication link to the application download server 16, as shown in step 102. Alternatively, however, the wireless device may automatically establish the communication link upon receiving the execution request. If the user is prompted, a decision is made as to whether the user has authorized the link, as shown at decision 104. If the user refuses to establish the link, the user is notified that the required application components must be downloaded to execute the requested application, as shown in step 106, and the execution request is then terminated.
If the user authorizes the communication link at decision 104, a communication link with the application download server 16 is established, as shown in step 108.
Once the communication link with the application download server 16 has been established, the wireless device prompts the application download server 16 to transmit the one or more application components required by the wireless device to execute the requested application, as shown in step 110. It is then determined whether the server has transmitted the necessary application components, as shown at decision 112, and if not, the user is notified that the necessary components cannot be obtained, as shown in step 114, and the execution request is terminated. Otherwise, if the server has transmitted the necessary components at decision 112, the wireless device receives the components from the application download server, as shown in step 116, and installs the received components into the application to make the application executable, as shown in step 118. The application is then executed on the wireless device until terminated, as shown in step 120.
If the reloading of deleted application components, such as application components 68, is automatic, then the process of FIG. 5 may occur without prompting the user, and the wireless device notifies the user only if the necessary components cannot be downloaded, as shown in step 114.
The step of establishing a communication link generally occurs through a digital or analog cellular telecommunication network, as shown in FIG. 2, but other wireless networks, such as a wireless LAN or a microwave or infrared network, may alternately be used. In addition, establishing a communication link can occur automatically on a wireless device 12, 18, 20, 22 that desires to execute a resident software application having one or more related components removed, i.e., the wireless device itself bridges communication with the application download server 16 across the wireless network 14. Alternatively, the step of establishing a communication link may occur upon a specific prompt to the user of the wireless device 12, 18, 20, 22 to bridge communication with the application download server 16 across the wireless network 14 so that one or more components can be transmitted for a resident software application whose related components were deleted. If the user of the wireless device must pay for a communication link, such as a cell phone call, to transfer a new application component to the wireless device, the user should be prompted to authorize the communication link necessary to reload the component before the component is removed. The user may be prompted again when a communication link is needed to retrieve the components of the application to make the application executable. However, if the wireless device is fully automated and the communication link imposes no charge on the user, then the user need not be prompted, and the reloading of the component is transparent unless a problem is encountered and an error message is generated, such as at step 114.
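The two flows of FIGS. 4 and 5 can be summarized in code form. The following is a sketch only; every helper passed in (available_space, app_size, prompt_user, delete_components, server_fetch, store, install, run) is a hypothetical stand-in for platform services that the flowcharts leave abstract.

# Sketch of the FIG. 4 download flow and the FIG. 5 reload flow.
# Helpers are injected so the sketch stays self-contained.

def download_application(app, available_space, app_size, prompt_user,
                         delete_components, server_fetch, store):
    if available_space() < app_size(app):                       # decision 82
        if not prompt_user("Delete components to make room?"):  # steps 86/88
            return "insufficient resources"                     # step 90
        delete_components()                # step 92: scores and licenses kept
    store(server_fetch(app))                                    # steps 84/94
    return "stored"

def execute_application(app, is_executable, prompt_user, server_fetch,
                        install, run):
    if not is_executable(app):                                  # step 100
        if not prompt_user("Connect to restore components?"):   # steps 102/104
            return "components required"                        # step 106
        components = server_fetch(app)                          # steps 108-116
        if components is None:
            return "components unavailable"                     # step 114
        install(app, components)                                # step 118
    return run(app)                                             # step 120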
Automatic application upgrade during execution
In one embodiment, as shown in FIG. 6, applications or extensions may be downloaded to the wireless devices 12, 18, 20, 22 while a previous or different version of the application or extension is executing on the wireless devices 12, 18, 20, 22. The application or extension can be requested and downloaded by the device user, by a process running on the wireless device 12, 18, 20, 22, or by a server 16, 30, 32. Note that an "upgrade" does not necessarily mean a later version, but can refer to any different version. For example, it may be preferable to revert to an older version of the application, in which case the "upgrade" includes downloading a previous version of the application in place of the existing, later version.
(Hereafter, only "applications" will be referred to, but it should be understood that this description also applies to extensions. Extensions include programs, components, or services that are used by data or instructions to assist in execution. For example, extensions can include a Java virtual machine installed on a wireless device for executing Java programs under the BREW environment, or an MPEG player installed for use with MPEG files.) In addition, applications include not only any executable type of software, but also scripts or content data.
The process of this embodiment includes step 600, where a request is made to download an application upgrade to the wireless device 12, 18, 20, 22, which may have a different or previous version of the application executing on the handset. In step 602, a check is performed to determine whether a different or previous version of the requested application is executing on the wireless device 12, 18, 20, 22. If not, then at step 604 the requested application upgrade may simply overwrite the existing application at the file location on the device where the existing (i.e., previous or different) application is located.
However, if it is determined at step 602 that the application is executing, then the process continues at step 606, where the upgraded application is stored in a temporary location. This temporary location may be on the device, or it may be on a peripheral device or another location on the network accessible to the device.
At step 608, the executing application is notified that it needs to terminate, and the application, or a secondary process, carries out the termination of the application. After the application terminates, the upgraded application is transferred to the file location of the existing application in step 610. Note that the location to which the upgraded application is copied may be anywhere the system expects to find an application to execute. For example, in one embodiment the system first looks in an upgrade location for an upgraded application to execute and, if none exists there, looks in another location to execute the existing application; the process then copies the upgraded application to the upgrade location without overwriting the existing application.
After the upgraded application is transferred from the temporary location to the correct location (whether overwriting the existing application location or some other desired location as described above), the application restarts at step 612. Note that the application can restart automatically. It should also be noted that the device does not need to be reset, rebooted, or restarted, and need not perform any other reset-type function, for an existing application to be upgraded while it is executing.
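As an illustration of the FIG. 6 sequence, the sketch below stages the upgrade in a temporary location beside the running version and swaps it in only after termination. The ".upgrade" path convention and the fetch_upgrade, is_executing, terminate, and start callables are assumptions made for the example, not part of the disclosure.

import os

def upgrade_application(app_path, fetch_upgrade, is_executing, terminate, start):
    if not is_executing(app_path):       # step 602: is the old version running?
        fetch_upgrade(dest=app_path)     # step 604: overwrite it in place
    else:
        tmp = app_path + ".upgrade"      # step 606: temporary location (could
        fetch_upgrade(dest=tmp)          #   also be a peripheral/network store)
        terminate(app_path)              # step 608: notify the app; terminate it
        os.replace(tmp, app_path)        # step 610: overwrite the old version
    start(app_path)                      # step 612: restart, possibly automatically

The FIG. 7 variant described next differs chiefly in ordering: the download into the temporary location happens before the executing check rather than after it.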
In another embodiment of the application upgrade, as shown in FIG. 7, the requested application is downloaded to a temporary location of the device at step 606, before step 602. After the download, the device is then checked at step 602 to determine whether a previous or different version of the application is executing on the device. If so, execution of the existing application is terminated at step 608, the upgraded application is copied from the temporary storage location to the correct storage location (e.g., overwriting the existing application) at step 610, and the upgraded application is then executed at step 612. As mentioned above, this execution can be performed automatically.
If it is determined at step 602 that the existing application is not executing, then at step 700 the upgraded application is transferred to the correct location, for example overwriting the existing application. In this case, because the application is not executing, there is no need to terminate the existing application before transferring the upgrade to the correct location.
FIG. 8 shows one embodiment of a wireless device implementing the method as described in FIG. 6. As shown, the wireless device 800 includes a memory 802, a network interface 804, a processor 806, and a bus 808. Although the memory 802 is shown as RAM, other embodiments include, for example, a memory 802 of any known type of memory that provides storage for the configured logic. In addition, although the memory 802 is shown as one contiguous unit of one type of memory, other embodiments use multiple locations and multiple types of memory as the memory 802. The network I/O interface 804 provides inputs and outputs to devices coupled to the network via the bus 808. The processor 806 operates in accordance with instructions and data provided via the bus 808. The processor 806 is part of an ASIC 52 in at least one embodiment.
Located in the memory 802 are: logic 810 for receiving a request to replace a resident executable application with a different version of the application; logic 812 for detecting the actual execution of the resident executable application; logic 814 for receiving the different version of the application via a network; logic 816 for storing the different version of the application in a temporary location in response to detecting the actual execution of the resident executable application; logic 818 for terminating the actual execution of the resident executable application; logic 820 for overwriting the application with the different version of the resident executable application stored in the temporary location; and logic 822 for starting the actual execution of the different version of the application. In one or more different embodiments, the logic 810 for receiving a request to replace a resident executable application with a different version of the application is further modified such that: the request is initiated by the wireless device (824); the request is initiated by detecting user input (826); the different version of the application is a previous version (828); and/or the application is at least one of an extension, a script, and content data (830).
And, in one embodiment, the logic 816 for storing the different version of the application in a temporary location in response to detecting the actual execution of the resident executable application is further modified such that the temporary location is on at least one of a peripheral device and a remote network location (832).
FIG. 9 shows one embodiment of a wireless device implementing the method as described in FIG. 7. As shown, the wireless device 900 includes a memory 902, a network interface 904, a processor 906, and a bus 908. Although the memory 902 is shown as RAM, other embodiments include, for example, a memory 902 of any known type of memory that provides storage for the configured logic. In addition, although the memory 902 is shown as one contiguous unit of one type of memory, other embodiments use multiple locations and multiple types of memory as the memory 902. The network I/O interface 904 provides inputs and outputs to devices coupled to the network via the bus 908. The processor 906 operates in accordance with instructions and data provided via the bus 908. In at least one embodiment, the processor 906 is part of an ASIC 52.
Located in the memory 902 are: logic 910 for receiving a request to replace a resident executable application with a different version of the application; logic 912 for receiving the different version of the application via a network; logic 914 for storing the different version of the application in a temporary location in response to receiving the request to replace the resident executable application with the different version; logic 916 for detecting the actual execution of the resident executable application; logic 918 for terminating the actual execution of the resident executable application; logic 920 for overwriting the application with the different version of the resident executable application stored in the temporary location; and logic 922 for initiating the actual execution of the different version of the application. In one or more different embodiments, the logic 910 for receiving a request to replace a resident executable application with a different version of the application is further modified such that: the request is initiated by the wireless device (924); the request is initiated by detecting user input (926); the different version of the application is a previous version (928); and/or the application is at least one of an extension, a script, and content data (930). And, in one embodiment, the logic 914 for storing the different version of the application in a temporary location in response to receiving the request is further modified such that the temporary location is on at least one of a peripheral device and a remote network location (932).
FIG. 10 shows an embodiment of a wireless device implementing a version of the automatic application upgrade process, described above as including a continuous search of execution locations when starting execution of the preferred application. As shown, the wireless device 1000 includes a memory 1002, a network interface 1004, a processor 1006, and a bus 1008. Although the memory 1002 is shown as RAM, other embodiments include, for example, a memory 1002 of any known type of memory that provides storage for the configured logic.
In addition, although the memory 1002 is shown as one contiguous unit of one type of memory, other embodiments use multiple locations and multiple types of memory as the memory 1002. The network I/O interface 1004 provides inputs and outputs to devices coupled to the network via the bus 1008. The processor 1006 operates in accordance with instructions and data provided via the bus 1008. In at least one embodiment, the processor 1006 is part of an ASIC 52.
Located in the memory 1002 are: logic 1010 for receiving a request to replace a resident executable application with a different version of the application; logic 1012 for receiving the different version of the application via a network; logic 1014 for storing the different version of the application in an upgrade location in response to receiving the request to replace the resident executable application with the different version; logic 1016 for detecting the actual execution of the resident executable application in an actual application location; logic 1018 for terminating the actual execution of the resident executable application; and logic 1020 for initiating execution of the first application detected in a continuous search of the upgrade location and the actual application location. In one or more different embodiments, the logic 1010 for receiving a request to replace a resident executable application with a different version of the application is further modified such that: the request is initiated by the wireless device (1022); the request is initiated by detecting user input (1024); the different version of the application is a previous version (1026); and/or the application is at least one of an extension, a script, and content data (1028). And, in one embodiment, the logic 1014 for storing the different version of the application in the upgrade location in response to receiving the request is further modified such that the upgrade location is on at least one of a peripheral device and a remote network location (1030).
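The continuous search that logic 1020 performs can be illustrated with a short sketch. The two paths and the launch callable are hypothetical; only the search order, the upgrade location first and then the actual application location, comes from the description above.

import os

UPGRADE_LOCATION = "/apps/upgrade/chess.mod"   # hypothetical example paths
ACTUAL_LOCATION = "/apps/chess/chess.mod"

def start_first_found(launch):
    # The first application detected in the ordered search is executed, so a
    # staged upgrade wins without the existing copy ever being overwritten.
    for path in (UPGRADE_LOCATION, ACTUAL_LOCATION):
        if os.path.exists(path):
            return launch(path)
    raise FileNotFoundError("no version of the application is installed")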
In view of the inventive method, the invention includes a program residing in a computer-readable medium, wherein the program directs a wireless device having a computer platform to perform the inventive method steps. The computer-readable medium may be the memory 56 of the computer platform 50 of the cellular telephone 12 (or other wireless device), or may be a local database, such as the local database 58 of the cellular telephone 12. In addition, the computer-readable medium may be a secondary storage medium that can be loaded onto the wireless device computer platform, such as a magnetic disk or magnetic tape, an optical disk, a hard drive, a flash memory, or other storage media known in the art. In the case of FIGS. 4 and 5, the inventive method may be implemented, for example, by operating portions of the wireless network 14 to execute a sequence of machine-readable instructions. These instructions can reside in various types of signal-bearing media. The signal-bearing media may include, for example, a RAM (not shown) accessible by, or residing in, a component of the wireless network 14. Whether contained in RAM, on a disk, or in another secondary storage medium, the instructions may be stored on a variety of machine-readable data storage media, such as DASD storage (e.g., a conventional "hard disk" or a RAID array), magnetic tape, electronic read-only memory (e.g., ROM, EPROM, or EEPROM), optical storage devices (e.g., CD-ROM, WORM, DVD, or digital optical tape), paper "punch" cards, or other suitable data storage media, including transmission media (e.g., digital and analog).
Although the foregoing disclosure shows illustrative embodiments of the invention, it should be noted that various changes and modifications could be made herein without departing from the scope of the invention as defined by the appended claims. In addition, although elements of the present invention may be described or claimed in the singular, the plural is also encompassed unless limitation to the singular is explicit.
Various embodiments comprise prioritizing frequency allocations in thermally- or power-constrained computing devices. Compute elements may be assigned 'weights' based on their priorities. The compute elements with higher weights may receive higher frequency allocations to ensure that their higher-priority processing completes more quickly. The compute elements with lower weights may receive lower frequency allocations and suffer a slowdown in their processing. Elements with the same weight may be grouped together for the purpose of frequency allocation.
CLAIMS:
1. A frequency control device to control frequency allocation to multiple compute elements in a processing system, the device configured to: determine, for each element i, a minimum frequency f_i at which the element will meet minimum performance levels; determine, for each element i, a maximum frequency F_i at which the element will meet desired performance levels without exceeding the desired performance levels; determine an aggregate frequency budget for all elements combined; if the aggregate frequency budget is less than an aggregate of minimum frequencies for all elements combined, distribute less than each element's minimum frequency to each corresponding element; and if the aggregate frequency budget is greater than an aggregate of maximum frequencies for all elements combined, distribute each element's maximum frequency to each corresponding element.
2. The frequency control device of claim 1, wherein: if the frequency budget is greater than the aggregate of minimum frequencies for all elements combined and less than the aggregate of maximum frequencies for all elements combined, the device is further configured to sort all elements by priority, place elements with like priority in a group, and assign a group frequency budget to each group.
3. The frequency control device of claim 2, further configured to: select a highest priority group; distribute the assigned frequency budget for the highest priority group to the elements in the highest priority group; wherein said distributing the assigned frequency budget for the highest priority group includes allocating, to each element in the highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the highest priority group.
4. The frequency control device of claim 3, wherein the predetermined portion of the assigned frequency budget is to be based on the relative minimum frequencies for all elements in the highest priority group.
5. The frequency control device of claim 3, further configured to determine a remaining aggregate frequency budget for all remaining elements after distributing frequencies to the elements in the highest priority group.
6. The frequency control device of claim 3, further configured to: select the second highest priority group; distribute the assigned frequency budget for the second highest priority group to the elements in the second highest priority group; wherein said distributing the assigned frequency budget for the second highest priority group includes allocating, to each element in the second highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the second highest priority group.
7. The frequency control device of claim 1, further comprising the multiple compute elements coupled to the device.
8. A method of controlling frequency allocation to multiple compute elements in a processing system, the method comprising: determining, for each element i, a minimum frequency f_i at which the element will meet minimum performance levels; determining, for each element i, a maximum frequency F_i at which the element will meet desired performance levels without exceeding the desired performance levels; determining an aggregate frequency budget for all elements combined; if the aggregate frequency budget is less than an aggregate of minimum frequencies for all elements combined, distributing less than each element's minimum frequency to each corresponding element; and if the aggregate frequency budget is greater than an aggregate of maximum frequencies for all elements combined, distributing each element's maximum frequency to each corresponding element.
9. The method of claim 8, wherein: if the frequency budget is greater than the aggregate of minimum frequencies for all elements combined and less than the aggregate of maximum frequencies for all elements combined, sorting all elements by priority, placing elements with like priority in a group, and assigning a group frequency budget to each group.
10. The method of claim 9, further comprising: selecting a highest priority group; distributing the assigned frequency budget for the highest priority group to the elements in the highest priority group; wherein said distributing the assigned frequency budget for the highest priority group includes allocating, to each element in the highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the highest priority group.
11. The method of claim 10, wherein the predetermined portion of the assigned frequency budget is based on the relative minimum frequencies for all elements in the highest priority group.
12. The method of claim 10, further comprising determining a remaining aggregate frequency budget for all remaining elements after distributing frequencies to the elements in the highest priority group.
13. The method of claim 10, further comprising: selecting the second highest priority group; distributing the assigned frequency budget for the second highest priority group to the elements in the second highest priority group; wherein said distributing the assigned frequency budget for the second highest priority group includes allocating, to each element in the second highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the second highest priority group.
14. A computer-readable non-transitory storage medium that contains instructions, which when executed by one or more processors result in performing the operations of method claims 8-13.
15. A frequency control device to control frequency allocation to multiple compute elements in a processing system, the device having means to perform the operations of method claims 8-13.
16. The frequency control device of claim 15, further comprising means for multiple compute elements coupled to the device.
HYBRID PRIORITIZED RESOURCE ALLOCATION IN THERMALLY- OR POWER-CONSTRAINED COMPUTING DEVICES
CROSS REFERENCE TO RELATED APPLICATIONS
[0000] This application is derived from US application serial number 15/866,425, filed January 9, 2018, and claims priority to that date.
TECHNICAL FIELD OF THE INVENTION
[0001] Various embodiments of the invention relate to controlling power in thermally- or power-constrained computing devices, with the intention of allocating greater power to higher-priority components.
BACKGROUND
[0002] As transistor density has been increasing, power density has also been increasing. Thermal solutions and thermal packaging are sometimes designed such that, when all components run at maximum frequency under worst-case power conditions, thermal design limits may be exceeded. This has previously been addressed by monitoring temperature and power consumption, and then actuating throttling mechanisms to reduce temperature or power consumption when conditions approach or exceed thermal design limits.
[0003] Throttling typically takes the form of reducing frequency, which unfortunately reduces performance as well. Reducing frequency uniformly, however, runs the risk of applying this power reduction technique throughout the system, even in areas where it is not needed, and thereby reducing the performance of all areas. Past attempts at applying frequency reduction non-uniformly have typically been based on criteria such as the power efficiency of different elements. None of these approaches addresses the goal of keeping the most important functions running at high speed.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] Some embodiments of the invention may be better understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:
[0005] Fig. 1 shows a group of compute elements that make up a system, according to an embodiment of the invention.
[0006] Fig. 2 shows a diagram of a computing device, according to an embodiment of the invention.
[0007] Figs. 3A and 3B show a flow diagram of a method of distributing frequency to multiple compute elements, according to an embodiment of the invention.
DETAILED DESCRIPTION
[0008] In the following description, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
[0009] References to "one embodiment", "an embodiment", "example embodiment", "various embodiments", etc., indicate that the embodiment(s) of the invention so described may include particular features, structures, or characteristics, but not every embodiment necessarily includes the particular features, structures, or characteristics. Further, some embodiments may have some, all, or none of the features described for other embodiments.
[0010] In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" is used to indicate that two or more elements are in direct physical or electrical contact with each other.
"Coupled" is used to indicate that two or more elements co-operate or interact with each other, but they may or may not have intervening physical or electrical components between them.
[0011] As used in the claims, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc., to describe a common element merely indicates that different instances of like elements are being referred to, and is not intended to imply that the elements so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
[0012] Various embodiments of the invention may be implemented fully or partially in software and/or firmware. This software and/or firmware may take the form of instructions contained in or on a non-transitory computer-readable storage medium. The instructions may be read and executed by one or more processors to enable performance of the operations described herein. The medium may be internal or external to the device containing the processor(s), and may be internal or external to the device performing the operations. The instructions may be in any suitable form, such as but not limited to source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. Such a computer-readable medium may include any tangible non-transitory medium for storing information in a form readable by one or more computers, such as but not limited to read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory, etc.
[0013] Fig. 1 shows a group of compute elements that make up a system, according to an embodiment of the invention. In some embodiments, each compute element may be considered a separate core in a multi-core computer system. In other embodiments, each compute element may be a customized circuit with a specific purpose. Still other embodiments may take other forms. In some embodiments, some or all of the various compute elements may communicate with each other or be coupled to each other. In many embodiments, the clock frequency of each compute element may be independent of the clock frequency of other compute elements. Within this document, since power consumption is derived from clock frequency, the concept of a power budget may be equated to a frequency budget. For purposes of brevity, the term 'element' may be used interchangeably with 'compute element'.
[0014] The clock frequencies of each of the various elements may be controlled in various ways. In the example shown, each element may have its frequency controlled by frequency controller 110. In some embodiments, this may be accomplished by having the clock signal(s) provided by frequency controller 110, by sending indicator(s) of each element's frequency to that element, or by other means. In some embodiments, each element's frequency may be communicated to that element through normal communication channels. As with most digital logic, heat generation may increase with increasing clock frequency.
[0015] Fig. 2 shows a diagram of a computing device, according to an embodiment of the invention. Device 200 may include modules such as, but not limited to, processor 202, memories 204 and 206, sensors 228, network interface 220, graphics display device 210, alphanumeric input device 212 (such as a keyboard), user interface navigation device 214, storage device 216 containing a machine-readable medium 222, power management device 232, and output controller 234.
Instructions 224 may be executed to perform the various functions described in this document. The instructions are shown in multiple memories, though this is not a requirement. Communications network 226 may be a network external to the device, through which the device 200 may communicate with other devices. Multiple computer devices 200 may be part of a computer system. In some embodiments, some or all of the components of computer device 200 may be in a compute element.
[0016] In the following descriptions, these terms are defined:
N - the total number of elements across which power is being distributed.
i - an index identifying one of elements 1 through N.
f_i - the minimum frequency that may be used for element i when a minimum performance level must be met.
F_i - the maximum desired frequency for element i. That is, any higher frequency would achieve performance for element i beyond the performance level desired. Cutting off at this frequency allows other, higher-priority elements to use the additional available performance.
w_i - weight: a unitless measure of the relative priority of compute element i. In general, higher weights indicate higher priority.
W_MAX - the maximum weight that any compute element may be assigned.
N_w - the number of compute elements with a particular weight w.
bucket - a conceptual term indicating a group of all the elements that have been assigned the same weight w.
b_w - the combined frequency budget granted to all compute elements with weight w. Since power consumption and heat generation are the ultimate resources of concern, and both may be proportional to frequency, the term 'frequency budget' may be used in the same manner as 'power budget' or 'energy budget' in other descriptions.
B - the aggregate frequency budget granted to all compute elements combined, in all buckets.
A_w - the 'aggregate frequency headroom' of weight w; that is, the collective frequency that all elements in a bucket could possibly utilize beyond the minimum requested frequency of every element in that bucket, before being clipped due to the maximum frequency F_i of each element. In other words, A_w equals the sum of (F_i - f_i) over all the elements in that bucket. All of this headroom might not be usable; it may be limited by the budget b_w available to that bucket.
P_i - the performance (frequency) of compute element i that is output by the algorithm.
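For illustration only, the per-element terms above map naturally onto a small record type; the Python names below are assumptions that mirror f_i, F_i, w_i, and each element's contribution to A_w.

from dataclasses import dataclass

@dataclass
class ComputeElement:
    min_freq: float   # f_i: lowest frequency meeting minimum performance
    max_freq: float   # F_i: frequency beyond which more speed is not desired
    weight: int       # w_i: unitless relative priority (higher = more important)

    @property
    def headroom(self) -> float:
        # This element's contribution to A_w for its bucket: F_i - f_i.
        return self.max_freq - self.min_freq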
In some embodiments, weights of a higher value indicate a higher priority for the corresponding elements, while weights of a lower value indicate a lower priority for the corresponding elements. In some embodiments, the values assigned to the various weights may be proportional to their relative priorities (not just rank ordered). In other embodiments, proportionality may not matter. Although shown at a particular place in the flow, the assignment of weights to elements may occur earlier than shown. In Figs. 3A and 3B, the values of weights w_i and frequencies f_i and F_i are all shown at certain points in the flow. But in some embodiments, these values may be redefined at other times.

[0019] At 325, it may be determined if the total frequency budget B is sufficient to provide the minimum requested frequency f_i to every element i in the system. If there is insufficient power for this, the shortfall of available frequency may be distributed in various ways at 330. For example, in some embodiments the frequency shortage may be distributed in proportion to the minimum frequency request of each element. In other embodiments, some elements may be shut off completely and their intended frequency made available to the remaining elements. Other techniques may also be used. In any case, operation 330 indicates that some or all of the elements will not receive their requested minimum frequency f_i.

[0020] If B is sufficient to meet the minimum requested f_i for all elements in the system, it may be determined at 335 if B is sufficient to meet the maximum requested F_i for all elements in the system. If B meets or exceeds this amount, every element may be allocated its F_i at 340 and any remaining frequency for all elements combined may go unused. But if B is sufficient to supply the aggregate of all f_i, while insufficient to supply the aggregate of all F_i, then the flow may proceed to 345 to determine how to distribute the available frequency budget among the various elements.

[0021] Continuing flow diagram 300 at 345 in Fig. 3B, the various elements may be sorted by the weights they were assigned at 320, and placed into groups (virtual ‘buckets’), where every element in a bucket has the same weight. For discussion purposes, each bucket may now be referred to as ‘bucket w’, indicating a bucket with elements which all have weight w. After bucketizing the elements at 345, the flow may continue at 350 by setting w = W_max, which effectively selects the bucket containing the highest priority elements.

[0022] Within this bucket, the aggregate headroom for all the elements in the bucket may be determined at 355. In some embodiments, this may be determined by subtracting f_i from F_i for each element in the bucket, and summing the results for all the elements in that bucket. This sum represents the aggregate headroom for this bucket, that is, the maximum amount of frequency that may be allocated among all the elements in this bucket based solely on F_i and f_i. However, the budget b_w may be smaller than that maximum amount and therefore limit that allocation of frequency. Therefore at 360, the frequency allocated to the elements of this bucket may be the lesser of 1) the budget b_w assigned to this bucket, and 2) the aggregate headroom for this bucket. The budget b_w assigned to this bucket may be determined in various ways, such as but not limited to scaling the total available budget B by the relative weight of bucket w to the weight of all remaining buckets (for example, b_w = B*N_w*w/(sum(N_i*i)), with the sum taken over all buckets with weight i ≤ w).
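Operations 325 through 375, including the b_w proration just described and the within-bucket distribution discussed immediately below, can be condensed into a short sketch. The following builds on the Element model above and is one possible reading rather than the patent's mandated implementation: it spreads any shortfall in proportion to the minimum requests (one of the options at 330), prorates within a bucket by each element's headroom (one of the three options listed below), and computes b_w from the budget remaining above the granted minimums:

```python
def distribute_budget(elements: list[Element], budget: float) -> dict[int, float]:
    """Sketch of flow diagram 300 (FIGS. 3A and 3B); returns {i: P_i}."""
    sum_min = sum(e.f_min for e in elements)
    sum_max = sum(e.f_max for e in elements)

    if budget < sum_min:
        # 330: budget B cannot cover the minimums; spread the shortfall
        # in proportion to each element's minimum request f_i.
        return {e.ident: e.f_min * budget / sum_min for e in elements}
    if budget >= sum_max:
        # 340: every element receives its maximum F_i; the rest goes unused.
        return {e.ident: e.f_max for e in elements}

    # 345: grant every minimum, then bucketize the elements by weight.
    alloc = {e.ident: e.f_min for e in elements}
    remaining = budget - sum_min
    buckets: dict[int, list[Element]] = defaultdict(list)
    for e in elements:
        buckets[e.weight].append(e)

    # 350-375: walk the buckets from highest weight (priority) downward.
    for w in sorted(buckets, reverse=True):
        group = buckets[w]
        # 355: aggregate headroom A_w = sum of (F_i - f_i) for this bucket.
        a_w = sum(headroom(e) for e in group)
        # b_w = B * N_w * w / sum(N_i * i), summed over buckets with i <= w.
        denom = sum(len(buckets[i]) * i for i in buckets if i <= w)
        b_w = remaining * len(group) * w / denom if denom else 0.0
        # 360: grant the lesser of the bucket budget and its headroom,
        # prorated here by each element's share of that headroom.
        grant = min(b_w, a_w)
        if a_w > 0.0:
            for e in group:
                alloc[e.ident] += grant * headroom(e) / a_w
        # 365: reduce the remaining budget by what was just handed out.
        remaining -= grant
        if remaining <= 0.0:
            break  # 370/380: all frequency budget has been allocated
    return alloc
```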
In some embodiments, the distribution of frequency among the elements in this bucket may be prorated based on various parameters, such as but not limited to: 1) the relative values of f_i, 2) the relative values of F_i, or 3) the relative headroom for each element.

[0023] Once the allocated budget has been distributed to the elements in this bucket at 360, the remaining overall budget B for the system may be determined at 365. In some embodiments, this may be determined by subtracting the budget just allocated from the previous budget. At 370, if all buckets have been processed as described in operations 355-365, then the process may end at 380. But if not, the next lower priority bucket may be selected at 375, and the flow may return to 355 to distribute the frequency budget within that bucket. This may continue until all buckets have been processed, or until all frequency budget has been allocated.

[0024] EXAMPLES

The following examples pertain to particular embodiments:

[0025] Example 1 includes a frequency control device to control frequency allocation to multiple compute elements in a processing system, the device configured to: determine, for each element i, a minimum frequency f_i at which the element will meet minimum performance levels; determine, for each element i, a maximum frequency F_i at which the element will meet desired performance levels without exceeding the desired performance levels; determine an aggregate frequency budget for all elements combined; if the aggregate frequency budget is less than an aggregate of minimum frequencies for all elements combined, distribute less than each element’s minimum frequency to each corresponding element; and if the aggregate frequency budget is greater than an aggregate of maximum frequencies for all elements combined, distribute each element’s maximum frequency to each corresponding element.

[0026] Example 2 includes the frequency control device of example 1, wherein: if the frequency budget is greater than the aggregate of minimum frequencies for all elements combined and less than the aggregate of maximum frequencies for all elements combined, the device is further configured to sort all elements by priority, place elements with like priority in a group, and assign a group frequency budget to each group.

[0027] Example 3 includes the frequency control device of example 2, further configured to: select a highest priority group; distribute the assigned frequency budget for the highest priority group to the elements in the highest priority group; wherein said distributing the assigned frequency budget for the highest priority group includes allocating, to each element in the highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the highest priority group.

[0028] Example 4 includes the frequency control device of example 3, wherein the predetermined portion of the assigned frequency budget is to be based on the relative minimum frequencies for all elements in the highest priority group.

[0029] Example 5 includes the frequency control device of example 3, further configured to determine a remaining aggregate frequency budget for all remaining elements after distributing frequencies to the elements in the highest priority group.

[0030] Example 6 includes the frequency control device of example 3, further configured to: select the second highest priority
group; distribute the assigned frequency budget for the second highest priority group to the elements in the second highest priority group; wherein said distributing the assigned frequency budget for the second highest priority group includes allocating, to each element in the second highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the second highest priority group.

[0031] Example 7 includes the frequency control device of example 1, further comprising the multiple compute elements coupled to the device.

[0032] Example 8 includes a method of controlling frequency allocation to multiple compute elements in a processing system, the method comprising: determining, for each element i, a minimum frequency f_i at which the element will meet minimum performance levels; determining, for each element i, a maximum frequency F_i at which the element will meet desired performance levels without exceeding the desired performance levels; determining an aggregate frequency budget for all elements combined; if the aggregate frequency budget is less than an aggregate of minimum frequencies for all elements combined, distributing less than each element’s minimum frequency to each corresponding element; and if the aggregate frequency budget is greater than an aggregate of maximum frequencies for all elements combined, distributing each element’s maximum frequency to each corresponding element.

[0033] Example 9 includes the method of example 8, wherein: if the frequency budget is greater than the aggregate of minimum frequencies for all elements combined and less than the aggregate of maximum frequencies for all elements combined, sorting all elements by priority, placing elements with like priority in a group, and assigning a group frequency budget to each group.

[0034] Example 10 includes the method of example 9, further comprising: selecting a highest priority group; distributing the assigned frequency budget for the highest priority group to the elements in the highest priority group; wherein said distributing the assigned frequency budget for the highest priority group includes allocating, to each element in the highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the highest priority group.

[0035] Example 11 includes the method of example 10, wherein the predetermined portion of the assigned frequency budget is based on the relative minimum frequencies for all elements in the highest priority group.

[0036] Example 12 includes the method of example 10, further comprising determining a remaining aggregate frequency budget for all remaining elements after distributing frequencies to the elements in the highest priority group.

[0037] Example 13 includes the method of example 10, further comprising: selecting the second highest priority group; distributing the assigned frequency budget for the second highest priority group to the elements in the second highest priority group; wherein said distributing the assigned frequency budget for the second highest priority group includes allocating, to each element in the second highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that
element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the second highest priority group.

[0038] Example 14 includes a computer-readable non-transitory storage medium that contains instructions, which when executed by one or more processors result in performing operations comprising: determining, for each element i in a processing system, a minimum frequency f_i at which the element will meet minimum performance levels; determining, for each element i, a maximum frequency F_i at which the element will meet desired performance levels without exceeding the desired performance levels; determining an aggregate frequency budget for all elements combined; if the aggregate frequency budget is less than an aggregate of minimum frequencies for all elements combined, distributing less than each element’s minimum frequency to each corresponding element; and if the aggregate frequency budget is greater than an aggregate of maximum frequencies for all elements combined, distributing each element’s maximum frequency to each corresponding element.

[0039] Example 15 includes the medium of example 14, wherein: if the frequency budget is greater than the aggregate of minimum frequencies for all elements combined and less than the aggregate of maximum frequencies for all elements combined, sorting all elements by priority, placing elements with like priority in a group, and assigning a group frequency budget to each group.

[0040] Example 16 includes the medium of example 14, further comprising: selecting a highest priority group; distributing the assigned frequency budget for the highest priority group to the elements in the highest priority group; wherein said distributing the assigned frequency budget for the highest priority group includes allocating, to each element in the highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the highest priority group.

[0041] Example 17 includes the medium of example 16, wherein the predetermined portion of the assigned frequency budget is based on the relative minimum frequencies for all elements in the highest priority group.

[0042] Example 18 includes the medium of example 16, wherein the operations further comprise determining a remaining aggregate frequency budget for all remaining elements after distributing frequencies to the elements in the highest priority group.

[0043] Example 19 includes the medium of example 16, wherein the operations further comprise: selecting the second highest priority group; distributing the assigned frequency budget for the second highest priority group to the elements in the second highest priority group; wherein said distributing the assigned frequency budget for the second highest priority group includes allocating, to each element in the second highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the second highest priority group.

[0044] Example 20 includes a frequency control device to control frequency allocation to multiple compute elements in a processing system, the device having means to: determine, for each element i, a minimum frequency f_i at
which the element will meet minimum performance levels; determine, for each element i, a maximum frequency F_i at which the element will meet desired performance levels without exceeding the desired performance levels; determine an aggregate frequency budget for all elements combined; if the aggregate frequency budget is less than an aggregate of minimum frequencies for all elements combined, distribute less than each element’s minimum frequency to each corresponding element; and if the aggregate frequency budget is greater than an aggregate of maximum frequencies for all elements combined, distribute each element’s maximum frequency to each corresponding element.

[0045] Example 21 includes the frequency control device of example 20, wherein: if the frequency budget is greater than the aggregate of minimum frequencies for all elements combined and less than the aggregate of maximum frequencies for all elements combined, the device is further configured to sort all elements by priority, place elements with like priority in a group, and assign a group frequency budget to each group.

[0046] Example 22 includes the frequency control device of example 21, the device further having means to: select a highest priority group; distribute the assigned frequency budget for the highest priority group to the elements in the highest priority group; wherein said distributing the assigned frequency budget for the highest priority group includes allocating, to each element in the highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the highest priority group.

[0047] Example 23 includes the frequency control device of example 22, wherein the predetermined portion of the assigned frequency budget is to be based on the relative minimum frequencies for all elements in the highest priority group.

[0048] Example 24 includes the frequency control device of example 22, the device further having means to determine a remaining aggregate frequency budget for all remaining elements after distributing frequencies to the elements in the highest priority group.

[0049] Example 25 includes the frequency control device of example 22, the device further having means to: select the second highest priority group; distribute the assigned frequency budget for the second highest priority group to the elements in the second highest priority group; wherein said distributing the assigned frequency budget for the second highest priority group includes allocating, to each element in the second highest priority group, a frequency that is greater than or equal to the minimum frequency for that element and is less than or equal to the lesser of the maximum frequency for that element and a predetermined portion of the assigned frequency budget for the second highest priority group.

[0050] Example 26 includes the frequency control device of example 20, further comprising means for the multiple compute elements coupled to the device.

[0051] The foregoing description is intended to be illustrative and not limiting. Variations will occur to those of skill in the art. Those variations are intended to be included in the various embodiments of the invention, which are limited only by the scope of the following claims.
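As an invented numeric check of the sketches above (the element values are illustrative only, not taken from the examples): with three elements of equal headroom, two at weight 2 and one at weight 1, and a budget midway between the aggregate minimum and maximum, the higher-weight bucket is served first:

```python
elems = [
    Element(1, f_min=1.0, f_max=3.0, weight=2),  # higher priority
    Element(2, f_min=1.0, f_max=3.0, weight=2),
    Element(3, f_min=1.0, f_max=3.0, weight=1),  # lower priority
]
print(distribute_budget(elems, budget=6.0))
# approximately {1: 2.2, 2: 2.2, 3: 1.6} -- the weight-2 elements receive
# more of the spare 3.0 of budget than the weight-1 element, per the
# bucket order of Fig. 3B.
```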
A buffer circuit includes a chain of a plurality of inverters. A first inverter has transistors sized to present a first predetermined capacitive loading at its input. This size is selected with regard to the target operating frequency of the driving circuit (typically a flip-flop) and the transistor size selected for that driving circuit. Each inverter has transistors a predetermined size factor greater than the transistors of the preceding inverter; the first inverter has transistors the size factor larger than those of the driving circuit. The size factor is preferably 3. The number of inverters in the chain is selected so that the last inverter has transistors sized to drive its output capacitive loading with a maximum rise and fall time corresponding to the target frequency. If the number of inverters is even, the buffer input is connected to a normal output of the driving circuit. If the number of inverters is odd, the buffer input is connected to an inverted output of the driving circuit.
What is claimed is: 1. A buffer circuit comprising: a plurality of more than two inverters each having an input and an output, said input of a first inverter being said buffer circuit input, each inverter except a last inverter having its output connected to said input of a succeeding inverter, said output of said last inverter being said buffer output; said first inverter having transistors with a size to present a first predetermined capacitive loading at its input; each inverter other than said first inverter having transistors with a predetermined size factor greater than said transistors of a preceding inverter; said last inverter having transistors with a size to drive a second predetermined capacitive loading greater than said first predetermined capacitive loading with a preselected maximum rise and fall time; a flip-flop circuit having at least one output, said flip-flop circuit having transistors with a size to drive said first predetermined capacitive loading with a preselected flip-flop maximum rise and fall time, said flip-flop circuit having a first output and a second output, said second output producing a signal which is an inverse of said first output; and wherein said input of said first inverter is connected to said first output of said flip-flop if there is an even number of said plurality of inverters and said input of said first inverter is connected to said second output of said flip-flop if there is an odd number of said plurality of inverters.
CLAIM OF PRIORITY
This application claims priority under 35 U.S.C. 119(e)(1) from U.S. Provisional Patent Application No. 60/137,599, filed Jun. 3, 1999.

TECHNICAL FIELD OF THE INVENTION
The technical field of this invention is integrated circuit design, and especially low power driver design.

BACKGROUND OF THE INVENTION
Control logic consumes a large percentage of power in today's microprocessors and microcontrollers. A significant amount of this control logic power is consumed by the storage elements. These are typically edge-triggered D flip-flops. Simulations conducted on a microcontroller family estimate that a 30% reduction in flip-flop power consumption translates to about a 6% power reduction for the entire integrated circuit. This is the greatest power reduction of all hardware techniques identified. Gated clock methodology translates to a saving of less than 2% of the total integrated circuit power. Therefore it is essential that the library of design cells for low power application specific integrated circuits (ASICs) include low power versions of the flip-flops.

SUMMARY OF THE INVENTION
A buffer circuit includes a chain of a plurality of inverters. A first inverter has transistors sized to present a first predetermined capacitive loading at its input. This size is selected with regard to the target operating frequency of the driving circuit (typically a flip-flop) and the transistor size selected for that driving circuit. Each inverter has transistors a predetermined size factor greater than the transistors of the preceding inverter; the first inverter has transistors the size factor larger than those of the driving circuit. The size factor is preferably 3. The number of inverters in the chain is selected so that the last inverter has transistors sized to drive its output capacitive loading with a maximum rise and fall time corresponding to the target frequency. If the number of inverters is even, the buffer input is connected to a normal output of the driving circuit. If the number of inverters is odd, the buffer input is connected to an inverted output of the driving circuit.

BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of this invention are illustrated in the drawings, in which:
FIG. 1 illustrates in schematic diagram form a common D flip-flop circuit of the prior art;
FIG. 2 illustrates in schematic diagram form a common low area D flip-flop circuit of the prior art;
FIG. 3 illustrates in schematic diagram form a common low power D flip-flop circuit of the prior art;
FIG. 4 illustrates in schematic diagram form a push-pull D flip-flop circuit of the prior art;
FIG. 5 illustrates the inverter chain buffer of a first embodiment of this invention;
FIG. 6 illustrates the inverter chain buffers of a second embodiment of this invention;
FIG. 7 illustrates the construction of a representative inverter from the inverter chains of FIGS. 5 and 6;
FIG. 8 illustrates the steps in the design method of this invention;
FIG. 9 illustrates an example of an inverter chain of an even number of inverters connected to a normal flip-flop output; and
FIG. 10 illustrates an example of an inverter chain of an odd number of inverters connected to an inverted flip-flop output.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
This invention reduces power consumption of the library cells significantly at the expense of gate delay or area or both. In the low power design methodology, power consumption takes priority over gate delay.
This is the opposite of conventional design, where performance as measured by gate delay is the first priority. Low power design is especially critical to battery-powered portable systems. In battery-powered portable systems, low power consumption in the electronics translates into increased operation between battery replacement or recharging, or reduced battery weight for the same battery life.

Control logic consumes a large percentage of power in today's microprocessors and microcontrollers. A significant amount of this control logic power is consumed by the storage elements. These are typically edge-triggered D flip-flops. Simulations conducted on a microcontroller family estimate that a 30% reduction in flip-flop power consumption translates to about a 6% power reduction for the entire integrated circuit. This is the greatest power reduction of all hardware techniques identified. Gated clock methodology translates to a saving of less than 2% of the total integrated circuit power. Therefore it is essential that the library of design cells for low power application specific integrated circuits (ASICs) include low power versions of the flip-flops. A number of prior art techniques are described below.

FIG. 1 illustrates a negative edge-triggered D flip-flop 100 of the prior art. This circuit 100 consists of a master latch 110 and a slave latch 120. Master latch 110 includes an input transmission gate 111, a first inverter 112, a second inverter 113 and a feedback transmission gate 114. The input signal D is supplied to input transmission gate 111 of master latch 110. The output of the master latch supplies the input of the slave latch. The slave latch likewise includes an input transmission gate 121, a first inverter 122, a second inverter 123 and a feedback transmission gate 124. Note that the input transmission gate 111 of master latch 110 is clocked in the opposite phase from the input transmission gate 121 of slave latch 120. Thus these transmission gates conduct on opposite phases of the clock signal C. Inverter 122 of slave latch 120 generates the circuit output signal Q. An inverse Q output can be taken from the input of inverter 122. The circuit illustrated in FIG. 1 can be constructed with 16 MOSFETs. The speed of this regular D flip-flop is limited by a two-gate delay after the clock signal C transitions from logic 1 to 0. The advantage of this D flip-flop design is that it involves minimum design risk.

A common approach in the prior art to reduce the area of the regular D flip-flop 100 is to remove the two feedback transmission gates 114 and 124. FIG. 2 illustrates this low-area D flip-flop (LADFF) circuit 200. Low-area D flip-flop circuit 200 can be constructed using 12 MOSFETs, 25% fewer transistors than the D flip-flop circuit 100. This reduces the integrated circuit area needed to construct the D flip-flop. The low-area D flip-flop circuit 200 has the disadvantage of consuming more power. The strength of feedback inverters 213 and 223 can be weakened to minimize the short-circuit power dissipation due to voltage contention. This has the disadvantage of increasing the rise and fall times, thus decreasing the maximum frequency of operation. Design simulations indicate that the low-area D flip-flop circuit 200 consumes more total power and is slower than the regular D flip-flop circuit 100.
FIG. 3 illustrates another prior art approach, which optimizes the D flip-flop for power dissipation by replacing the inverter 113 and transmission gate 114 in the feedback path of the master latch 110 with a tri-state inverter. Another tri-state inverter replaces the inverter 123 and the transmission gate 124 in the feedback path of the slave latch. FIG. 3 illustrates this low-power D flip-flop circuit 300. The tri-state inverter in the master latch 310 includes P-type MOSFETs 313 and 314, and N-type MOSFETs 315 and 316. The tri-state inverter in the slave latch 320 includes P-type MOSFETs 323 and 324, and N-type MOSFETs 325 and 326. Low-power D flip-flop circuit 300 can be constructed with 16 MOSFETs, the same number as the regular D flip-flop circuit 100. Only one of MOSFETs 314 or 315 is conductive at one time, depending upon the polarity of the clock signal C. Similarly, only one of MOSFETs 324 or 325 is conductive at one time. This avoids short-circuit power dissipation in the feedback path. Design simulations indicate that this yields only a small reduction in total power and a slightly slower speed when compared to the regular D flip-flop circuit 100. Circuit simulations indicate that low-power D flip-flop circuit 300 is comparable to the regular D flip-flop circuit 100 in area and energy efficiency.

FIG. 4 illustrates a further alternative circuit. To optimize for speed, an inverter 431 and a transmission gate 432 are added between the outputs of master latch 410 and slave latch 420. This accomplishes a push-pull effect at slave latch 420. This adds four MOSFETs, but reduces the clock-to-output (C-to-Q) delay from two gates in the regular D flip-flop circuit 100 to one gate. To offset the four added MOSFETs in the push-pull circuit, the two transmission gates 114 and 124 in the feedback paths of the regular D flip-flop circuit 100 are eliminated in a manner similar to the low-area D flip-flop circuit 200. This yields a circuit employing 16 MOSFETs, the same as the regular D flip-flop circuit 100. Compared to the regular D flip-flop circuit 100, circuit simulations indicate that this push-pull D flip-flop circuit 400 is 31% faster but employs 22% more power. This increase in speed more than offsets the increase in power, yielding a circuit having a higher energy efficiency than the regular D flip-flop circuit 100.

The switching power consumed by a circuit is proportional to the effective capacitance and the switching rate. For application specific integrated circuit cells, the primary contributor to cell capacitance is the CMOS gate capacitance. Each CMOS transistor must move charge to and from the gates of succeeding CMOS transistors. The gate capacitance is directly proportional to the transistor size. Therefore reducing the transistor size should reduce switching power. This is especially true for circuits which have high switching activity, such as clock distribution nets. However, for deep submicron circuits (circuits formed with transistor feature sizes less than one micron), reduction of transistor size may not reduce the power consumed. Reduction of the transistor size tends to lengthen the voltage rise and fall times. This is because the reduced transistor size reduces the drive current available for charge transport. This increase in voltage rise and fall times thus tends to increase the proportion of short-circuit power dissipation in the circuit. Accordingly, in some instances reduction of transistor size does not reduce circuit power consumption. This is shown in Table 1.
Table 1 shows the comparative average and peak currents of two flip-flop circuits which toggle at a rate of 20% of the clock frequency. The second circuit (Reduced Size) is constructed of transistors about one third smaller than the original circuit (Original ORG). The two circuits have the same drive strength. Reducing the switching capacitance by reducing the transistor size reduces the peak current but increases the average current and the power dissipation by 15%. This is directly attributed to the increase in short-circuit power dissipation, because rise/fall times increased proportionately to the reduction in transistor size.

TABLE 1
Toggle D at 20% of clock frequency   Average current (μA)   Peak current (μA)
Original ORG                         20.2                   801
Reduced Size                         23                     588

This invention is a design technique for building low power versions of library cells in deep sub-micron technology. The technique presents a "pseudo-load" to the flip-flop which is orders of magnitude less than the real load on the logic portion of the cell. Therefore the transistors in the logic portion of the cell can be reduced in size without a corresponding increase in rise/fall times on the nets. For these flip-flops the short-circuit power dissipation remains unchanged, but switching power is reduced due to a decrease in the gate capacitances. The net effect is a decrease in the total power dissipation. Alternatively, one can reduce short-circuit power dissipation without resizing the transistors by using an appropriate pseudo-load. This maintains the switching power constant, because the rise/fall times are improved when driving the pseudo-load. The penalty for this better power efficiency is the increase in delay due to the inverter chain.

FIG. 5 illustrates the inverter chain buffer of this invention. Flip-flop 500 represents the logic output driving the inverter chain buffer. In this example the Q output of flip-flop 500 is buffered. The output transistors of flip-flop 500 are constructed with a predetermined channel width W, selected to provide the desired gate capacitance loading C_IN to prior circuits within flip-flop 500. First inverter 501 is constructed using transistors having a channel width a predetermined factor k greater than the channel width of the output circuit of flip-flop 500. Thus inverter 501 includes transistors having a channel width of kW, as illustrated in FIG. 5. The output of inverter 501 drives the input of inverter 502. Inverter 502 includes transistors having a channel width a factor k greater than the channel width of inverter 501. Thus inverter 502 includes transistors having a channel width of k^2W, as illustrated in FIG. 5. Inverter 503, having an input from the output of inverter 502, includes transistors having a channel width a factor k greater than the channel width of inverter 502, thus having a channel width of k^3W, as illustrated in FIG. 5. Each succeeding inverter has a channel width a factor of k greater than the preceding inverter, up to the Nth inverter 508, which has transistors of channel width k^NW, as illustrated in FIG. 5. This transistor channel width is sufficient to drive an output capacitive loading C_L with a desired rise and fall time. The number of inverters in the chain is selected based upon the input capacitive loading C_IN, the output capacitive loading C_L and the size factor k.
This enables the flip-flop 500 to have transistors with channel widths sized to drive the input capacitive loading C_IN with a selected rise and fall time, while permitting the inverter chain buffer to drive the output capacitive loading C_L without degradation of the rise and fall times.

It should be apparent to one skilled in the art that if the number of inverters in the inverter chain is odd, then the logic state at the output can be maintained by taking the inverter chain input from a Q̄ output of flip-flop 500. An inverted Q̄ output can be obtained from the input of inverter 122 of D flip-flop 100 illustrated in FIG. 1, the input of inverter 222 of low area D flip-flop 200 of FIG. 2, the input of inverter 322 of low power tri-state D flip-flop 300 of FIG. 3, and the input of inverter 422 of push-pull D flip-flop 400 illustrated in FIG. 4.

The value of the input capacitance C_IN presented to the driving device can be made as small as desired by choosing an appropriate number of inverters in the inverter chain. For each stage the input capacitive load is a factor of k less than the output capacitive load for the same rise and fall times. The rise and fall times have the following ratio to the capacitive load:

T_R/F = R * C / w

where: T_R/F is the rise and fall time of the circuit; R is a proportionality constant; C is the capacitive load presented; and w is the channel width of the transistor driving the load. Accordingly, for any circuit that sees its load decreased by a factor of k, the channel width of the transistor driving this load can be correspondingly reduced by a factor of k without a degradation in rise/fall time.

The size factor k is selected to provide a balance between the load presented to the preceding inverter in the chain and the drive capacity. Inverters with large transistors can drive large capacitive loads due to the large conductance of their channels. However, large transistors also have large gates which present a large input capacitive load. The size factor k between the transistor widths of successive stages in the inverter chain is preferably 3. This ratio reduces the extra delay due to the inverter chain to a bare minimum. Experimental results show that the power-delay product of the newly designed cells is consistently superior to that of the regular D flip-flop 100 illustrated in FIG. 1.

FIG. 6 illustrates an alternative use of this invention. Logic function 600 includes at least two inputs In1 and In2 and at least two outputs Out1 and Out2. A first inverter chain consisting of inverters 601, 602, 603 through 608 is coupled to the first output Out1. As shown in FIG. 5, each inverter in this first inverter chain is constructed of transistors having a channel width a factor of k larger than the preceding inverter. A second, similarly constructed inverter chain consisting of inverters 611, 612, 613 through 618 is coupled to the second output Out2. Each of these inverter chains presents a suitably reduced capacitive load to the corresponding output Out1 or Out2. Therefore the channel widths of the transistors of the drive circuits for outputs Out1 and Out2 may be sized to produce a desired rise and fall time into this reduced load. Alternatively, the channel widths may be retained with a consequent decrease in rise and fall times. Such a reduction in switching times would permit higher frequency operation.

FIG. 7 illustrates an example of the construction of inverters 501 and 502 illustrated in FIG. 5.
Inverter 501 is of conventional design and includes P-channel transistor 711 and N-channel transistor 712. Each of these transistors has a channel width of W1. Inverter 502 similarly includes P-channel transistor 721 and N-channel transistor 722. Each of these transistors has a larger channel width of W2. In accordance with this invention, W2 = k*W1.

This invention exploits the idea that circuits with the highest switching rates should have the lowest capacitive loading. This tends to reduce the power consumed in charge transport. However, it is not helpful to merely reduce transistor sizes to reduce the gate capacitance. The reduced transistor size tends to increase the circuit rise and fall times. In the inverter circuits illustrated in FIG. 7, an increase in rise and fall times increases the proportion of the cycle when both transistors are conducting. This causes increased short-circuit power consumption. In this invention capacitive loads can be redistributed so that circuits with the lowest switching rates experience the greatest capacitive loading. This frees integrated circuit area which can be devoted to the inverter chain of the present invention, permitting reduced transistor size on the fastest switching circuits while maintaining suitably short rise and fall times. A net reduction in power consumption results because the circuits experiencing the highest capacitive loading have the lowest switching rates.

FIG. 8 illustrates the steps in buffer design method 800 of this invention. The design method begins at start block 801. Initially the target frequency of operation is selected (processing block 802). Note that the frequency of operation implies a maximum rise and fall time. Next, the transistor size of the circuit is selected (processing block 803). These selections enable calculation of the maximum capacitive load that these circuits can drive (processing block 804). Note that the frequency selection determines the longest rise and fall times permitted, and the transistor size selection determines the channel width. Thus the maximum capacitive loading permitted can be determined from the equation T_R/F = R * C / w above. Next the output load is determined (processing block 805). The number and type of circuits driven by the buffer determine this capacitive load. Next the size factor k is determined (processing block 806). As previously described, the size factor k is preferably 3; however, other factors are feasible. Next, the number of inverters needed to transform from a capacitive loading of C_IN to C_L is computed (processing block 807). Note that the ratio of the output capacitive loading C_L to the input capacitive loading C_IN is proportional to k^N, where N is the number of inverters. This relationship permits calculation of the number of inverters needed. The method determines if this calculated number of inverters is even (decision block 808). If so, then the input of the inverter chain is connected to a normal output of the driving circuit (processing block 809). This is illustrated in FIG. 9, where the normal output of flip-flop 900 drives a chain of four inverters 901, 902, 903 and 904. If not, then the input of the inverter chain is connected to an inverted output of the driving circuit (processing block 810). This is illustrated in FIG. 10, where an inverted output of flip-flop 1000 drives a chain of three inverters 1001, 1002 and 1003. These normal and inverted outputs correspond to the Q and Q̄ outputs of the flip-flops illustrated in FIGS. 1 to 4.
The method is then complete (end block 811).
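The arithmetic of design method 800 is compact enough to sketch in code. The following Python fragment is a minimal illustration under stated assumptions, not part of the patent: it relies on the relation T_R/F = R * C / w given above (so that the drivable load ratio C_L/C_IN grows as k^N), and the function name design_buffer_chain and the ceiling rounding of the stage count are choices introduced here:

```python
import math

def design_buffer_chain(c_in: float, c_load: float, k: float = 3.0):
    """Sketch of design method 800 (FIG. 8).

    c_in   -- capacitive load the first inverter may present (C_IN), set by
              the driving flip-flop's transistor size and the target rise/fall
              time via T_R/F = R * C / w (blocks 802-804).
    c_load -- capacitive load the last inverter must drive (C_L, block 805).
    k      -- per-stage size factor (block 806); 3 is the preferred value.
    """
    # Block 807: C_L / C_IN is proportional to k^N, so N = ceil(log_k(C_L / C_IN)).
    n = max(1, math.ceil(math.log(c_load / c_in, k)))
    widths = [k ** i for i in range(1, n + 1)]  # stage widths kW ... k^N W, in units of W

    # Blocks 808-810: an even-length chain is non-inverting, so it taps the
    # normal Q output; an odd-length chain taps the inverted output instead.
    tap = "Q" if n % 2 == 0 else "Q-bar"
    return n, widths, tap

# Example: a load 100x the permitted input loading needs ceil(log3(100)) = 5
# inverters, so the chain taps the inverted flip-flop output, as in FIG. 10.
n, widths, tap = design_buffer_chain(c_in=1.0, c_load=100.0)
print(n, tap)  # 5 Q-bar
```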
Some embodiments include an assembly having conductive structures distributed along a level within a memory array region and another region proximate the memory array region. The conductive structures include a first stack over a metal-containing region. A semiconductor material is within the first stack. A second stack is over the conductive structures, and includes alternating conductive tiers and insulative tiers. Cell-material-pillars are within the memory array region. The cell-material-pillars include channel material. The semiconductor material directly contacts the channel material. Conductive post structures are within the other region. Some of the conductive post structures are dummy structures and have bottom surfaces which are entirely along an insulative oxide material. Others of the conductive post structures are live posts electrically coupled with CMOS circuitry. Some embodiments include methods of forming assemblies.
CLAIMS
I/we claim:
1. An integrated assembly, comprising: a memory array region and another region proximate the memory array region; conductive structures distributed along a level within the memory array region and the other region; the conductive structures including a metal-containing region and a first stack over the metal-containing region; the first stack including alternating semiconductor-material-containing regions and intervening regions; one of the semiconductor-material-containing regions being a central semiconductor-material-containing region and being vertically between two others of the semiconductor-material-containing regions; a second stack over the level and over the conductive structures; the second stack comprising alternating first and second tiers; the first tiers comprising conductive material and the second tiers comprising insulative material; cell-material-pillars within the memory array region; the cell-material-pillars extending through the second stack and into the first stack; the cell-material-pillars including channel material and including other materials laterally outward of the channel material; the central semiconductor material penetrating laterally through the other materials and directly contacting the channel material; and conductive post structures within the other region; the conductive post structures extending through the second stack and through the conductive structures; some of the conductive post structures being dummy structures and having bottom surfaces which are entirely along an insulative oxide material, and others of the conductive post structures being live structures and being electrically coupled with CMOS circuitry which is beneath the level.
2. The integrated assembly of claim 1 wherein the metal-containing region includes WSi, where the chemical formula indicates primary constituents rather than a specific stoichiometry.
3. The integrated assembly of claim 2 wherein the metal-containing region includes metal-containing material under the WSi; and wherein said metal-containing material includes one or both of TiN and WN, where the chemical formulas indicate primary constituents rather than specific stoichiometries.
4. The integrated assembly of claim 1 wherein one of the conductive structures is within the memory array region and is configured as a source structure.
5. The integrated assembly of claim 1 wherein the conductive post structures comprise tungsten.
6. The integrated assembly of claim 1 wherein the conductive post structures consist essentially of tungsten.
7. The integrated assembly of claim 1 wherein the central semiconductor-material-containing region comprises doped silicon.
8. The integrated assembly of claim 7 wherein the others of the semiconductor-material-containing regions comprise doped silicon.
9. The integrated assembly of claim 8 wherein the intervening regions comprise conductive material.
10. The integrated assembly of claim 8 wherein the intervening regions comprise insulative material.
11. The integrated assembly of claim 1 wherein the first tiers are NAND wordline tiers.
12. The integrated assembly of claim 11 wherein the NAND wordline tiers include metal, and wherein the insulative material of the second tiers includes silicon dioxide.
13. The integrated assembly of claim 1 wherein the other region includes a staircase region and a crest region adjacent to the staircase region; wherein the dummy structures are within the staircase region; and wherein the live structures are within the crest region.
14. A method of forming an assembly, comprising: forming a construction which comprises structures distributed along a level; a first set of the structures being over metal-containing interconnects and a second set of the structures being only over an insulative oxide; the structures each including a metal-containing region and including a first stack over the metal-containing region; the first stack including a central region between two outer regions, with the central region being spaced from the outer regions by intervening regions; the central region comprising a first sacrificial material; forming a stack of alternating first and second tiers over the structures; the first tiers comprising a second sacrificial material and the second tiers comprising a first insulative material; forming openings to extend through the stack and into the structures; lining the openings with a second insulative material; punching through bottoms of the lined openings with etching conditions that utilize one or more halides; after the punching through the bottoms of the lined openings, forming conductive post material within the lined openings; the conductive post material within the first set of the structures being directly against the metal-containing interconnects and comprising a same composition as the metal-containing interconnects; removing the first sacrificial material to form void regions; forming semiconductor material within the void regions; and replacing at least some of the second sacrificial material with conductive wordline material.
15. The method of claim 14 wherein the etching conditions utilize one or more of chlorine, bromine and fluorine.
16. The method of claim 14 wherein the etching conditions utilize one or more of CF4, CHF3, HBr and SiCl4.
17. The method of claim 14 wherein the outer regions are doped semiconductor-material-containing regions.
18. The method of claim 17 wherein the first sacrificial material is a semiconductor material, and is less doped than the outer regions.
19. The method of claim 14 wherein said same composition comprises tungsten.
20. The method of claim 14 wherein said same composition consists of tungsten.
21. The method of claim 14 wherein the insulative oxide comprises silicon dioxide.
22. A method of forming an assembly, comprising: forming a construction having a memory array region and another region proximate the memory array region; the construction including structures within the memory array region and the other region; the structures including a metal-containing region and a first stack over the metal-containing region; the first stack including a central region between two outer regions, including a first intervening region between the central region and one of the outer regions, and including a second intervening region between the central region and the other of the outer regions; the central region comprising a first sacrificial material; a first set of the structures within the other region being over metal-containing interconnects and a second set of the structures within the other region being only over an insulative oxide; forming a stack of alternating first and second tiers over the structures; the first tiers comprising a second sacrificial material and the second tiers comprising a first insulative material; forming openings to extend through the stack and into the conductive structures within the other region; lining the openings with a second insulative material; punching through bottoms of the lined openings utilizing etching conditions that slow upon reaching the insulative oxide and the metal-containing interconnects; after the punching through the bottoms of the lined openings, forming conductive post material within the lined openings; forming cell-material-pillars within the memory array region; the cell-material-pillars extending through the second stack and into the first stack; the cell-material-pillars including channel material and other materials laterally outward of the channel material; removing the first sacrificial material to form void regions, and extending the void regions laterally through the other materials to expose the channel material; forming doped semiconductor material within the void regions; and replacing at least some of the second sacrificial material with conductive wordline material.
23. The method of claim 22 wherein the conductive interconnects comprise a first lateral width along a cross-section, and wherein the lined openings comprise a second lateral width along the cross-section which is about the same as the first lateral width.
24. The method of claim 22 wherein the conductive interconnects comprise a first lateral width along a cross-section, and wherein the lined openings comprise a second lateral width along the cross-section which is different than the first lateral width.
25. The method of claim 22 wherein the conductive interconnects comprise a first lateral width along a cross-section, and wherein the lined openings comprise a second lateral width along the cross-section which is greater than the first lateral width.
26. The method of claim 22 wherein the etching conditions utilize one or more halides.
27. The method of claim 22 wherein the etching conditions utilize one or more of CF4, CHF3, HBr and SiCl4.
28. The method of claim 22 wherein the outer regions are semiconductor-material-containing regions.
29. The method of claim 28 wherein the first sacrificial material is a semiconductor material, and is less doped than the outer regions.
30. The method of claim 22 wherein the conductive post material and the conductive interconnects consist of tungsten.
31. The method of claim 22 wherein the insulative oxide comprises silicon dioxide.
32. The method of claim 22 wherein the second sacrificial material comprises silicon nitride.
INTEGRATED ASSEMBLIES AND METHODS OF FORMING INTEGRATED ASSEMBLIES

RELATED PATENT DATA
This application claims priority to and the benefit of U.S. Patent Application Serial No. 17/211,580, filed March 24, 2021, the disclosure of which is incorporated herein by reference.

TECHNICAL FIELD
Integrated assemblies (e.g., NAND assemblies) having conductive posts extending through stacks of alternating materials (e.g., alternating levels of wordline material and insulative material). Methods of forming integrated assemblies.

BACKGROUND
Memory provides data storage for electronic systems. Flash memory is one type of memory, and has numerous uses in modern computers and devices. For instance, modern personal computers may have BIOS stored on a flash memory chip. As another example, it is becoming increasingly common for computers and other devices to utilize flash memory in solid state drives to replace conventional hard drives. As yet another example, flash memory is popular in wireless electronic devices because it enables manufacturers to support new communication protocols as they become standardized, and to provide the ability to remotely upgrade the devices for enhanced features.

NAND may be a basic architecture of flash memory, and may be configured to comprise vertically-stacked memory cells. Before describing NAND specifically, it may be helpful to more generally describe the relationship of a memory array within an integrated arrangement. FIG. 1 shows a block diagram of a prior art device 1000 which includes a memory array 1002 having a plurality of memory cells 1003 arranged in rows and columns along with access lines 1004 (e.g., wordlines to conduct signals WL0 through WLm) and first data lines 1006 (e.g., bitlines to conduct signals BL0 through BLn). Access lines 1004 and first data lines 1006 may be used to transfer information to and from the memory cells 1003. A row decoder 1007 and a column decoder 1008 decode address signals A0 through AX on address lines 1009 to determine which ones of the memory cells 1003 are to be accessed. A sense amplifier circuit 1015 operates to determine the values of information read from the memory cells 1003. An I/O circuit 1017 transfers values of information between the memory array 1002 and input/output (I/O) lines 1005. Signals DQ0 through DQN on the I/O lines 1005 can represent values of information read from or to be written into the memory cells 1003. Other devices can communicate with the device 1000 through the I/O lines 1005, the address lines 1009, or the control lines 1020. A memory control unit 1018 is used to control memory operations to be performed on the memory cells 1003, and utilizes signals on the control lines 1020. The device 1000 can receive supply voltage signals Vcc and Vss on a first supply line 1030 and a second supply line 1032, respectively. The device 1000 includes a select circuit 1040 and an input/output (I/O) circuit 1017. The select circuit 1040 can respond, via the I/O circuit 1017, to signals CSEL1 through CSELn to select signals on the first data lines 1006 and the second data lines 1013 that can represent the values of information to be read from or to be programmed into the memory cells 1003. The column decoder 1008 can selectively activate the CSEL1 through CSELn signals based on the A0 through AX address signals on the address lines 1009.
The select circuit 1040 can select the signals on the first data lines 1006 and the second data lines 1013 to provide communication between the memory array 1002 and the I/O circuit 1017 during read and programming operations.

The memory array 1002 of FIG. 1 may be a NAND memory array, and FIG. 2 shows a schematic diagram of a three-dimensional NAND memory device 200 which may be utilized for the memory array 1002 of FIG. 1. The device 200 comprises a plurality of strings of charge-storage devices. In a first direction (Z-Z’), each string of charge-storage devices may comprise, for example, thirty-two charge-storage devices stacked over one another, with each charge-storage device corresponding to one of, for example, thirty-two tiers (e.g., Tier0-Tier31). The charge-storage devices of a respective string may share a common channel region, such as one formed in a respective pillar of semiconductor material (e.g., polysilicon) about which the string of charge-storage devices is formed. In a second direction (X-X’), each first group of, for example, sixteen first groups of the plurality of strings may comprise, for example, eight strings sharing a plurality (e.g., thirty-two) of access lines (i.e., “global control gate (CG) lines”, also known as wordlines, WLs). Each of the access lines may couple the charge-storage devices within a tier. The charge-storage devices coupled by the same access line (and thus corresponding to the same tier) may be logically grouped into, for example, two pages, such as P0/P32, P1/P33, P2/P34 and so on, when each charge-storage device comprises a cell capable of storing two bits of information. In a third direction (Y-Y’), each second group of, for example, eight second groups of the plurality of strings, may comprise sixteen strings coupled by a corresponding one of eight data lines. The size of a memory block may comprise 1,024 pages and total about 16MB (e.g., 16 WLs x 32 tiers x 2 bits = 1,024 pages/block; block size = 1,024 pages x 16KB/page = 16MB). The number of the strings, tiers, access lines, data lines, first groups, second groups and/or pages may be greater or smaller than those shown in FIG. 2.

FIG. 3 shows a cross-sectional view of a memory block 300 of the 3D NAND memory device 200 of FIG. 2 in an X-X’ direction, including fifteen strings of charge-storage devices in one of the sixteen first groups of strings described with respect to FIG. 2. The plurality of strings of the memory block 300 may be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile column I, tile column J and tile column K, with each subset (e.g., tile column) comprising a “partial block” (sub-block) of the memory block 300. A global drain-side select gate (SGD) line 340 may be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 may be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346, with each sub-SGD line corresponding to a respective subset (e.g., tile column), via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336. Each of the sub-SGD drivers 332, 334, 336 may concurrently couple or cut off the SGDs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global source-side select gate (SGS) line 360 may be coupled to the SGSs of the plurality of strings.
FIG. 3 shows a cross-sectional view of a memory block 300 of the 3D NAND memory device 200 of FIG. 2 in an X-X’ direction, including fifteen strings of charge-storage devices in one of the sixteen first groups of strings described with respect to FIG. 2. The plurality of strings of the memory block 300 may be grouped into a plurality of subsets 310, 320, 330 (e.g., tile columns), such as tile columnI, tile columnJ and tile columnK, with each subset (e.g., tile column) comprising a “partial block” (sub-block) of the memory block 300. A global drain-side select gate (SGD) line 340 may be coupled to the SGDs of the plurality of strings. For example, the global SGD line 340 may be coupled to a plurality (e.g., three) of sub-SGD lines 342, 344, 346 with each sub-SGD line corresponding to a respective subset (e.g., tile column), via a corresponding one of a plurality (e.g., three) of sub-SGD drivers 332, 334, 336. Each of the sub-SGD drivers 332, 334, 336 may concurrently couple or cut off the SGDs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global source-side select gate (SGS) line 360 may be coupled to the SGSs of the plurality of strings. For example, the global SGS line 360 may be coupled to a plurality of sub-SGS lines 362, 364, 366 with each sub-SGS line corresponding to the respective subset (e.g., tile column), via a corresponding one of a plurality of sub-SGS drivers 322, 324, 326. Each of the sub-SGS drivers 322, 324, 326 may concurrently couple or cut off the SGSs of the strings of a corresponding partial block (e.g., tile column) independently of those of other partial blocks. A global access line (e.g., a global CG line) 350 may couple the charge-storage devices corresponding to the respective tier of each of the plurality of strings. Each global CG line (e.g., the global CG line 350) may be coupled to a plurality of sub-access lines (e.g., sub-CG lines) 352, 354, 356 via a corresponding one of a plurality of sub-string drivers 312, 314 and 316. Each of the sub-string drivers may concurrently couple or cut off the charge-storage devices corresponding to the respective partial block and/or tier independently of those of other partial blocks and/or other tiers. The charge-storage devices corresponding to the respective subset (e.g., partial block) and the respective tier may comprise a “partial tier” (e.g., a single “tile”) of charge-storage devices. The strings corresponding to the respective subset (e.g., partial block) may be coupled to a corresponding one of sub-sources 372, 374 and 376 (e.g., “tile source”) with each sub-source being coupled to a respective power source.

The NAND memory device 200 is alternatively described with reference to a schematic illustration of FIG. 4. The memory array 200 includes wordlines 202i to 202N, and bitlines 228i to 228M. The memory array 200 also includes NAND strings 206i to 206M. Each NAND string includes charge-storage transistors 208i to 208N. The charge-storage transistors may use floating gate material (e.g., polysilicon) to store charge, or may use charge-trapping material (such as, for example, silicon nitride, metallic nanodots, etc.) to store charge.

The charge-storage transistors 208 are located at intersections of wordlines 202 and strings 206. The charge-storage transistors 208 represent non-volatile memory cells for storage of data. The charge-storage transistors 208 of each NAND string 206 are connected in series source-to-drain between a source-select-device (e.g., source-side select gate, SGS) 210 and a drain-select device (e.g., drain-side select gate, SGD) 212. Each source-select-device 210 is located at an intersection of a string 206 and a source-select line 214, while each drain-select device 212 is located at an intersection of a string 206 and a drain-select line 215. The select devices 210 and 212 may be any suitable access devices, and are generically illustrated with boxes in FIG. 4.

A source of each source-select-device 210 is connected to a common source line 216. The drain of each source-select-device 210 is connected to the source of the first charge-storage transistor 208 of the corresponding NAND string 206. For example, the drain of source-select-device 210i is connected to the source of charge-storage transistor 208i of the corresponding NAND string 206i. The source-select-devices 210 are connected to source-select line 214.

The drain of each drain-select device 212 is connected to a bitline (i.e., digit line) 228 at a drain contact. For example, the drain of drain-select device 212i is connected to the bitline 228i.
The source of each drain-select device 212 is connected to the drain of the last charge-storage transistor 208 of the corresponding NAND string 206. For example, the source of drain-select device 212i is connected to the drain of charge-storage transistor 208N of the corresponding NAND string 206i.

The charge-storage transistors 208 include a source 230, a drain 232, a charge-storage region 234, and a control gate 236. The charge-storage transistors 208 have their control gates 236 coupled to a wordline 202. A column of the charge-storage transistors 208 are those transistors within a NAND string 206 coupled to a given bitline 228. A row of the charge-storage transistors 208 are those transistors commonly coupled to a given wordline 202.

The vertically-stacked memory cells of three-dimensional NAND architecture may be block-erased by generating hole carriers beneath them, and then utilizing an electric field to sweep the hole carriers upwardly along the memory cells.

Gating structures of transistors may be utilized to provide gate-induced drain leakage (GIDL) which generates the holes utilized for block-erase of the memory cells. The transistors may be the source-side select (SGS) devices described above. The channel material associated with a string of memory cells may be configured as a channel material pillar, and a region of such pillar may be gatedly coupled with an SGS device. The gatedly coupled portion of the channel material pillar is a portion that overlaps a gate of the SGS device.

It can be desired that at least some of the gatedly coupled portion of the channel material pillar be heavily doped. In some applications it can be desired that the gatedly coupled portion include both a heavily-doped lower region and a lightly-doped upper region; with both regions overlapping the gate of the SGS device. Specifically, overlap with the lightly-doped region provides a non-leaky “OFF” characteristic for the SGS device, and overlap with the heavily-doped region provides leaky GIDL characteristics for the SGS device. The terms “heavily-doped” and “lightly-doped” are utilized in relation to one another rather than relative to specific conventional meanings. Accordingly, a “heavily-doped” region is more heavily doped than an adjacent “lightly-doped” region, and may or may not comprise heavy doping in a conventional sense. Similarly, the “lightly-doped” region is less heavily doped than the adjacent “heavily-doped” region, and may or may not comprise light doping in a conventional sense. In some applications, the term “lightly-doped” refers to semiconductor material having less than or equal to about 10^18 atoms/cm^3 of dopant, and the term “heavily-doped” refers to semiconductor material having greater than or equal to about 10^22 atoms/cm^3 of dopant.

The channel material may be initially doped to the lightly-doped level, and then the heavily-doped region may be formed by out-diffusion from an underlying doped-semiconductor-material.

It is desired to develop improved methods of forming integrated memory (e.g., NAND memory). It is also desired to develop improved memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a prior art memory device having a memory array with memory cells.

FIG. 2 shows a schematic diagram of the prior art memory array of FIG. 1 in the form of a 3D NAND memory device.

FIG. 3 shows a cross-sectional view of the prior art 3D NAND memory device of FIG. 2 in an X-X’ direction.

FIG. 4 is a schematic diagram of a prior art NAND memory array.

FIG. 5 is a diagrammatic top-down view showing example regions of an example memory device.

FIGS. 6-15 are diagrammatic cross-sectional side views of a region of an integrated assembly at example sequential process stages of an example method for forming an example memory array.

FIG. 16 is a diagrammatic cross-sectional side view of a region of the example integrated assembly of FIG. 15, and shows an additional vertically-extended portion of such assembly.

FIGS. 17-19 are diagrammatic cross-sectional side views of a region of an example integrated assembly at example sequential process stages of an example method. The process stage of FIG. 17 may follow that of FIG. 6.

DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS

Some embodiments include methods of forming conductive posts within integrated assemblies. One or more of the conductive posts may be “live” posts utilized, for example, for coupling circuitry with CMOS under the posts. Alternatively, one or more of the conductive posts may be “dummy” posts utilized solely for structural support. Some embodiments include integrated assemblies (e.g., assemblies comprising memory arrays suitable for utilization in NAND applications). Example embodiments are described with reference to FIGS. 5-19.

FIG. 5 shows a top-down view along several example regions of an example integrated assembly 10. The illustrated regions of the assembly 10 include a pair of memory regions (memory array regions) 12a and 12b (Array-1 and Array-2), and include an intermediate region 14 between the memory regions. In some embodiments, the memory regions 12a and 12b may be referred to as first regions which are laterally displaced relative to one another (laterally offset from one another), and the intermediate region 14 may be referred to as another region (or as a second region) which is between the laterally-displaced (laterally-offset) first regions.

Memory structures (e.g., NAND memory cells) may be formed within the memory array regions 12a and 12b. The memory structures may have associated wordlines, bitlines, SGD devices, SGS devices, etc. The intermediate region 14 may comprise, for example, staircase regions, crest regions, bridging regions, etc. Conductive posts may be formed within the intermediate region, with some of the conductive posts being utilized solely for support (e.g., being “dummy” structures), and with some of the posts being utilized for providing electrical connection to one or more components associated with the memory structures of the memory array regions (e.g., being “live” structures).

FIG. 6 shows a diagrammatic cross-sectional side view through portions of the regions 14 and 12a at an example process stage. The region 14 is illustrated to be a “Staircase/crest” region, and is shown to comprise a staircase region 15 adjacent to a crest region 17. The region 12a is illustrated to be an “Array” region.

Conductive blocks 16 are formed along a level I within the regions 12a and 14. Some of the conductive blocks are electrically coupled with CMOS circuitry 18. The CMOS circuitry may comprise control circuitry, sensing circuitry and/or any other suitable circuitry.
At least some of the CMOS circuitry may be beneath the level I.

The conductive blocks 16 may comprise any suitable electrically conductive composition(s); such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the conductive blocks 16 may comprise, consist essentially of, or consist of tungsten.

The CMOS 18 may be supported by a semiconductor material (not shown). Such semiconductor material may, for example, comprise, consist essentially of, or consist of monocrystalline silicon (Si). The semiconductor material may be referred to as a semiconductor base, or as a semiconductor substrate. The term “semiconductor substrate” means any construction comprising semiconductive material, including, but not limited to, bulk semiconductive materials such as a semiconductive wafer (either alone or in assemblies comprising other materials), and semiconductive material layers (either alone or in assemblies comprising other materials). The term “substrate” refers to any supporting structure, including, but not limited to, the semiconductor substrates described above. The configurations described herein may be referred to as integrated configurations supported by a semiconductor substrate, and accordingly may be considered to be integrated assemblies.

Conductive structures 20 are distributed along a level II within the regions 12a and 14. Some of the conductive structures 20 are coupled with underlying conductive blocks 16 through conductive interconnects 22. The conductive interconnects 22 may comprise any suitable electrically conductive composition(s) 23; such as, for example, one or more of various metals (e.g., titanium, tungsten, cobalt, nickel, platinum, ruthenium, etc.), metal-containing compositions (e.g., metal silicide, metal nitride, metal carbide, etc.), and/or conductively-doped semiconductor materials (e.g., conductively-doped silicon, conductively-doped germanium, etc.). In some embodiments, the composition 23 of the conductive interconnects 22 may comprise, consist essentially of, or consist of tungsten.

The conductive structures 20 include a metal-containing region 24 and a stack 26 over the metal-containing region.

The illustrated metal-containing region 24 includes a pair of metal-containing compositions (i.e., metal-containing materials) 28a and 28b. In some embodiments, the metal-containing composition 28b may comprise tungsten (W) and silicon (Si); and may, for example, comprise, consist essentially of, or consist of WSi, where the chemical formula indicates primary compositions rather than a specific stoichiometry. In some embodiments, the metal-containing composition 28b may be considered to comprise WSix, where x is a number.

In some embodiments, the metal-containing composition 28a may include one or more of titanium (Ti), tungsten (W) and nitrogen (N); and may, for example, include one or both of TiN and WN, where the chemical formulas indicate primary constituents rather than specific stoichiometries.

The stack 26 includes a central region 30a between two outer regions 30b and 30c.
The regions 30b and 30c comprise semiconductor material 34, and the region 30a comprises sacrificial material 36.

The semiconductor material 34 within the regions 30b and 30c may comprise conductively-doped semiconductor material, such as, for example, conductively-doped silicon. In some embodiments, the silicon may be n-type doped, and accordingly may be doped with one or both of phosphorus and arsenic. The conductively-doped silicon of regions 30b and 30c may be doped to a concentration of at least about 10^22 atoms/cm^3 with one or more suitable conductivity-enhancing dopant(s). The semiconductor material within the region 30b may be the same as that within the region 30c, as shown, or may be different than that within the region 30c.

The sacrificial material 36 within the region 30a may comprise any suitable composition(s); and in some embodiments may comprise undoped semiconductor material, such as, for example, undoped silicon. The term “undoped” does not necessarily mean that there is absolutely no dopant present within the semiconductor material, but rather means that any dopant within such semiconductor material is present to an amount generally understood to be insignificant. For instance, undoped silicon may be understood to comprise a dopant concentration of less than about 10^16 atoms/cm^3, less than about 10^15 atoms/cm^3, etc., depending on the context. In some embodiments, the material 36 may comprise, consist essentially of, or consist of silicon.

Intervening regions 32 alternate with the regions 30 within the stack 26. A first of the intervening regions is labeled 32a, and is between the central region 30a and the outer region 30c; and a second of the intervening regions is labeled 32b and is between the central region 30a and the outer region 30b.

The regions 32 comprise material 38. The material 38 may be insulative, conductive, etc. In some embodiments, the material 38 may be insulative, and may comprise, consist essentially of, or consist of one or more of silicon dioxide, aluminum oxide, hafnium oxide, silicon nitride, silicon oxynitride, etc. In some embodiments, the material 38 may be conductive, and may comprise one or more metals, metal-containing compositions, etc.

The regions 32a and 32b may comprise the same composition as one another (as shown), or may comprise different compositions relative to one another. One or both of the regions 32a and 32b may comprise a homogeneous composition (as shown) or may comprise a laminate of two or more different compositions.

Although the stack 26 is shown comprising three of the regions 30 and two of the intervening regions 32, it is to be understood that the stack may comprise any suitable number of the regions 30 and 32. In some embodiments, the stack 26 may comprise at least three of the regions 30 and at least two of the intervening regions 32.

The regions 30 may be formed to any suitable thicknesses, and in some embodiments may be formed to thicknesses within a range of from about 100 nanometers (nm) to about 300 nm. The regions 32 may be formed to any suitable thicknesses, and in some embodiments may be formed to thicknesses within a range of from about 5 nm to about 20 nm.
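Because “heavily-doped” and “lightly-doped” are defined relative to one another (see the Background), with optional absolute cutoffs such as the roughly 10^22 atoms/cm^3 figure used for the regions 30b and 30c above, the terminology can be captured in a small predicate. The Python sketch below is purely illustrative; the function names are hypothetical and the thresholds are the example values quoted in this text.

# Hypothetical helper capturing the relative dopant terminology of this text.
LIGHTLY_DOPED_MAX = 1e18   # atoms/cm^3, example cutoff from the Background
HEAVILY_DOPED_MIN = 1e22   # atoms/cm^3, example cutoff from the Background

def label_adjacent_regions(conc_a, conc_b):
    """Label two adjacent regions relative to one another; the region with
    more dopant is "heavy" regardless of absolute concentration."""
    if conc_a == conc_b:
        raise ValueError("equally doped regions; relative terms do not apply")
    a_label = "heavily-doped" if conc_a > conc_b else "lightly-doped"
    b_label = "lightly-doped" if conc_a > conc_b else "heavily-doped"
    return a_label, b_label

def meets_example_absolute_cutoffs(light_conc, heavy_conc):
    """Check the optional absolute definitions quoted in the text."""
    return light_conc <= LIGHTLY_DOPED_MAX and heavy_conc >= HEAVILY_DOPED_MIN

print(label_adjacent_regions(1e22, 1e17))          # ('heavily-doped', 'lightly-doped')
print(meets_example_absolute_cutoffs(1e17, 1e22))  # True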
An insulative material 40 is shown to extend over and around the blocks 16, and to extend between the structures 20. The insulative material 40 may comprise any suitable composition(s), and in some embodiments may correspond to an insulative oxide (e.g., may comprise, consist essentially of, or consist of one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc.).

Some of the structures 20 within the staircase/crest region 14 are only over the insulative material 40, while others of the structures 20 within the staircase/crest region 14 are over the conductive interconnects 22. The structures 20 which are only over the insulative material 40 may be considered to correspond to structures of one set 42, while the structures 20 which are over the interconnects 22 may be considered to correspond to structures of another set 44. One of the sets 42 and 44 may be referred to as a first set, and the other of such sets may be referred to as a second set.

In some embodiments, the structures 18, 16, 22, 24 and 26, together with the insulative material 40, may be considered to correspond to a construction 46. The stack 26 may be referred to as a first stack, with such first stack being within the construction 46.

A second stack 48 is formed over the first stack 26. The second stack comprises alternating first and second tiers 50 and 52, respectively. The first tiers 50 comprise sacrificial material 54, and the second tiers 52 comprise insulative material 56. The stack 48 may comprise any suitable number of the tiers 50 and 52, and may, for example, comprise at least 20 of such tiers, at least 40 of such tiers, at least 100 of such tiers, at least 200 of such tiers, etc.

The sacrificial material 54 may be referred to as a second sacrificial material to distinguish it from the first sacrificial material 36. The second sacrificial material 54 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon nitride.

The insulative material 56 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

An insulative material 58 is formed over the stack 48. The insulative material 58 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide. Accordingly, the insulative materials 58 and 56 may comprise the same composition as one another in some embodiments.

Referring to FIG. 7, openings 60 are formed to extend through the second stack 48 and into the structures 20. The array region 12a (FIG. 6) is not shown in FIG. 7 and instead only the staircase/crest region 14 is shown in order to simplify the drawing.

The openings 60 are shown to project downwardly into the sacrificial material 36 of the structures 20 (i.e., are shown to extend into the central regions 30a of such structures). In other embodiments, the openings 60 may extend to a different depth within the structures 20. For instance, the openings 60 may only extend into the upper regions 30b of such structures, may extend into the lower regions 30c of such structures, etc.

The openings 60 may be formed with any suitable processing. For instance, a patterned mask (not shown) may be utilized to define locations of the openings, the openings may be extended into such locations with one or more suitable etches, and then the mask may be removed to leave the illustrated configuration of FIG. 7.
The openings 60 may have any suitable closed shapes when viewed from above, and may, for example, have elliptical shapes, square shapes, rectangular shapes, circular shapes, etc.

Although the vertically-extending openings 60 are shown to have straight sidewalls along the materials 54 and 56 of the second stack 48, it is to be understood that in other embodiments the openings may have other configurations. For instance, the sidewalls may be tapered. Additionally, or alternatively, the sidewalls may project into the sacrificial material 54 (e.g., silicon nitride) to form laterally-projecting (horizontally-projecting) cavities along the vertically-extending openings.

Referring to FIG. 8, insulative material 62 is formed within the openings 60 to line the openings. The insulative material 62 may be referred to as a second insulative material to distinguish it from the first insulative material 56 within the stack 48.

The insulative material 62 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc.

Referring to FIG. 9, the insulative material 62 is subjected to anisotropic etching to remove the material from along bottoms of the openings 60, while leaving the material along sidewalls of the openings as liners.

Referring to FIG. 10, the openings 60 are extended through the structures 20 with one or more suitable etches. In some embodiments, the etching of FIG. 10 may be considered to correspond to punching through the bottoms of the lined openings 60. The etching conditions may be chosen to slow, or even stop, upon reaching the insulative material 40 and upon reaching the conductive material of the metal-containing interconnects 22. In the illustrated embodiment, the etching conditions have penetrated to a minor extent into the insulative material 40, and to a minor extent into the conductive material 23 of the interconnects 22. The etching conditions may utilize one or more halides (e.g., may utilize one or more of chlorine, bromine and fluorine), and in some embodiments may utilize one or more of CF4, CHF3, HBr and SiCl4.

In the illustrated embodiment of FIG. 10, the interconnects 22 comprise a first lateral width W1 along the illustrated cross-section of the figure, and the openings 60 comprise a second lateral width W2 along such illustrated cross-section, with the second lateral width being about the same as the first lateral width (e.g., being the same to within reasonable tolerances of fabrication and measurement). In other embodiments, the openings 60 may have a lateral width different than the lateral width of the interconnects 22. For instance, the openings 60 may be wider than the interconnects 22 (which may advantageously provide additional tolerances for potential mask misalignment). An example embodiment in which the openings 60 are wider than the interconnects 22 is described below with reference to FIGS. 17-19, and is illustrated in the geometry sketch that follows.

Referring still to FIG. 10, the openings 60 may be considered to penetrate through the structures 20 of the set 42 to the insulative oxide 40, and to penetrate through the structures 20 of the set 44 to the metal-containing material 23 of the interconnects 22.
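The mask-misalignment tolerance mentioned above can be made concrete with simple one-dimensional geometry: if a lined opening of width W2 is nominally centered over an interconnect of width W1, the interconnect remains entirely within the opening footprint so long as the center offset does not exceed (W2 - W1)/2. The sketch below is a hypothetical illustration of that relationship, not anything prescribed by the disclosure; units are arbitrary.

# Hypothetical 1-D check of opening-to-interconnect overlay margin.
def misalignment_margin(w2_opening, w1_interconnect):
    """Largest center offset at which the interconnect still lies entirely
    within the opening footprint; zero when the widths match (FIG. 10)."""
    return max(0.0, (w2_opening - w1_interconnect) / 2.0)

print(misalignment_margin(100.0, 100.0))  # 0.0  -> no margin (FIG. 10 case)
print(misalignment_margin(140.0, 100.0))  # 20.0 -> wider opening (FIG. 18 case)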
Referring to FIG. 11, conductive post material 64 is formed within the openings 60. The conductive post material is patterned into conductive posts 66. Some of the posts 66 are live posts 66a which are coupled to the CMOS 18 through the interconnects 22, and others are dummy posts 66b. In the illustrated embodiment, the live posts are within the crest region 17, and the dummy posts are within the staircase region 15. In other embodiments, one or more dummy posts may be within the crest region and/or one or more live posts may be within the staircase region.

The conductive post material 64 may comprise a same composition as the metal-containing material 23 of the interconnects 22. For instance, in some embodiments the conductive post material 64 and the metal-containing material 23 may both comprise, consist essentially of, or consist of tungsten. The utilization of the same material for the conductive post material 64 and the metal-containing material 23 may reduce resistance along an interface between the conductive post material and the metal-containing material as compared to structures in which the conductive post material is directly against a material having a different composition than the conductive post material.

A planarized surface 65 is formed to extend across the materials 58, 62 and 64. The planarized surface 65 may be formed with any suitable processing, such as, for example, chemical-mechanical polishing (CMP).

Referring to FIG. 12, a cell-material-pillar 70 is formed within the memory region 12a. The pillar 70 may be representative of a large number of cell-material-pillars which are formed within the memory regions 12a and 12b (with such memory regions being shown in FIG. 5). The cell-material-pillars may be substantially identical to one another, with the term “substantially identical” meaning identical to within reasonable tolerances of fabrication and measurement. The pillars 70 may be configured in a tightly-packed arrangement within each of the memory regions 12a and 12b, such as, for example, a hexagonal close packed (HCP) arrangement. There may be hundreds, thousands, hundreds of thousands, millions, etc., of the pillars 70 arranged within each of the memory regions 12a and 12b.

The illustrated pillar 70 of FIG. 12 comprises an outer region 72 containing memory cell materials, a channel material 74 adjacent the outer region 72, and an insulative material 76 laterally surrounded by the channel material 74.

The cell materials within the region 72 may comprise tunneling material, charge-storage material and charge-blocking material. The tunneling material (also referred to as gate dielectric material) may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc. The charge-storage material may comprise any suitable composition(s); and in some embodiments may comprise floating gate material (e.g., polysilicon) or charge-trapping material (e.g., one or more of silicon nitride, silicon oxynitride, conductive nanodots, etc.). The charge-blocking material may comprise any suitable composition(s); and in some embodiments may comprise one or more of silicon dioxide, aluminum oxide, hafnium oxide, zirconium oxide, etc.

The channel material 74 comprises semiconductor material.
The semiconductor material may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc.; with the term III/V semiconductor material referring to semiconductor materials comprising elements selected from groups III and V of the periodic table (with groups III and V being old nomenclature, and now being referred to as groups 13 and 15). In some embodiments, the semiconductor material may comprise, consist essentially of, or consist of appropriately-doped silicon.

The channel material 74 may be considered to be configured as a channel-material-pillar 78. In the illustrated embodiment, the channel-material-pillar 78 is configured as an annular ring in top-down view (not shown), with such annular ring surrounding the insulative material 76. Such configuration of the channel-material-pillar may be considered to correspond to a “hollow” channel configuration, with the insulative material 76 being provided within the hollow of the channel-material-pillar. In other embodiments, the channel material 74 may be configured as a solid pillar.

The insulative material 76 may comprise any suitable composition(s), and in some embodiments may comprise, consist essentially of, or consist of silicon dioxide.

Referring to FIG. 13, the sacrificial material 36 (FIG. 12) is removed to leave void regions (conduits) 80 within the central region 30a of the stack 26.

The conduits 80 may be formed with any suitable processing, and in some embodiments may be formed utilizing one or more etchants containing hydrofluoric acid. Such etchants may be flowed into one or more slits that are out of the plane of the cross-section of FIG. 13, and that extend through the stack 48 to the sacrificial material 36 (FIG. 12) of the stack 26. In the shown embodiment, the intervening regions 32a and 32b remain after formation of the conduits 80. In other embodiments, such intervening regions may be removed during formation of the conduits, depending on the composition(s) of the intervening regions and of the etchant(s) utilized to remove the sacrificial material 36.

The conduits 80 are also extended through the cell materials of the outer regions 72 to expose sidewall surfaces of the semiconductor material (channel material) 74. Such may or may not be conducted with a different etchant than that utilized to remove the sacrificial material 36.

Referring to FIG. 14, conductively-doped-semiconductor-material 82 is formed within the conduits 80 (FIG. 13). The semiconductor material 82 becomes the central region 30a of the stack 26.

The semiconductor material 82 may comprise any suitable composition(s); and in some embodiments may comprise, consist essentially of, or consist of one or more of silicon, germanium, III/V semiconductor material (e.g., gallium phosphide), semiconductor oxide, etc. In some embodiments, the semiconductor material 82 may comprise silicon which is heavily doped (e.g., doped to a concentration of at least about 10^22 atoms/cm^3) with n-type dopant (e.g., phosphorus, arsenic, etc.). The conductive material 82 may be considered to be configured as a source-structure-component 84 which is coupled with the lower region of the channel-material-pillar 78. In some embodiments, the materials within the first stack 26 of FIG. 14 may all be considered to be part of a conductive source structure 86 provided within the memory array region 12a.
The structures 20 within the staircase/crest region 14 may be considered to be conductive structures at the process stage of FIG. 14. Specifically, the primary materials 34 and 82 within the structures 20 may all correspond to conductively-doped semiconductor material (e.g., conductively-doped silicon).

The regions 32a and 32b may be removed during the formation of the conduits 80 of FIG. 13 (as discussed above with reference to FIG. 13) so that such regions are not part of the source structure 86 and the conductive structures 20. In some embodiments, the regions 32a and 32b may be conductive, or may be insulative and kept very thin so that they do not problematically influence electrical conduction along the source structure 86. The doped-semiconductor-material 82 directly contacts the channel material 74 of the channel-material-pillar 78 in the shown embodiment.

Dopant may be out-diffused from the conductively-doped-semiconductor-material 82 into the semiconductor material (channel material) 74 to form a heavily-doped region within a lower portion of the channel-material-pillar 78. Such heavily-doped region may be advantageously utilized during formation of SGS devices at a subsequent process stage (discussed below with reference to FIG. 15).

The out-diffusion from the doped material 82 into the semiconductor material 74 may be accomplished with any suitable processing, including, for example, suitable thermal processing (e.g., thermal processing at a temperature exceeding about 300°C for a duration of at least about two minutes).

Referring to FIG. 15, the sacrificial material 54 (FIG. 14) of the first tiers 50 is removed and replaced with conductive material 88. Although the conductive material 88 is shown to entirely fill the first tiers 50, in other embodiments at least some of the material provided within the first tiers 50 may be insulative material (e.g., dielectric-barrier material). The dielectric-barrier material may comprise any suitable composition(s); and in some embodiments may comprise one or more of aluminum oxide, hafnium oxide, zirconium oxide, etc.

The conductive material 88 may comprise any suitable composition(s), and in some embodiments may comprise a tungsten core at least partially surrounded by titanium nitride.

The first tiers 50 of FIG. 15 are conductive tiers, and the stack 48 may be considered to comprise alternating insulative tiers 52 and conductive tiers 50.

The assembly 10 of FIG. 15 may be considered to comprise regions of a memory device which includes memory cells 100 and select devices (SGS devices) 102 (with only one of such SGS devices being shown in FIG. 15). A lowermost of the conductive levels 50 is labeled 50a, and the doped region within the channel material 74 (described above with reference to FIG. 14 as being formed by out-diffusion into the channel material 74) may extend to the conductive level 50a. The conductive level 50a comprises the SGS devices 102. The dopant may extend partially across the level 50a to achieve the desired balance between non-leaky OFF characteristics and leaky GIDL characteristics for the SGS devices.

Although only one of the conductive levels is shown incorporated into the SGS devices, in other embodiments multiple conductive levels may be incorporated into the SGS devices.
The conductive levels may be electrically coupled with one another (ganged together) to be incorporated into long-channel SGS devices. If multiple of the conductive levels are incorporated into the SGS devices, the out-diffused dopant may extend upwardly across two or more of the conductive levels 50 which are incorporated into the SGS devices.

The memory cells 100 (e.g., NAND memory cells) are vertically-stacked one atop another. Each of the memory cells comprises a region of the semiconductor material (channel material) 74, and comprises regions (control gate regions) of the conductive levels 50. The regions of the conductive levels 50 which are not comprised by the memory cells 100 may be considered to be wordline regions (routing regions) which couple the control gate regions with driver circuitry and/or with other suitable circuitry. The memory cells 100 comprise the cell materials (e.g., the tunneling material, charge-storage material and charge-blocking material) within the regions 72.

In some embodiments, the conductive levels 50 associated with the memory cells 100 may be referred to as wordline/control gate levels (or memory cell levels), in that they include wordlines and control gates associated with vertically-stacked memory cells of NAND strings. The NAND strings may comprise any suitable number of memory cell levels. For instance, the NAND strings may have 8 memory cell levels, 16 memory cell levels, 32 memory cell levels, 64 memory cell levels, 512 memory cell levels, 1024 memory cell levels, etc.

The wordline levels (NAND wordline levels) 50 may be coupled to control circuitry (e.g., wordline driver circuitry) with interconnects (not shown) formed in the staircase region to couple to individual wordline levels.

The source structure 86 may be analogous to the source structures 216 described in the “Background” section. The source structure is shown to be coupled with control circuitry (e.g., CMOS) 18a, as shown. The control circuitry may be under the source structure (as shown), or may be in any other suitable location. The source structure may be coupled with the control circuitry 18a at any suitable process stage.

FIG. 16 shows the configuration of FIG. 15, and shows the cell-material-pillar 70 vertically-extended and coupled with a bitline 108. An SGD device 110 is diagrammatically illustrated as being adjacent to the upper region of the pillar 70, and to be beneath the bitline 108. The bitline 108 may extend in and out of the page relative to the cross-sectional view of FIG. 16.

The pillar 70, bitline 108, SGD device 110, SGS device 102 and memory cells 100 may be together considered to form a NAND-type configuration analogous to those described above with reference to FIGS. 1-4.

The SGD device 110 is indicated to be coupled to one of the conductive posts 66a in the view of FIG. 16. Accordingly, in some embodiments SGD devices 110 associated with the memory region 12a may be coupled to the CMOS (e.g., logic circuitry) 18 through the conductive posts 66a associated with the intermediate region 14.

The SGD devices 110 are examples of components that may be associated with the cell-material-pillars 70 and coupled with the CMOS 18 through the conductive posts 66. In other embodiments, other components may be coupled to the CMOS through one or more of the conductive posts 66, either in addition to, or alternatively to, the SGD devices 110.
For instance, the bitlines may be coupled to the CMOS through the conductive posts 66, and in such embodiments the CMOS may include sensing circuitry (e.g., sense-amplifier-circuitry) coupled to the bitlines through the conductive posts 66. Generally, one or more components may be operatively proximate to the cell-material-pillars 70 (and/or the channel-material-pillars 78), and may be coupled to the CMOS 18 through the conductive posts 66 (and specifically through the live conductive posts 66a).

As discussed above with reference to FIG. 10, in some embodiments the lined openings 60 may be formed to be larger than the interconnects 22. Such may be advantageous in that it may enable increased tolerance for mask misalignment that may occur during the aligning of the openings 60 to the interconnects 22. An example method of forming and utilizing lined openings which are larger than underlying interconnects is described with reference to FIGS. 17-19.

Referring to FIG. 17, a region of the assembly 10 is shown at a process stage analogous to that described above with reference to FIG. 7. However, the opening 60 has a much larger width along the cross-section of FIG. 17 than does the interconnect 22.

Referring to FIG. 18, the assembly 10 is shown at a process stage analogous to that of FIG. 10. The insulative material 62 is provided within the opening 60 to line the opening, and then the opening is extended through the structure 20 to the interconnect 22. Since the etch utilized to punch through the structure 20 will stop, or at least slow down, upon reaching the metal-containing material 23 and the insulative oxide 40, the etch controllably exposes an upper surface of the metal-containing material 23 of the interconnect 22 without problematically over-etching around such conductive material.

The configuration of FIG. 18 is similar to that of FIG. 10, except that the width W2 of the opening 60 is larger than the width W1 of the interconnect 22.

Referring to FIG. 19, the conductive material 64 is formed within the opening 60 with processing analogous to that described above with reference to FIG. 11. Subsequently, the assembly of FIG. 19 may be subjected to processing analogous to that described above with reference to FIGS. 12-15 to form a configuration similar to that described above with reference to FIG. 15.

The assemblies and structures discussed above may be utilized within integrated circuits (with the term “integrated circuit” meaning an electronic circuit supported by a semiconductor substrate); and may be incorporated into electronic systems. Such electronic systems may be used in, for example, memory modules, device drivers, power modules, communication modems, processor modules, and application-specific modules, and may include multilayer, multichip modules. The electronic systems may be any of a broad range of systems, such as, for example, cameras, wireless devices, displays, chip sets, set top boxes, games, lighting, vehicles, clocks, televisions, cell phones, personal computers, automobiles, industrial control systems, aircraft, etc.

Unless specified otherwise, the various materials, substances, compositions, etc. described herein may be formed with any suitable methodologies, either now known or yet to be developed, including, for example, atomic layer deposition (ALD), chemical vapor deposition (CVD), physical vapor deposition (PVD), etc.

The terms “dielectric” and “insulative” may be utilized to describe materials having insulative electrical properties.
The terms are considered synonymous in this disclosure. The utilization of the term “dielectric” in some instances, and the term “insulative” (or “electrically insulative”) in other instances, may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow, and is not utilized to indicate any significant chemical or electrical differences.

The terms “electrically connected” and “electrically coupled” may both be utilized in this disclosure. The terms are considered synonymous. The utilization of one term in some instances and the other in other instances may be to provide language variation within this disclosure to simplify antecedent basis within the claims that follow.

The particular orientation of the various embodiments in the drawings is for illustrative purposes only, and the embodiments may be rotated relative to the shown orientations in some applications. The descriptions provided herein, and the claims that follow, pertain to any structures that have the described relationships between various features, regardless of whether the structures are in the particular orientation of the drawings, or are rotated relative to such orientation.

The cross-sectional views of the accompanying illustrations only show features within the planes of the cross-sections, and do not show materials behind the planes of the cross-sections, unless indicated otherwise, in order to simplify the drawings.

When a structure is referred to above as being “on”, “adjacent” or “against” another structure, it can be directly on the other structure or intervening structures may also be present. In contrast, when a structure is referred to as being “directly on”, “directly adjacent” or “directly against” another structure, there are no intervening structures present. The terms “directly under”, “directly over”, etc., do not indicate direct physical contact (unless expressly stated otherwise), but instead indicate upright alignment.

Structures (e.g., layers, materials, etc.) may be referred to as “extending vertically” to indicate that the structures generally extend upwardly from an underlying base (e.g., substrate). The vertically-extending structures may extend substantially orthogonally relative to an upper surface of the base, or not.

Some embodiments include an integrated assembly having a memory array region and another region proximate the memory array region. Conductive structures are distributed along a level within the memory array region and the other region. The conductive structures include a metal-containing region and a first stack over the metal-containing region. The first stack includes alternating semiconductor-material-containing regions and intervening regions. One of the semiconductor-material-containing regions is a central semiconductor-material-containing region and is vertically between two others of the semiconductor-material-containing regions. A second stack is over the level and is over the conductive structures. The second stack includes alternating first and second tiers. The first tiers include conductive material, and the second tiers include insulative material. Cell-material-pillars are within the memory array region. The cell-material-pillars extend through the second stack and into the first stack. The cell-material-pillars include channel material and include other materials laterally outward of the channel material.
The central semiconductor material penetrates laterally through the other materials and directly contacts the channel material. Conductive post structures are within the other region. The conductive post structures extend through the second stack and through the conductive structures. Some of the conductive post structures are dummy structures and have bottom surfaces which are entirely along an insulative oxide material, and others of the conductive post structures are live structures and are electrically coupled with CMOS circuitry beneath the level.

Some embodiments include a method of forming an assembly. A construction is formed to comprise structures distributed along a level. A first set of the structures is over metal-containing interconnects, and a second set of the structures is only over an insulative oxide. The structures each include a metal-containing region and a first stack over the metal-containing region. The first stack includes a central region between two outer regions, with the central region being spaced from the outer regions by intervening regions. The central region comprises a first sacrificial material. A stack of alternating first and second tiers is formed over the structures. The first tiers comprise a second sacrificial material and the second tiers comprise a first insulative material. Openings are formed to extend through the stack and into the structures. The openings are lined with a second insulative material. Bottoms of the lined openings are punched through with etching conditions that utilize one or more halides. After punching through the bottoms of the lined openings, conductive post material is formed within the lined openings. The conductive post material within the first set of the structures is directly against the metal-containing interconnects and comprises a same composition as the metal-containing interconnects. The first sacrificial material is removed to form void regions. Semiconductor material is formed within the void regions. At least some of the second sacrificial material is replaced with conductive wordline material.

Some embodiments include a method of forming an assembly. A construction is formed to have a memory array region and another region proximate the memory array region. The construction includes structures within the memory array region and the other region. The structures include a metal-containing region and a first stack over the metal-containing region. The first stack includes a central region between two outer regions, includes a first intervening region between the central region and one of the outer regions, and includes a second intervening region between the central region and the other of the outer regions. The central region comprises a first sacrificial material. A first set of the structures within the other region are over metal-containing interconnects and a second set of the structures within the other region are only over an insulative oxide. A stack of alternating first and second tiers is formed over the structures. The first tiers comprise a second sacrificial material and the second tiers comprise a first insulative material. Openings are formed to extend through the stack and into the conductive structures within the other region. The openings are lined with a second insulative material. Bottoms of the lined openings are punched through utilizing etching conditions that slow upon reaching the insulative oxide and the metal-containing interconnects.
After the punching through the bottoms of the lined openings, conductive post material is formed within the lined openings. Cell-material-pillars are formed within the memory array region. The cell-material-pillars extend through the second stack and into the first stack. The cell-material-pillars include channel material and other materials laterally outward of the channel material. The first sacrificial material is removed to form void regions. The void regions are extended laterally through the other materials to expose the channel material. Doped semiconductor material is formed within the void regions. At least some of the second sacrificial material is replaced with conductive wordline material.

In compliance with the statute, the subject matter disclosed herein has been described in language more or less specific as to structural and methodical features. It is to be understood, however, that the claims are not limited to the specific features shown and described, since the means herein disclosed comprise example embodiments. The claims are thus to be afforded full scope as literally worded, and to be appropriately interpreted in accordance with the doctrine of equivalents.
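As a reading aid only, the example method summarized above can be restated as an ordered sequence of steps. The Python sketch below merely encodes that sequence as data; the step wording paraphrases this disclosure and carries no additional technical content.

# Restatement of the example process flow described above, as an ordered list.
process_flow = (
    "form construction with structures over interconnects / insulative oxide",
    "form stack of alternating sacrificial (nitride) and insulative tiers",
    "etch openings through the stack and into the structures",
    "line the openings with a second insulative material",
    "punch through the bottoms of the lined openings (halide etch)",
    "fill the openings with conductive post material (e.g., tungsten)",
    "form cell-material-pillars within the memory array region",
    "remove the first sacrificial material to form void regions (conduits)",
    "form doped semiconductor material within the void regions",
    "replace the second sacrificial material with conductive wordline material",
)
for number, step in enumerate(process_flow, start=1):
    print(f"{number:2d}. {step}")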
Apparatuses and methods for performing multithread, concurrent access of different partitions of a memory are disclosed herein. An example apparatus may include a non-volatile memory array comprising a plurality of partitions, each of which may include a respective plurality of memory cells. The apparatus may further include a plurality of local controllers that are each configured to independently and concurrently access a respective one of the plurality of partitions to execute a respective memory access command responsive to receiving the respective memory access command. The example apparatus may further include a controller configured to receive the plurality of memory access commands and determine a respective target partition for each of the plurality of memory access commands. The controller may be further configured to provide each of the plurality of memory access commands to a local controller of the plurality of local controllers associated with the respective target partition.
CLAIMS

What is claimed is:

1. An apparatus, comprising:
a non-volatile memory array comprising a plurality of partitions, wherein each of the plurality of partitions comprises a respective plurality of memory cells;
a plurality of local controllers, wherein each of the plurality of local controllers is configured to independently and concurrently access a respective one of the plurality of partitions to execute a respective memory access command of a plurality of memory access commands responsive to receiving the respective memory access command; and
a controller configured to receive the plurality of memory access commands and to determine a respective target partition of the plurality of partitions for each of the plurality of memory access commands, the controller further configured to provide each of the plurality of memory access commands to a local controller of the plurality of local controllers associated with the respective target partition.

2. The apparatus of claim 1, wherein a local controller of the plurality of local controllers comprises:
respective sense amplifiers configured to sense data during execution of the memory access command;
respective drivers configured to drive voltages along access lines; and
respective sequencers configured to execute an algorithm associated with the memory access command.

3. The apparatus of claim 1, wherein the controller comprises a command and address user interface circuit configured to determine a memory access command type and the respective target partition of the plurality of partitions for each of the plurality of memory access commands.

4. The apparatus of claim 3, wherein the controller further comprises a command and address interface circuit configured to receive each of the plurality of memory access commands from a memory controller and to provide each of the plurality of memory access commands to the command and address user interface circuit.

5. The apparatus of claim 1, further comprising a plurality of data buffers, wherein each of the plurality of data buffers is configured to independently and concurrently receive data from or provide data to a respective one of the plurality of partitions.

6. The apparatus of claim 5, wherein the controller further comprises a data block configured to receive read data from or provide write data to each of the plurality of data buffers via a data bus responsive to a memory access command of the plurality of memory access commands.

7. The apparatus of claim 6, wherein the controller further comprises a data input/output interface circuit configured to receive write data from a memory controller and to provide the write data to the data block or to receive read data from the data block and to provide the read data to the memory controller.
8. An apparatus, comprising:
a non-volatile memory comprising a plurality of partitions and a plurality of local controllers, wherein each of the plurality of local controllers is configured to independently access a respective one of the plurality of partitions, wherein each of the plurality of partitions comprises a respective plurality of memory cells;
a memory controller configured to provide memory access commands to the non-volatile memory according to separation timing rules for the memory access commands, wherein the memory controller provides a first memory access command of a first type to a first partition of the plurality of partitions, and, responsive to providing a second memory access command of the first type to the first partition of the plurality of partitions, the memory controller configured to provide the second memory access command a minimum of a first time after the first memory access command, and further, responsive to providing the second memory access command of the first type to a second partition of the plurality of partitions, the memory controller configured to provide the second memory access command a minimum of a second time after the first memory access command.

9. The apparatus of claim 8, wherein, responsive to providing a second memory access command of a second type to the first partition of the plurality of partitions, the memory controller is configured to provide the second memory access command a minimum of a third time after the first memory access command.

10. The apparatus of claim 9, wherein the memory access command of the first type comprises a read memory access command and the memory access command of the second type comprises a write memory access command.

11. The apparatus of claim 8, wherein the non-volatile memory further comprises a plurality of data buffers, wherein a data buffer of the plurality of data buffers is coupled to a respective one of the plurality of partitions, wherein the data buffer is configured to latch data from the respective one of the plurality of partitions responsive to a signal from a local controller of the plurality of local controllers coupled to the respective one of the plurality of partitions.

12. The apparatus of claim 8, wherein the non-volatile memory further comprises a controller configured to receive the memory access commands from the memory controller and determine a respective target partition of the plurality of partitions, the controller further configured to provide the memory access command to a local controller of the plurality of local controllers associated with the target partition.

13. The apparatus of claim 8, wherein the plurality of local controllers of the non-volatile memory are configured to independently access respective ones of the plurality of partitions concurrently.
14. A method, comprising:
receiving a first memory access command and a second memory access command at a controller of a non-volatile memory;
determining a first target partition of the non-volatile memory for the first memory access command and a second target partition of the non-volatile memory for the second memory access command;
providing the first memory access command to a first local controller of the non-volatile memory coupled to the first target partition and the second memory access command to a second local controller of the non-volatile memory coupled to the second target partition;
executing a memory access of the first target partition associated with the first memory access command; and
concurrent with execution of the memory access of the first partition, executing a memory access of the second target partition associated with the second memory access command.

15. The method of claim 14, wherein the first memory access command is a write command, the method further comprising:
receiving write data at the controller; and
providing the write data to a first data buffer of the non-volatile memory via a data bus, wherein the first data buffer is coupled to the first target partition, wherein executing the first memory access command comprises writing the write data to the first target partition.

16. The method of claim 15, wherein the second memory access command is a read command, wherein executing the second memory access command comprises latching read data from the second partition at a second data buffer of the non-volatile memory coupled to the second target partition.

17. The method of claim 16, further comprising providing the read data from the second data buffer to the controller via a data bus.

18. The method of claim 14, further comprising, prior to providing the first memory access command to the first local controller, determining whether the first local controller has finished executing a previous memory access command.

19. The method of claim 14, wherein determining the first target partition of the non-volatile memory for the first memory access command is based on an address of the first access command.

20. A method, comprising:
providing a first memory access command to a non-volatile memory;
determining whether time elapsed since providing the first memory access command satisfies a separation timing rule associated with a second memory access command and the first memory access command, wherein the separation timing rule is based on a first target partition of the non-volatile memory associated with the first memory access command and a second target partition of the non-volatile memory associated with the second memory access command; and
responsive to the separation timing rule being met, providing the second memory access command to the non-volatile memory.

21. The method of claim 20, wherein the first target partition and the second target partition are the same partition.

22. The method of claim 20, further comprising determining a first command type associated with the first memory access command and a second command type associated with the second memory access command, wherein the separation timing rule is further based on the first command type and the second command type.

23. The method of claim 20, further comprising concurrently executing the first memory access command at the first target partition and the second memory access command at the second target partition.

24. The method of claim 20, further comprising looking up the separation timing rule in a table.
APPARATUSES AND METHODS FOR CONCURRENTLY ACCESSING MULTIPLE PARTITIONS OF A NON-VOLATILE MEMORYBACKGROUND[001] Memories may be provided in a variety of apparatuses, such as computers or other devices, including but not limited to portable storage devices, solid state drives, music players, cameras, phones, wireless devices, displays, chip sets, set top boxes, gaming systems, vehicles, and appliances. There are many different types of memory including volatile memory (e.g., dynamic random access memory (DRAM)) and non-volatile memory (e.g., flash memory, phase change memory, etc.).[002] In non-volatile memories, memory arrays may be divided into partitions. Dividing a memory into partitions may break up rows or columns into smaller sections for accessing during memory access operations. However, current memory architectures may allow access to only a single partition of the memory at a time.SUMMARY[003] Apparatuses and methods for performing multithread, concurrent access of different partitions of a memory are disclosed herein. In one aspect of the disclosure, an apparatus may include a non-volatile memory array comprising a plurality of partitions. Each of the plurality of partitions may include a respective plurality of memory cells. The apparatus may further include a plurality of local controllers that are each configured to independently and concurrently access a respective one of the plurality of partitions to execute a respective memory access command of a plurality of memory access commands responsive to receiving the respective memory access command. The example apparatus may further include a controller configured to receive the plurality of memory access commands and to determine a respective target partition of the plurality of partitions for each of the plurality of memory access commands. The controller may be further configured to provide each of the plurality of memory access commands to a local controller of the plurality of local controllers associated with the respective target partition.[004] In another aspect, an apparatus includes a non-volatile memory and a memory controller. The non-volatile memory includes a plurality of partitions and a plurality of local controllers, wherein each of the plurality of local controllers is configured to independently access a respective one of the plurality of partitions, wherein each of the plurality of partitions comprises a respective plurality of memory cells. The memory controller is configured to provide memory access commands to the non-volatile memory according to separation timing rules for the memory access commands, wherein the memory controller provides a first memory access command of a first type to a first partition of the plurality of partitions. Responsive to providing a second memory access command of the first type to the first partition of the plurality of partitions, the memory controller is configured to provide the second memory access command a minimum of a first time after the first memory access command. 
Responsive to providing the second memory access command of the first type to a second partition of the plurality of partitions, the memory controller is configured to provide the second memory access command a minimum of a second time after the first memory access command.[005] In another aspect, a method includes receiving a first memory access command and a second memory access command at a controller of a non-volatile memory and determining a first target partition of the non-volatile memory for the first memory access command and a second target partition of the non-volatile memory for the second memory access command. The method further includes providing the first memory access command to a first local controller of the non-volatile memory coupled to the first target partition and the second memory access command to a second local controller of the non-volatile memory coupled to the second target partition, executing a memory access of the first target partition associated with the first memory access command, and concurrent with execution of the memory access of the first partition, executing a memory access of the second target partition associated with the second memory access command.[006] In another aspect, a method includes providing a first memory access command to a non-volatile memory and determining whether time elapsed since providing the first memory access command satisfies a separation timing rule associated with a second memory access command and the first memory access command, wherein the separation timing rule is based on a first target partition of the non-volatile memory associated with the first memory access command and a second target partition of the non-volatile memory associated with the second memory access command. The method further includes, responsive to the separation timing rule being met, providing the second memory access command to the non-volatile memory.BRIEF DESCRIPTION OF THE DRAWINGS[007] Figure 1 is a block diagram of an apparatus including a memory according to an embodiment of the present disclosure.[008] Figure 2 is a block diagram of a memory according to an embodiment of the present disclosure.[009] Figure 3 is a block diagram of a memory according to an embodiment of the present disclosure.[010] Figure 4 is a separation timing rule lookup table according to an embodiment of the present disclosure.DETAILED DESCRIPTION[011] Apparatuses and methods for multithread, concurrent access of multiple partitions of a memory are disclosed herein. Certain details are set forth below to provide a sufficient understanding of embodiments of the disclosure. However, it will be clear to one having skill in the art that embodiments of the disclosure may be practiced without these particular details. Moreover, the particular embodiments of the present disclosure described herein are provided by way of example and should not be used to limit the scope of the disclosure to these particular embodiments. In other instances, well-known circuits, control signals, timing protocols, and software operations have not been shown in detail in order to avoid unnecessarily obscuring the disclosure.[012] Figure 1 is a block diagram of an apparatus 100 (e.g., an integrated circuit, a memory device, a memory system, an electronic device or system, a smart phone, a tablet, a computer, a server, etc.) including a memory 150 according to an embodiment of the present disclosure. The memory 150 is configured to perform multithread, concurrent access of multiple partitions. 
The apparatus 100 may include a controller 110 coupled to a memory 150 via a command, address, and data (CAD) bus 130. The memory 150 may be configured to receive commands and/or addresses from the controller 110 over the CAD bus 130, and the memory may be configured to receive data and/or provide data over the CAD bus 130.[013] In some examples, the memory 150 may be a non-volatile memory. Examples of non-volatile memory include NAND flash, NOR flash, PCM, PCMS, 3D cross point memory, PRAM, stacked memory, OUM, OUMS, etc. The memory 150 may include an array of cells organized across multiple memory partitions. The memory partitions may be divided into blocks, with each block having multiple memory cell pages. Each page may include memory cells that are coupled to access lines. The memory 150 may be configured to perform multithread, concurrent access of two or more partitions. The memory 150 may include control circuitry (e.g., local controllers and data buffers) that is configured to independently access individual partitions concurrently. For example, the memory 150 may include an internal controller that receives memory access commands (e.g., command, address, and data information) from the CAD bus 130, and provides the command and address information to a local controller associated with a target partition. The local controller may also send the data associated with the memory access command to a data buffer associated with the target partition. The internal controller may be configured to initiate the memory access command while a previously received memory access command continues to be executed. Thus, memory access commands may be executed in two or more different partitions concurrently.[014] Typically, a memory must complete processing of a memory access command prior to processing a subsequent memory access command. As previously discussed, the memory 150 may be divided into multiple partitions with associated control circuitry (e.g., local controllers and data buffers). Thus, during operation, the memory 150 may be configured to receive and concurrently process multiple memory access command threads from the controller 110 by leveraging the multiple partitions and control circuitry. For example, the controller 110 may provide a first memory access command (e.g., first command, first address, and/or first data) directed to a first partition of the memory 150 via the CAD bus 130. The first memory access command may include a read command and address, a write command, address, and write data, or other memory access command, for example. The memory 150 may receive and begin processing the first memory access command. As the first memory command is being processed at the first partition of the memory 150, the controller 110 may issue a second memory access command directed to a second partition of the memory 150 via the CAD bus 130. The memory 150 may begin processing the second memory access command at the second partition concurrently with processing of the first memory access command by the first partition.[015] The internal controller of the memory 150 may determine a target partition of the memory 150 and provide the memory access command information to the control circuitry associated with the target partition. In some embodiments, the internal controller of the memory 150 may use the address associated with the first memory access command to determine the target partition. 
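As a rough illustration of the address-based routing just described, the sketch below shows one way an internal controller might derive a target partition from a command's address. The command layout, partition count, and bit positions are assumptions for illustration only; the disclosure does not specify an encoding.

#include <stdint.h>

/* Hypothetical command layout; the disclosure does not define an encoding. */
typedef enum { CMD_READ, CMD_WRITE } cmd_type_t;

typedef struct {
    cmd_type_t type;
    uint32_t   address;   /* full memory address received from the CAD bus */
    uint8_t    partition; /* target partition, filled in by the decoder below */
} mem_cmd_t;

#define NUM_PARTITIONS  8u   /* assumed partition count */
#define PARTITION_SHIFT 24u  /* assumed: upper address bits select the partition */

/* Decode the target partition from the command address, as the internal
 * controller is described as doing with the received address information. */
static void decode_target_partition(mem_cmd_t *cmd)
{
    cmd->partition = (uint8_t)((cmd->address >> PARTITION_SHIFT) % NUM_PARTITIONS);
}

In such a scheme, two commands whose addresses differ in the partition-selecting bits would be routed to different local controllers and could proceed concurrently.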
Thus, in an example, the internal controller may provide the first memory access command to a first local controller associated with the first partition to execute the first memory access command. Further, the internal controller may provide the second memory access command to a second local controller associated with the second partition to execute the second memory access command. If either or both of the first or second memory access commands are write commands, the internal controller may provide associated data to the first or second data buffer, respectively.[016] To avoid collisions on the respective data/command buses or corrupting data in the respective data buffers or the local controllers, the controller 110 may implement timing rules that govern separation timing between memory access commands. The timing may be based on a type of memory access command (e.g., read vs. write) for a current and a previous command, as well as a target partition for each. For example, a separation timing rule for consecutive read commands directed to different partitions may be different than a separation timing rule for a read command to a second partition that follows a write command to a first partition.[017] By complying with timing separation rules for memory access commands, and including control circuitry in the memory 150 that facilitates multiple concurrent memory access threads, data throughput can be increased as compared with a memory that is only capable of processing a single memory access command at a time.[018] Figure 2 illustrates an apparatus that includes a memory device 200 according to an embodiment of the present disclosure. The memory device 200 includes a memory array 280 with a plurality of memory cells that are configured to store data. The memory cells may be accessed in the array through the use of various signal lines, word lines (WLs) and/or bit lines (BLs). The memory cells may be non-volatile memory cells, such as NAND or NOR flash cells, phase change memory cells, or may generally be any type of memory cells. The memory cells of the memory array 280 can be arranged in a memory array architecture. For example, in one embodiment, the memory cells are arranged in a 3D cross-point architecture. In other embodiments, other memory array architectures may be used, for example, a single-level cross-point architecture, among others. The memory cells may be single level cells configured to store one bit of data. The memory cells may also be multi-level cells configured to store more than one bit of data. The memory 200 may be implemented in the memory 150 of Figure 1. In some examples, the array 280 may be divided into a plurality of partitions.[019] A data strobe signal DQS may be transmitted through a data strobe bus (not shown). The DQS signal may be used to provide timing information for the transfer of data to the memory device 200 or from the memory device 200. The I/O bus 228 is connected to an internal controller 260 that routes data signals, address information signals, and other signals between the I/O bus 228 and an internal data bus 222 and/or an internal address bus 224. The internal address bus 224 may be provided address information by the internal controller 260. The internal address bus 224 may provide block-row address signals to a row decoder 240 and column address signals to a column decoder 250. 
The row decoder 240 and column decoder 250 may be used to select blocks of memory cells for memory operations, for example, read and write operations. The row decoder 240 and/or the column decoder 250 may include one or more signal line drivers configured to provide a biasing signal to one or more of the signal lines in the memory array 280. The I/O control circuit 220 includes a status register that is configured to store status bits responsive to a read status command provided to the memory device 200. The status bits may have respective values to indicate a status condition of various aspects of the memory and its operation. The internal controller 260 may update the status bits as status conditions change.[020] The internal controller 260 may also receive a number of control signals 238, either externally or internally, to control the operation of the memory device 200. The control signals 238 and the I/O bus 228 may be received on a combined command, address, and data bus, such as the CAD bus 130 of Figure 1. The control signals 238 may be implemented with any appropriate interface protocol. For example, the control signals 238 may be pin based, as is common in dynamic random access memory and flash memory (e.g., NAND flash), or op-code based. Example control signals 238 include clock signals, read/write signals, clock enable signals, etc. The internal controller 260 may initiate multiple, concurrent memory access threads to different partitions of the array 280 using the row decoder 240, the column decoder 250, and the data I/O circuit 270, that are capable of independently accessing individual partitions in parallel. For example, the internal controller 260 may sequentially receive memory access commands (e.g., command, address, and/or data information), and may provide (e.g., send) signals to the column decoder 250, the row decoder 240, and the data I/O circuit 270 to initiate execution of the sequentially received memory access commands. In some embodiments, the timing of provision of the signals associated with the memory access commands to the column decoder 250, the row decoder 240, and the data I/O circuit 270 may be based on the type of memory access command and based on whether the target partition is currently executing a memory access command operation.[021] The internal controller 260 may include a command register to store information received by the internal controller 260. The internal controller 260 may be configured to provide internal control signals to various circuits of the memory device 200. For example, responsive to receiving a memory access command (e.g., read, write), the internal controller 260 may provide internal control signals to control various memory access circuits to perform a memory access operation. The various memory access circuits are used during the memory access operation, and may generally include circuits such as row and column decoders, charge pump circuits, signal line drivers, data and cache registers, I/O circuits, as well as others.[022] The data I/O circuit 270 includes one or more circuits configured to facilitate data transfer between the internal controller 260 and the memory array 280 based on signals received from the internal controller 260. In various embodiments, the data I/O circuit 270 may include one or more registers, buffers, and other circuits for managing data transfer between the memory array 280 and the internal controller 260. 
In an embodiment, the data I/O circuit 270 may include separate data buffers for each partition of the memory array 280. In an example write operation, the internal controller 260 receives the data to be written through the I/O bus 228 and provides the data to the data I/O circuit 270 via the internal data bus 222. The data I/O circuit 270 writes the data to the memory array 280 based on control signals provided by the internal controller 260 at a location specified by the row decoder 240 and the column decoder 250. During a read operation, the data I/O circuit 270 reads data from the memory array 280 based on control signals provided by the internal controller 260 at an address specified by the row decoder 240 and the column decoder 250. The data I/O circuit 270 provides the read data to the internal controller 260 via the internal data bus 222. The internal controller 260 then provides the read data on the I/O bus 228. In some examples, for each partition of the array 280, the data I/O circuit 270 may include independently controlled data buffers that may be used to independently receive data from or provide data to a respective partition of the array 280.[023] Figure 3 illustrates a portion of a memory 300 configured to concurrently access multiple memory partitions according to an embodiment of the present disclosure. The memory 300 includes an internal controller 360 to process received memory access commands from an external controller (e.g., the controller 110 of Figure 1) and a memory array including a plurality of partitions 372(0)-372(N). Each of the partitions 372(0)-372(N) may include a respective plurality of memory cells. The partitions 372(0)-372(N) may each be coupled to a respective local controller 374(0)-374(N) and to respective data buffers 376(0)-376(N) to facilitate multithread, concurrent access of different partitions 372(0)-372(N). The value of "N" may be a positive, non-zero number. The memory 300 may be implemented in the memory 150 of Figure 1 and/or the memory 200 of Figure 2. The memory cells may be non-volatile memory cells, or may generally be any type of memory cells.[024] The internal controller 360 may include a data I/O interface 362 coupled to a data block 364 and a command/address interface 366 coupled to a command UI block 368. The data I/O interface 362 may provide data received from the external controller (e.g., responsive to a write access command) to the data block 364, and may provide data received from the data block 364 (e.g., responsive to a read access command) to the external controller. The data block 364 may provide data to (e.g., for a write memory access) and receive data from (e.g., for a read memory access) the data buffers 376(0)-376(N) via a data bus 390 responsive to control signals from the command UI block 368.[025] The command/address interface 366 may provide command and address information received from the external controller to the command UI block 368. The command UI block 368 may determine a target partition of the partitions 372(0)-372(N) and provide the received command and address information to the local controller 374(0)-374(N) associated with the target partition 372(0)-372(N) via a command/address bus 380.[026] The partitions 372(0)-372(N) may each be independently accessible during memory access operations by the local controllers 374(0)-374(N). For example, during memory access operations, partition 372(0) may be accessed independently of partition 372(1). 
Each of the partitions 372(0)-372(N) may be coupled to a respective local controller 374(0)-374(N) that is configured to perform the memory access of the respective partition 372(0)-372(N). Each of the local controllers 374(0)-374(N) may include respective sense amplifiers, sequencers (e.g., that access and execute algorithms based on the type of memory access), and driver circuits (e.g., voltage or current driver circuits) to perform memory access operations, such as read accesses or write accesses. The sense amplifiers may be configured to sense data during execution of the memory access command. The sequencers may be configured to execute the algorithm associated with the memory access command. The driver circuits may be configured to drive voltages along access lines of the partition. Each partition 372(0)-372(N) may also be coupled to a respective data buffer 376(0)-376(N). The data buffers 376(0)-376(N) may be configured to provide data to or receive data from the respective partition 372(0)-372(N). The data buffers 376(0)-376(N) may be controlled by the internal controller 360 or the respective local controllers 374(0)-374(N). Data received from the respective memory partition 372(0)-372(N) may be latched at the data buffers 376(0)-376(N), respectively. The data latched by the respective data buffers 376(0)-376(N) may be provided to the data block 364 via the internal data bus.[027] In operation, the internal controller 360 may receive a memory access command (e.g., command and address information) via a command and address bus (not shown), and may receive data via a data bus (not shown). The internal controller 360 may determine a respective target partition of the partitions 372(0)-372(N) for each memory access command (e.g., based at least in part on the address information associated with each respective memory access command), and may provide each memory access command to a respective local controller 374(0)-374(N) associated with the target partition. The internal controller 360 may also provide the data to the data buffer 376(0)-376(N) associated with the target partition during a write operation, and may receive data from the data buffers 376(0)-376(N) during a read operation. [028] More specifically, the command/address interface 366 may receive the command and address information from an external command and address bus, and may provide the received command and address information to the command UI block 368. The command UI block 368 may determine a target partition 372(0)-372(N) and a command type. The command UI block 368 may provide the command and address information to the local controller 374(0)-374(N) via the command and address bus 380 based on the target partition 372(0)-372(N). In some embodiments, the timing of provision of the command and address information to the local controller 374(0)-374(N) may be based on the command type and/or whether the local controller 374(0)-374(N) is currently executing a memory access command. 
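The following is a minimal sketch of the dispatch behavior just described, assuming a simple per-partition busy flag. The flag array and the commented-out local_controller_start() helper are hypothetical stand-ins for the hardware control signaling; they are not named in the disclosure.

#include <stdbool.h>
#include <stdint.h>

#define NUM_PARTITIONS 8u

typedef enum { CMD_READ, CMD_WRITE } cmd_type_t;

typedef struct {
    cmd_type_t type;
    uint32_t   address;
    uint8_t    partition; /* target partition, already decoded from the address */
} mem_cmd_t;

/* Hypothetical local-controller state: one busy flag per partition. */
static bool partition_busy[NUM_PARTITIONS];

/* Forward a decoded command to the local controller for its target
 * partition. Returns false if that partition is still executing a
 * previous command so the caller can retry later; commands aimed at
 * other, idle partitions can still be dispatched in the meantime,
 * which is what permits concurrent multi-partition access. */
static bool dispatch_to_local_controller(const mem_cmd_t *cmd)
{
    if (partition_busy[cmd->partition])
        return false;               /* defer: target partition occupied */
    partition_busy[cmd->partition] = true;
    /* local_controller_start(cmd); -- hypothetical helper; in hardware
     * this would assert control signals on the command/address bus. */
    return true;
}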
The command UI block 368 may also provide a control signal to the data block 364 based on the command type to instruct the data block 364 to retrieve data from the data I/O interface 362 and provide the data to one of the data buffers 376(0)-376(N) via the data bus (e.g., write access) or to retrieve data from one of the data buffers 376(0)-376(N) via the data bus and provide the retrieved data to the data I/O interface 362 (e.g., read access).[029] During a write operation, the local controllers 374(0)-374(N) may employ drivers and sequencers to write data from the associated data buffer 376(0)-376(N) to the associated partition 372(0)-372(N).[030] During a read operation, the local controllers 374(0)-374(N) may employ sense amplifiers, drivers, and sequencers to read data from the associated partition 372(0)-372(N) and latch the data read at the associated data buffer 376(0)-376(N). Each of the local controllers 374(0)-374(N) may be configured to operate independently of each other to access the associated partition 372(0)-372(N). Thus, the individual partitions 372(0)-372(N) may be concurrently accessed without interfering with access of another partition 372(0)-372(N), which may improve throughput and efficiency as compared with a memory that is limited to accessing a single partition at a given time.[031] As previously discussed, separation timing rules may be used to avoid collisions on the respective data command buses or corrupting data in the respective data buffers or the local controllers. Correct operation and execution of the memory access commands are managed by complying with the separation timing rules. As further previously discussed, the timing of the separation timing rules may be based on a type of memory access command (e.g., read vs. write) for a current and a previous command, as well as a target partition for each. [032] Figure 4 provides a table depicting exemplary timing rules. For example, a Read to Read command to the same partition may have an X1 ns separation rule, and a Read to Read command to different partitions may have an X2 ns separation rule. In a specific example, a first Read command to a first partition is received by the memory and handled accordingly by the local controller associated with the first partition. The soonest a second Read command to the first partition may be provided to the memory is X1 ns. Providing a second Read command to the first partition before X1 ns relative to the first Read command will cause an error in the data read during the operation for the first Read command. If, however, the second Read command is to a different partition, the soonest the second Read command may be provided to the memory is X2 ns. In contrast, if a first Write command to the first partition is to be provided following the first Read command to the first partition, the soonest the first Write command to the first partition may be provided following the first Read command to the first partition is X5 ns. The time X5 may be different from times X2 and X1. In some embodiments, the time X5 may be equal to X2 and/or X1. The timing variables X1-X8 are exemplary, and are not intended to have a multiple relationship, such as time X2 being twice as long as time X1 or time X8 being eight times as long as time X1. Generally, multiple operations directed to the same partition have longer separation timing than multiple operations directed to different partitions. 
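A minimal sketch of how a host-side controller might encode and check these separation rules is shown below, keyed on the previous and next command types and on whether the two commands target the same partition, mirroring the structure of the Figure 4 table. The nanosecond values are placeholders standing in for X1-X8; they are not values from the disclosure.

#include <stdbool.h>
#include <stdint.h>

typedef enum { CMD_READ = 0, CMD_WRITE = 1 } cmd_type_t;

/* Minimum separation in nanoseconds, indexed by
 * [previous type][next type][same partition? 1 : 0].
 * Placeholder values; same-partition entries are longer, consistent with
 * the general rule stated above. */
static const uint32_t min_separation_ns[2][2][2] = {
    /* prev READ  */ { /* next READ  */ { 50, 100 },
                       /* next WRITE */ { 60, 120 } },
    /* prev WRITE */ { /* next READ  */ { 80, 160 },
                       /* next WRITE */ { 90, 180 } },
};

/* Returns true when enough time has elapsed since the previous command
 * for the next command to be issued without violating the rule. */
static bool separation_rule_met(cmd_type_t prev, cmd_type_t next,
                                bool same_partition, uint32_t elapsed_ns)
{
    return elapsed_ns >= min_separation_ns[prev][next][same_partition ? 1 : 0];
}

A controller would consult such a table before issuing each command and defer issue until the elapsed time satisfies the looked-up minimum, as the method of claim 20 describes.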
In some examples, some of the times X1-X8 have the same value and in other embodiments, the times X1-X8 may all be different.[033] Each separation rule must be met by the controller 110 in order for a memory access command to be received and properly performed by the memory. For example, the controller 110 may send a first read command to a first partition and a second read command to a second partition. Before the controller 110 can send a first write command to the first partition, the timing separation rule for the first read command to the first partition should be met and the timing separation rule for the second read command to the second partition should be met as well. If both timing separation rules are met, the controller may send the first write command to the memory 150. The timing separation rules may be based, for example, on architecture and latency characteristics of the memory 150 for each memory access command type.[034] From the foregoing it will be appreciated that, although specific embodiments of the disclosure have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the disclosure. Accordingly, the disclosure is not limited except as by the appended claims.
Embodiments of apparatus and methods for forming dual metal interconnects are described herein. Other embodiments may be described and claimed.
CLAIMS What is claimed is: 1. A method of forming an interconnect comprising: depositing a dielectric layer over a patterned metal layer, etching the dielectric layer to form a trench and an opening to expose an underlying metal surface, forming a refractory interconnect in the opening and directly adjacent to the underlying metal surface, depositing a barrier layer and a seed layer on the trench and the refractory interconnect, and forming a low resistivity metal on the seed layer. 2. The method of claim 1, further comprising forming the refractory interconnect by electroless deposition. 3. The method of claim 2, wherein electrolessly depositing the refractory interconnect comprises electrolessly depositing refractory material selected from the group comprising cobalt (Co), nickel (Ni), palladium (Pd), platinum (Pt), tungsten (W), ruthenium (Ru), and alloys thereof. 4. The method of claim 1, further including pretreating the opening and the underlying metal surface. 5. The method of claim 4, wherein pretreating the opening comprises using argon (Ar) ion bombardment or a plasma process formed using a mixture of hydrogen (H2) and helium (He) or a mixture of H2 and Ar. 6. The method of claim 1, wherein the opening is a high aspect ratio feature, with an aspect ratio at or above 3:1. 7. The method of claim 6, wherein an opening width of the opening is substantially equal to or larger than 50 nanometers (nm). 8. A method comprising: providing a substrate having formed thereon a dielectric layer, a trench and an opening to expose an underlying metal surface; depositing a refractory interconnect in the opening directly adjacent to the underlying metal surface, wherein the refractory interconnect is deposited by bottom-up electroless plating on the underlying metal surface; depositing a trench interconnect comprising a seed layer and a conductor in the trench, wherein the seed layer acts as a nucleation surface for the conductor; and planarizing the trench interconnect and the dielectric layer. 9. The method of claim 8, wherein electrolessly depositing the refractory interconnect comprises electrolessly depositing refractory material selected from the group comprising cobalt (Co), nickel (Ni), palladium (Pd), platinum (Pt), tungsten (W), ruthenium (Ru), and alloys thereof. 10. The method of claim 8, further including pretreating the opening and the underlying metal surface. 11. The method of claim 10, wherein pretreating the opening comprises using argon (Ar) ion bombardment or a plasma process formed using a mixture of hydrogen (H2) and helium (He) or a mixture of H2 and Ar. 12. The method of claim 8, wherein the opening is a high aspect ratio feature, with an aspect ratio at or above 3:1. 13. The method of claim 12, wherein an opening width of the opening is substantially equal to or larger than 50 nanometers (nm). 14. The method of claim 8 wherein the dielectric layer is formed from a low-k dielectric material. 15. 
A microelectronic device comprising: a substrate having formed thereon a low-k dielectric and an opening having an opening width substantially equal to or larger than 50 nanometers (nm) formed in the low-k dielectric; an underlying metal directly adjacent to the opening; a refractory interconnect in the opening, wherein the refractory interconnect is directly adjacent to the underlying metal and a wall of the opening and the refractory interconnect substantially fills the opening; a barrier layer on the refractory interconnect; and a trench interconnect on the barrier layer. 16. The microelectronic device of claim 15, wherein the refractory interconnect is formed by electrolessly depositing from the bottom up. 17. The microelectronic device of claim 16, wherein electrolessly depositing the refractory interconnect comprises electrolessly depositing refractory material selected from the group comprising cobalt (Co), nickel (Ni), palladium (Pd), platinum (Pt), tungsten (W), ruthenium (Ru), and alloys thereof. 18. The microelectronic device of claim 17, further including pretreating the opening and the underlying metal. 19. The microelectronic device of claim 18, wherein pretreating the opening comprises using argon (Ar) ion bombardment or a plasma process formed using a mixture of hydrogen (H2) and helium (He) or a mixture of H2 and Ar. 20. The microelectronic device of claim 15, wherein the opening is a high aspect ratio feature with an aspect ratio at or above 3:1.
DUAL METAL INTERCONNECTS FOR IMPROVED GAP-FILL, RELIABILITY, AND REDUCED CAPACITANCE FIELD OF THE INVENTION The field of invention relates generally to the field of semiconductor integrated circuit manufacturing and, more specifically but not exclusively, relates to forming dual metal interconnect structures for increased reliability and reduced capacitance. BACKGROUND INFORMATION The fabrication of microelectronic devices involves forming electronic components on microelectronic substrates, such as silicon wafers. These electronic components may include transistors, resistors, capacitors, and the like, with intermediate and overlying metallization patterns at varying levels, separated by dielectric materials. The metallization patterns interconnect, hence the term "interconnects", the electrical components to form integrated circuits. The term interconnect is defined herein to include all interconnection components including trenches and openings or vias filled with conductive material. One process used to form interconnects is known as a "damascene process". In a typical damascene process, a photoresist material is patterned on a dielectric layer and the dielectric material is etched through the photoresist material patterning to form a hole or a via (hereinafter collectively referred to as "an opening" or "openings") that provides a pathway between an underlying metal and an adjacent trench or other interconnect structure. The photoresist material is removed and the opening and trench are commonly coated with a barrier and a seed layer then filled with a low resistivity metal to form a conductive pathway through the opening and trench. Formation of the conductive pathway through high aspect ratio openings using common barrier, seed, and trench materials can compromise continuity of the seed layer on high aspect ratio opening surfaces leading to incomplete film coverage, can increase electromigration in the openings leading to reliability failures, and can limit thickness of the dielectric layer as a result of gap-fill constraints. Turning now to the figures, the illustration in FIG. 1 (Prior Art) is a cross-sectional view of an opening 110 formed adjacent to a trench 120 formed over and directly adjacent to the opening 110, the opening 110 having an opening width 112 and an opening height 114. A barrier 130 is formed using a physical vapor deposition (PVD) process on a trench surface 140, an opening sidewall 150, and an underlying metal surface 160. Deposition of the barrier 130 using the PVD process results in a non-conformal barrier 130 thickness along the opening sidewall 150 due to the anisotropic nature of the deposition process. The non-conformal barrier 130 in the opening 110 can result in areas with thin or missing portions of a barrier 130 along a portion of the opening sidewall 150, leaving at least a portion of the opening sidewall 150 exposed. The barrier 130 is a multi-layer film that typically consists of a tantalum nitride (TaN) film and a tantalum (Ta) film stack that is used to minimize or substantially prevent diffusion of contaminants across the barrier 130. An underlying metal 170 of copper (Cu) is formed in the dielectric region 180 using methods known to one skilled in the art. The dielectric region 180 is selectively formed of a dielectric material, such as silicon dioxide (SiO2), to electrically isolate conductors, reduce resistance-capacitance ("RC") delay, and improve device performance. FIG. 2 (Prior Art) illustrates the device in FIG. 
1 after forming a conductive layer 210 on the barrier 130. The conductive layer 210 is a multi-layer film of Cu that typically consists of a seed layer comprising Cu deposited using a PVD process followed by a thicker Cu layer deposited using an electroplating process to form the conductive layer in the opening 110 and the trench 120. Deposition of the PVD seed layer can exacerbate nonconformity exhibited by the barrier 130 when forming the conductive layer 210, leading to one or more voids 220 in the opening 110. Formation of the conductive layer 210 is challenging since the seed layer must be continuously formed along the opening sidewalls 150 using a largely anisotropic process to deposit the layer along vertical or nearly vertical opening sidewalls 150, meaning that a deposition rate in the direction normal to a surface is much higher than in a direction parallel to the surface. Formation of the conductive layer 210, when formed with minimal voids (not shown), creates a seam near the center of the opening 110 when the conductive layer 210 fills the opening 110 from substantially laterally opposite sidewalls. The opening sidewalls 150 may be tapered (not shown) to provide a more robust seed layer deposition process; however, via resistance and reliability are compromised since the tapered profile increases current density near the bottom of the opening 110 as the opening width 112 shrinks. As a result, an aspect ratio of the opening 110, or the ratio of the opening height 114 to the opening width 112, is limited to allow filling of the opening 110 using traditional methods. Limiting the aspect ratio forces a reduction in the opening height 114 as the opening width 112 continues to shrink, while increasing capacitance. Further, deposition of the barrier 130 on the underlying metal surface 160 creates an electrical barrier that also increases resistance to electrical flow between the conductive layer 210 and the underlying metal 170. BRIEF DESCRIPTION OF THE DRAWINGS The present invention is illustrated by way of example and not as a limitation in the figures of the accompanying drawings, in which FIG. 1 (Prior Art) is an illustration of a cross-sectional view of an opening formed adjacent to an overlying trench with a barrier formed on the trench and the opening. FIG. 2 (Prior Art) illustrates the device in FIG. 1 after forming a seed layer and conductive layer on the barrier. FIG. 3 is an illustration of a top view of a trench and an opening filled with refractory interconnect over an underlying metal layer. FIG. 4 is a cross-sectional view of FIG. 3 through line A-A illustrating the opening filled with refractory interconnect. FIG. 5 illustrates the device in FIG. 4 after depositing a barrier layer, a seed layer and a conductive layer in a trench adjacent to and on the refractory interconnect. FIG. 6 illustrates a cross-sectional view of a dual metal interconnect in a device. FIG. 7 illustrates a system with a central processing unit comprising dual metal interconnects. FIG. 8 is a flowchart describing one embodiment of a fabrication process used to form dual metal interconnect structures. DETAILED DESCRIPTION An apparatus and methods for forming dual metal interconnect structures are described in various embodiments. In the following description, numerous specific details are set forth such as a description of a method to fabricate dual metal interconnect structures while allowing for continued miniaturization of interconnect openings and increased interconnect layer thickness. 
One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention. It would be an advance in the art of microelectronic device fabrication to form an interconnect using conventional dual damascene patterning techniques while providing a robust gap fill process for increased reliability and reduced capacitance. Fabrication of reliable vias, contacts, and other features with high aspect ratios, i.e., height divided by width, is necessary to support circuit density increases over a unit area of a substrate. One such method may comprise depositing a dielectric layer over a patterned metal layer and etching the dielectric layer to form a trench and an opening to expose the patterned metal layer. The opening and the exposed patterned metal layer are pretreated and a refractory interconnect is formed in the pretreated opening. A barrier layer and a seed layer are formed on the trench and the refractory interconnect. A low resistivity metal is formed on the seed layer to provide an interconnect through the dielectric layer from the patterned metal layer to the low resistivity metal. As device density continues to increase, it is imperative to reduce capacitance, power consumption and related heat generation in microelectronic devices. Formation of interconnects with increasing aspect ratios and relatively thick interlayer dielectric layers requires that openings used to form interconnects between the metal layers continue to shrink. Elimination of barrier layers and a reliable method for forming a continuous conducting plug in the opening 110 are of increasing importance. FIG. 3 is an illustration of a top view of the trench 120 and the opening 110 in the dielectric region 180 filled with a refractory interconnect 310 over the underlying metal 170, which is part of a patterned metal layer, in accordance with one embodiment of the invention. The refractory interconnect 310 may be a contact, a line, a via, or another conducting element with an opening width 112 substantially equal to or larger than 50 nm, where the opening width 112 is a nominal width of the opening 110. The trench 120 may be shaped in a block pattern, a v-shaped pattern, a semi-circular pattern, and/or an irregular pattern etched or otherwise formed in the dielectric region 180. The dielectric region 180 may be formed using silicon oxide, lightly doped silicon oxide, a fluoropolymer, a porous silicon oxide, silicon oxynitride, and/or silicon nitride. In this embodiment, the trench 120 is positioned directly above the refractory interconnect 310. Alternately, the trench 120 may be positioned on a side of the refractory interconnect 310 (not shown) and directly adjacent to the refractory interconnect 310 to provide an exposed surface of the refractory interconnect 310. The trench 120 may be pretreated using argon (Ar) ion bombardment or a plasma process formed using a mixture of hydrogen (H2) and helium (He), and/or a mixture of H2 and Ar. Pretreatment of the trench 120 is performed, in one example, to reduce an oxide layer on exposed metal surfaces to promote catalytic behavior. The pretreatment process may be performed in a plasma chamber at a temperature ranging substantially between 100 and 200 degrees Celsius (°C) and preferably about 150 °C. 
The plasma process may be applied for substantially between 20 and 60 seconds using an applied power substantially between 200-1000 Watts. The refractory interconnect 310 may be formed in the opening 110 using a selective deposition process that substantially fills high aspect ratio features, particularly at or above 3:1, originating from the bottom of the feature to avoid creation of voids, seams, and/or other defects in the opening 110. For example, the refractory interconnect 310 may be deposited in whole or at least in part by using an electroless deposition process that operates, for example, from a spontaneous reduction of a metal from a solution of its salt with a reducing agent or similar source of electrons in the presence of a catalyst or catalyst surface such as the underlying metal surface 160. In one embodiment, the refractory interconnect 310 is a metal that is selectively designed to diffuse slowly through the dielectric region 180 while providing electromigration resistance. Formation of the refractory interconnect 310 without deposition of an intervening barrier 130 between the dielectric region 180 and the refractory interconnect 310, which would otherwise consume a portion of the opening 110 while increasing process complexity and manufacturing cost, reduces resistance to electrical flow between the refractory interconnect 310 and an underlying metal 170. As a result, the refractory interconnect 310 may be formed directly on or adjacent to the underlying metal 170 and one or more walls or sides of the opening 110 without first forming a barrier 130. The trench 120 is formed using an etch process or another erosion process used to remove a portion of the dielectric region 180. FIG. 4 is a cross-sectional view of FIG. 3 through line A-A illustrating the opening 110 filled with the refractory interconnect 310. The refractory interconnect 310 is selectively formed using a bottom-up formation process to prevent voids that would increase current density through the refractory interconnect 310. A process used to form the refractory interconnect 310 fills the opening 110 from the underlying metal surface 160 until the entire opening 110 is substantially filled, as shown in FIG. 4. The refractory interconnect 310 may be formed using electroless plating of cobalt (Co), nickel (Ni), palladium (Pd), platinum (Pt), tungsten (W), ruthenium (Ru), and their alloys. In one embodiment, the refractory interconnect 310 is formed from the bottom up. In another embodiment, the refractory interconnect 310, particularly in alloy form, may be doped with, or contain small amounts of, boron and/or phosphorus to impart amorphous properties. FIG. 5 illustrates the structure of FIG. 4 after depositing a barrier layer 410 and a trench interconnect 420 in the trench 120 adjacent to and on the refractory interconnect 310. In one embodiment, the barrier layer 410 has a thickness generally in a range between 50 and 200 Angstroms. Also in this example, the trench interconnect 420, comprising a seed layer and an interconnect layer, may range substantially between 450 and 1800 Angstroms, resulting in a multi-layer stack with a total film thickness approximately between 500 and 2000 Angstroms. The seed layer, formed using a process such as physical vapor deposition (PVD), acts as a nucleation surface for the interconnect layer. The trench interconnect 420, comprising a seed layer and an interconnect layer, or conductor, may be formed of the same material or from different materials. 
The trench interconnect 420 may be formed using one or more low resistivity metals such as silver (Ag), copper (Cu), aluminum (Al), and their alloys. The refractory interconnect 310 and the trench interconnect 420 are formed of two different materials, referred to here as a dual metal interconnect. In this embodiment, the trench interconnect 420 is separated from the underlying metal 170 by the dielectric region 180 with a thickness roughly equivalent to the opening height 114. While the opening width 112 continues to shrink to allow greater device density, the opening height 114 remains relatively thick in comparison. Embodiments of the invention allow for progressively higher aspect ratio openings, or the ratio of the opening height 114 to the opening width 112, that would have otherwise been prohibited due to gap-fill constraints, thereby reducing capacitance and making the microelectronic device more power efficient. FIG. 6 illustrates a cross-sectional view of a dual metal interconnect in a microelectronic device 600, such as a central processing unit or a memory unit, in accordance with one embodiment. The microelectronic device 600 contains a substrate 605 that may comprise silicon, gallium arsenide (GaAs), or indium antimonide (InSb) in monocrystalline form. The substrate 605 may further comprise buried layers such as one or more silicon-on-insulator layers. One or more front end films are formed on the substrate 605 to form a pre-metal dielectric 610. The pre-metal dielectric 610 may comprise one or more films typically used in contemporary device fabrication known to one skilled in the art, such as silicon oxide, silicon nitride, doped or un-doped polysilicon, lanthanum oxide, tantalum oxide, titanium oxide, hafnium oxide, zirconium oxide, lead-zirconate-titanate (PZT), barium-strontium-titanate (BST), or aluminum oxide. The pre-metal dielectric layer 610 may be deposited using methods such as thermal deposition, plasma enhanced chemical vapor deposition (PECVD), high density chemical vapor deposition (HDCVD), and/or sputtering. A series of interlayer dielectric layers 620 comprising refractory interconnects 310, barrier layers 410, and trench interconnects 420 are formed over the pre-metal dielectric layer 610. The interlayer dielectric layers 620 may comprise a silicon oxide, silicon nitride, or a low-k dielectric (e.g., k<3) such as carbon-doped oxide (CDO). The interlayer dielectric layers 620 may be planarized, or polished using a process such as chemical mechanical planarization (CMP). The planarization process erodes a top portion of the dielectric material to create a uniform surface while improving the optical resolution of subsequent lithography steps. In one embodiment, the refractory interconnects 310 are filled with one or more refractory metals such as cobalt (Co), nickel (Ni), palladium (Pd), platinum (Pt), tungsten (W), ruthenium (Ru), and their alloys while the trench interconnects 420 and underlying metals 170 are formed by a damascene or dual-damascene process with copper or a copper alloy using an electroplating process to fill recesses such as trenches 120 in the interlayer dielectric layers 620. The trench interconnects 420 and the interlayer dielectric layers 620 may be planarized using a CMP process or another planarizing process known to one skilled in the art. An interface dielectric 630 is formed over the interlayer dielectric layers 620, refractory interconnects 310, and the trench interconnects 420. 
The interface dielectric 630 is formed from a dielectric film with barrier properties, such as a silicon nitride or silicon oxynitride film. In another embodiment, a spin-on polymer "buffer coat" is applied on top of the silicon nitride or silicon oxynitride film. The interface dielectric 630 is patterned and etched using methods known to one skilled in the art to form a pathway to the underlying trench interconnects 420 and refractory interconnects 310. FIG. 7 illustrates a communications system 700 with a central processing unit (CPU) 710 comprising dual metal interconnects in accordance with one embodiment. The communications system 700 may include a motherboard 720 with the CPU 710, and a networking interface 730 coupled to a bus 740. More specifically, the CPU 710 may comprise the earlier described dual metal interconnect structure and/or its method of fabrication. Depending on the applications, the communications system 700 may additionally include other components described herein, including but not limited to volatile and non-volatile memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, mass storage (such as hard disk, compact disk (CD), digital versatile disk (DVD)), and so forth. One or more of these components may also include the earlier described dual metal interconnect structure and/or its method of fabrication. In various embodiments, communications system 700 may be a personal digital assistant (PDA), a mobile device, a tablet computing device, a laptop computing device, a desktop computing device, a set-top box, an entertainment control unit, a digital camera, a digital video recorder, a CD player, a DVD player, or other similar digital device. FIG. 8 is a flowchart describing one embodiment of a fabrication process used to form dual metal interconnect structures. In element 800, a dielectric layer is deposited over a patterned metal layer. In element 810, the dielectric layer is etched to form a damascene pattern with a trench and an opening to expose the patterned metal layer. The opening and the exposed patterned metal layer are pretreated in element 820. In element 830, a refractory interconnect 310 is formed in the opening to substantially fill the opening. A barrier layer 410 is deposited and a seed layer is formed on the trench and the refractory interconnect 310 in element 840. A low resistivity metal is formed on the seed layer in element 850 to form a trench interconnect 420. The process described in FIG. 8 may be repeated one or more times to provide a plurality of additional conductors. A plurality of embodiments of an apparatus and methods for forming dual metal interconnect structures have been described. The foregoing description of the embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. This description and the claims following include terms, such as left, right, top, bottom, over, under, upper, lower, first, second, etc. that are used for descriptive purposes only and are not to be construed as limiting. 
For example, terms designating relative vertical position refer to a situation where a device side (or active surface) of a substrate or integrated circuit is the "top" surface of that substrate; the substrate may actually be in any orientation so that a "top" side of a substrate may be lower than the "bottom" side in a standard terrestrial frame of reference and still fall within the meaning of the term "top." The term "on" as used herein (including in the claims) does not indicate that a first layer "on" a second layer is directly on and in immediate contact with the second layer unless such is specifically stated; there may be a third layer or other structure between the first layer and the second layer. The embodiments of a device or article described herein can be manufactured, used, or shipped in a number of positions and orientations. However, one skilled in the relevant art will recognize that the various embodiments may be practiced without one or more of the specific details, or with other replacement and/or additional methods, materials, or components. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of various embodiments of the invention. Similarly, for purposes of explanation, specific numbers, materials, and configurations are set forth in order to provide a thorough understanding of the invention. Nevertheless, the invention may be practiced without specific details. Furthermore, it is understood that the various embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale. Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention, but does not denote that it is present in every embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily referring to the same embodiment of the invention. Furthermore, the particular features, structures, materials, or characteristics may be combined in any suitable manner in one or more embodiments. Various additional layers and/or structures may be included and/or described features may be omitted in other embodiments. Various operations will be described as multiple discrete operations in turn, in a manner that is most helpful in understanding the invention. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations need not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above teaching. Persons skilled in the art will recognize various equivalent combinations and substitutions for various components shown in the Figures. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
Systems and methods for multiple network access by mobile computing equipment are disclosed. In one embodiment, a plurality of baseband processor endpoints are coupled to a plurality of network access cards using a data bus such that each baseband processor endpoint may communicate with any network access card on the data bus. In an exemplary embodiment, modems and application processors operate as baseband processor endpoints, and the network access cards are subscriber interface module (SIM) cards or universal integrated circuit cards (UICCs). The bus interface may attach an address to data placed on the data bus, and may place the data on the data bus according to a time division multiplexing protocol. By allowing each baseband processor endpoint to use any network access card, the mobile computing equipment can use different networks for different purposes. Use of a single bus in this manner may allow greater scalability while also saving pin count, silicon area, board area, and power consumption within the computing equipment.
1. A computing system comprising: a plurality of baseband processor endpoints; and a communication interface configured to be coupled to a data bus and to allow serialized communication from each baseband processor endpoint of the plurality of baseband processor endpoints to any one of a plurality of network access cards.

2. The computing system of claim 1, wherein the plurality of baseband processor endpoints comprise modems.

3. The computing system of claim 1, wherein the plurality of baseband processor endpoints are distributed among multiple integrated circuits.

4. The computing system of claim 1, wherein the plurality of baseband processor endpoints are located in a single integrated circuit.

5. The computing system of claim 1, wherein at least one of the plurality of baseband processor endpoints comprises an application processor.

6. The computing system of claim 1, wherein the communication interface is configured to serialize data through a time division multiplexing (TDM) serialization process.

7. The computing system of claim 1, wherein the communication interface comprises a first pin and a second pin, the first pin configured to communicate a clock signal, and the second pin configured to communicate a data signal.

8. The computing system of claim 7, wherein the communication interface further comprises a power pin configured to communicate a power signal.

9. The computing system of claim 1, further comprising the data bus, wherein the data bus includes a clock channel and a data channel.

10. The computing system of claim 9, further comprising a plurality of network access card interfaces coupled to the data bus, each network access card interface configured to receive a network access card.

11. The computing system of claim 10, wherein at least one of the plurality of network access cards comprises a card selected from the group consisting of: a subscriber interface module (SIM) card, a universal integrated circuit card, and a virtual network access card.

12. The computing system of claim 1, wherein the communication interface includes a two-bit or more-bit encoder/decoder.

13. The computing system of claim 1, wherein the communication interface includes multiplexer/demultiplexer logic.

14. The computing system of claim 1, wherein the communication interface is configured to communicate through analog transmission and reception.

15. A computing system comprising: a plurality of network access card interfaces, each network access card interface configured to receive a physically removable network access card; a plurality of baseband processor endpoints; and a data bus including a data channel and a clock channel, the data bus coupled to each of the plurality of network access card interfaces and to the plurality of baseband processor endpoints, such that any baseband processor endpoint can communicate with a network access card located in any network access card interface among the plurality of network access card interfaces.

16. The computing system of claim 15, wherein the data bus further includes a power channel.

17. The computing system of claim 15, wherein at least one of the plurality of network access card interfaces is configured to accept a subscriber interface module (SIM) card.

18. The computing system of claim 15, wherein at least one of the plurality of network access card interfaces is configured to accept a universal integrated circuit card (UICC).

19. The computing system of claim 15, further comprising a virtual network access card coupled to the data bus.

20. The computing system of claim 15, wherein subsets of the plurality of baseband processor endpoints each include a corresponding bus interface.

21. The computing system of claim 15, wherein multiple subsets of the plurality of baseband processor endpoints share a bus interface.

22. A method of assembling a mobile terminal, comprising: providing a serial data bus; coupling a plurality of network access cards to the serial data bus; and coupling a plurality of baseband processor endpoints to the serial data bus, such that any baseband processor endpoint of the plurality of baseband processor endpoints can communicate with any of the plurality of network access cards.

23. The method of claim 22, wherein coupling the plurality of network access cards to the serial data bus comprises coupling a plurality of network access card interfaces to the serial data bus.

24. The method of claim 22, wherein coupling the plurality of baseband processor endpoints to the serial data bus comprises coupling a single bus interface to the serial data bus, the single bus interface shared among the plurality of baseband processor endpoints.

25. The method of claim 22, further comprising coupling a virtual network access card to the serial data bus.

26. The method of claim 22, further comprising providing a clock signal on the serial data bus.

27. The method of claim 22, further comprising providing power on the serial data bus.

28. A method of operating a computing system, comprising: allowing each of a plurality of baseband processor endpoints to communicate with each of a plurality of network access cards over a serial data bus.

29. The method of claim 28, wherein at least one of the plurality of network access cards comprises a subscriber interface module (SIM) card.
System and method for multiple network access by mobile computing equipment

This application is a divisional application of Chinese patent application No. 201580024308.2, entitled "System and Method for Multiple Network Access by Mobile Computing Devices," with a Chinese national filing date of May 14, 2015 (international application No. PCT/US2015/030711, international filing date May 14, 2015).

Priority claim

This application claims priority to U.S. Patent Application Serial No. 14/283,977, entitled "SYSTEMS AND METHODS FOR MULTIPLE NETWORK ACCESS BY MOBILE COMPUTING DEVICES (system and method for multiple network access by mobile computing devices)," filed on May 21, 2014, which is incorporated herein by reference in its entirety.

Background

I. Field of the Disclosure

The technology of the present disclosure generally relates to mobile computing devices and subscriber access cards that enable the mobile computing devices to interoperate with subscriber networks.

II. Background

Mobile computing devices have become more and more common in daily life. The ability to use devices such as mobile phones, tablets, laptops, and other small portable wireless communication devices to keep in touch with friends, family, colleagues, collaborators, and the like is considered to be of great value to users of such devices. In most cases, such users contract with a service provider and agree to a service contract, which provides the user with a subsidized mobile terminal and, through the mobile terminal, access to a wireless network maintained by the service provider. Other service providers offer pay-as-you-go contracts and the like.

To control access to the wireless network maintained by the service provider, the service provider may require the mobile terminal to have a credential, and use the credential to authenticate the mobile terminal to the wireless network. Such credentials can be stored in a secure format on a subscriber interface module (SIM) card or a universal integrated circuit card (UICC) received in the casing of the mobile terminal, and accessed by the control system of the mobile terminal as needed to pass the credentials to the wireless network. A UICC is generally a single card on which all SIM applications can be placed, including SIM (the original Global System for Mobile Communications (GSM) subscriber identity module), USIM (user SIM), CSIM (CDMA SIM), and RUIM (removable user identity module). Each of these SIM types is regarded as an application, whereby one or many of these SIM types can coexist on the physical UICC. Other systems, such as systems that rely on code division multiple access (CDMA) protocols, can use virtual network access cards to store such credentials.

Although many users may be content with having a single service provider, there may be instances where the user may need to access multiple service providers. In such cases, the user may need to have multiple SIM cards or UICCs so that the mobile terminal can authenticate with each service provider. Therefore, there is a need for a mobile terminal that can efficiently interact with multiple SIM cards and/or UICCs.

Summary of the Disclosure

The embodiments disclosed in this detailed description include systems and methods for multiple network access by mobile computing devices.
In various exemplary embodiments, a data bus is used to couple multiple baseband processor endpoints to multiple network access cards, so that each baseband processor endpoint can communicate with any network access card on the data bus. In an exemplary non-limiting embodiment, the baseband processor endpoint may be a modem, and the network access card may be a subscriber interface module (SIM) card or a universal integrated circuit card (UICC). By allowing each of the baseband processor endpoints to use any of the network access cards, the mobile computing device can use different networks for different purposes. In addition, the use of a single bus in this way can allow for greater scalability while also saving pin count, silicon area, board area, and power consumption in mobile computing devices. Such savings ultimately reduce equipment costs.

In this regard, in one embodiment, a computing system is disclosed. The computing system includes multiple baseband processor endpoints. The computing system also includes a communication interface configured to be coupled to the data bus and to allow serialized communication from each of the plurality of baseband processor endpoints to any one of a plurality of network access cards.

In another embodiment, a computing system is disclosed. The computing system includes a plurality of network access card interfaces, each network access card interface being configured to receive a physically removable network access card. The computing system also includes multiple baseband processor endpoints. The computing system further includes a data bus, the data bus including a data channel and a clock channel. The data bus is coupled to each of the plurality of network access card interfaces and to the plurality of baseband processor endpoints, so that any baseband processor endpoint can communicate with a network access card located in any of the plurality of network access card interfaces.

In another embodiment, a method of assembling a mobile terminal is disclosed. The method includes providing a serial data bus. The method also includes coupling a plurality of network access cards to the serial data bus. The method further includes coupling a plurality of baseband processor endpoints to the serial data bus, so that any baseband processor endpoint of the plurality of baseband processor endpoints can communicate with any of the plurality of network access cards.

In another embodiment, a method of operating a computing system is disclosed.
The method includes allowing each of the plurality of baseband processor endpoints to communicate with each of the plurality of network access cards on the serial data bus.

Brief description of the drawings

Figure 1 is a simplified illustration of an exemplary mobile terminal in multiple communication networks;

Figure 2 is a simplified block diagram of a transceiver and control system in a mobile terminal (such as the mobile terminal of Figure 1);

Figure 3 is a simplified block diagram of a first embodiment of a computing system having a serial data bus for coupling multiple network access cards to multiple distributed baseband processor endpoints;

Figure 4 is a simplified block diagram of a second embodiment of a computing system having a serial data bus for coupling multiple network access cards to multiple integrated baseband processor endpoints;

Figure 5 is a simplified cross-sectional view of an exemplary data bus such as can be used with the embodiments of Figures 3 and 4;

Figure 6 is a simplified flowchart illustrating the assembly of a data bus according to an exemplary embodiment of the present disclosure;

Figure 7 is a simplified flowchart illustrating the operation of a data bus according to an exemplary embodiment of the present disclosure; and

Figure 8 is a block diagram of an exemplary processor-based system that may include the data bus of Figures 3 and 4.

Detailed description

Referring now to the drawings, several exemplary embodiments of the present disclosure are described. The word "exemplary" is used herein to mean "serving as an example, instance, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.

The various embodiments disclosed in this detailed description include systems and methods for multiple network access by mobile computing devices. In various exemplary embodiments, a data bus is used to couple multiple baseband processor endpoints to multiple network access cards, so that each baseband processor endpoint can communicate with any network access card on the data bus. In an exemplary non-limiting embodiment, the baseband processor endpoint may be a modem, and the network access card may be a subscriber interface module (SIM) card or a universal integrated circuit card (UICC). By allowing each of the baseband processor endpoints to use any of the network access cards, the mobile computing device can use different networks for different purposes. In addition, the use of a single bus in this way can allow for greater scalability while also saving pin count, silicon area, board area, and power consumption in mobile computing devices. Such savings ultimately reduce equipment costs.

Before describing the exemplary embodiments of the present disclosure, additional background on the nature of SIM cards is provided. Although in some instances each SIM card usually operates with a single defined wireless provider, with the help of roaming or other agreements between providers it is possible to use SIM cards that support two or more wireless providers. This shared use can be configured in the SIM card.
Such an arrangement may sometimes be referred to as "multi-SIM" or "multi-SIM technology," which allows multiple SIM credentials to be aggregated onto one physical card.

As additional background, each SIM is usually provided with a unique international mobile subscriber identity (or IMSI), which uniquely identifies an identity among all operators throughout the world. Some SIMs can also be provided with multiple profiles or policies, each profile or policy distinguished by a unique IMSI. One such application is dual IMSI, which supports two subscriptions (for example, two different phone numbers) for business and personal needs.

In this regard, FIG. 1 is a simplified diagram of a communication environment 10 with a mobile terminal 12 operating within networks 14, 16. The networks 14, 16 may be wireless (e.g., cellular). The network 14 is formed by a first network provider 18 that operates one or more base stations 20 through a communication network 22. In an exemplary embodiment, the communication network 22 may be a part of, or include parts of, a public land mobile network (PLMN), a public switched telephone network (PSTN), and/or the Internet. The network 16 is formed by a second network provider 24 that operates one or more base stations 26 through a communication network 28. In an exemplary embodiment, the communication network 28 may be part of, or include parts of, a PLMN, a PSTN, and/or the Internet. In another exemplary embodiment, the network providers 18, 24 may be competitors. The mobile terminal 12, operating according to various exemplary embodiments of the present disclosure, can operate in both of the networks 14 and 16. Since the network providers 18, 24 are competitors, they usually have proprietary measures that exclude unauthorized use of the corresponding networks 14, 16. In an exemplary embodiment, the proprietary measure takes the form of a SIM card or UICC installed in the mobile terminal 12. When a mobile terminal 12 attempts to access a given network 14, 16, before providing access, the mobile terminal 12 may be required to provide credentials from the SIM card or UICC. Although there are exceptions such as emergency calls (for example, 911 calls), in general the mobile terminal 12 must have an appropriate network access card to access a network (such as the networks 14, 16 of FIG. 1).

The exemplary embodiments of the present disclosure provide systems and methods that simplify the coexistence of multiple network access cards in the mobile terminal 12 of FIG. 1 so that the mobile terminal 12 can readily operate with multiple proprietary networks (such as networks 14, 16). By providing access to multiple proprietary networks, the user can have greater flexibility in the use of the mobile terminal 12. For example, in an area where a first network (e.g., network 14) has poor coverage, the mobile terminal 12 may operate in a second network (e.g., network 16), and vice versa. Similarly, if the user has reached the upper limit of a data plan with the first network provider 18, the mobile terminal 12 can be used to access data from the second network provider 24. Other uses for multiple network access are also possible.

It is worth noting that conventional mobile terminals that operate without the benefits of the present disclosure may be permitted to access multiple networks by using dual SIM dual standby (DSDS) or dual SIM dual access (DSDA) implementations.
DSDS and DSDA provide a network access card coupled to the baseband processor of each network (i.e., one network access card is attached to one baseband processor, and the other network access card is attached to another baseband processor). The various embodiments of the present disclosure allow for the consolidation of the data link between the network access cards and the baseband processors. In addition, various embodiments of the present disclosure allow for greater flexibility and scalability by allowing multiple baseband processors to communicate with multiple network access cards instead of the one-to-one arrangement of DSDS and DSDA.

Figure 2 provides more details about some of the components within the mobile terminal 12 of Figure 1. In this regard, the mobile terminal 12 may include a receiver path 30, a transmitter path 32, an antenna 34, a switch 36, a baseband processor (BBP) 38, a control system 40, a frequency synthesizer (not illustrated), a user interface 44, and a memory 46 in which software 48 is stored.

The receiver path 30 receives information-carrying radio frequency (RF) signals from one or more remote transmitters provided by a base station (such as base station 20 in FIG. 1). A low noise amplifier (not shown) amplifies the signal. A filter (not shown) minimizes broadband interference in the received signal, while down-conversion and digitizing circuitry (not shown) down-converts the filtered received signal to an intermediate frequency signal or a baseband frequency signal, which is then digitized into one or more digital streams. The receiver path 30 typically uses one or more mixing frequencies generated by the frequency synthesizer. The BBP 38 processes the digitized received signal to extract the information or data bits conveyed in the signal. As such, the BBP 38 is usually implemented in one or more digital signal processors (DSPs).

Continuing to refer to FIG. 2, on the transmitting side, the BBP 38 receives digitized data from the control system 40, which can represent voice, data, or control information, and the BBP 38 encodes the digitized data for transmission. The encoded data is output to the transmitter path 32, where a modulator (not shown) uses it to modulate a carrier signal at the desired transmission frequency. An RF power amplifier (not shown) amplifies the modulated carrier signal to a level suitable for transmission, and delivers the amplified and modulated carrier signal to the antenna 34 through the switch 36. The receiver path 30, the transmitter path 32, and the frequency synthesizer can collectively be considered the transceiver 50.

With continued reference to FIG. 2, the user may interact with the mobile terminal 12 via the user interface 44 (such as through the microphone 52, the speaker 54, the keyboard 56, and/or the display 58). It should be noted that in some embodiments the keyboard 56 and the display 58 may be combined into a touch screen display. The audio information encoded in the received signal is recovered by the BBP 38 and converted into an analog signal suitable for driving the speaker 54. The keyboard 56 and the display 58 enable the user to interact with the mobile terminal 12. For example, the keyboard 56 and the display 58 may enable the user to enter a number to be dialed, access address book information or similar information, and monitor call progress information.
As mentioned above, the memory 46 may hold software 48 that can implement or facilitate the operation of the mobile terminal 12.

As mentioned, the exemplary embodiments of the present disclosure allow the mobile terminal 12 to communicate with more than one network 14, 16 by allowing the mobile terminal 12 to operate with multiple network access cards. Although it is of course possible to operate each network access card, for each network 14, 16 with which the mobile terminal 12 will operate (for example, DSDS or DSDA), with its own corresponding transceiver (for example, transceiver 50), such duplication consumes area in the mobile terminal 12, requires routing of many duplicate conductors, and is generally a waste of resources in the mobile terminal 12.

The exemplary embodiments of the present disclosure help reduce the duplicate conductors and waste cited above by consolidating communications to and from the network access cards onto a single data bus. Multiple network access cards allow the use of multiple networks, which, as explained above, is desirable flexibility in its own right. Similarly, each BBP endpoint can access the data bus. By connecting each BBP endpoint and each network access card to a single data bus, each BBP endpoint can communicate with each network access card. This arrangement provides the BBP endpoint with the flexibility to choose the network to which a voice/data call will be established, based on conditions in the device or on policies established in the device. This arrangement further allows for savings in pin count, silicon area, board area, power consumption, and cost in the circuitry within the mobile terminal 12. Similarly, if desired, this arrangement provides the ability to scale to almost any number of network access cards and allows the use of virtual network access cards. Such a virtual network access card can be used in a code division multiple access (CDMA) system (such as CDMA2000). In a CDMA system there is no specific requirement for a physical network access card, and the credentials are established through secure communication between the BBP endpoint and the mobile network operator (MNO). This process can result in a "security key" being maintained in the memory 46 or in the memory of the BBP 38. Still further, depending on the design criteria, this arrangement allows the interface for the network access card to be hosted in the application processor, the modem, or both. This improved flexibility is a benefit to the designer. Because the BBP or application processor can be physically located within the same integrated circuit (IC) or across multiple chips, further flexibility is achieved. This flexibility provides an advantage to the designer, as product capabilities can be determined late in the product development cycle to adapt to changing market conditions and requirements.

In this regard, FIG. 3 illustrates a first exemplary embodiment of a data bus 60 coupling a plurality of network access cards (NACs) 62 (and particularly NAC interfaces 63) to bus interfaces 64 (also referred to herein as communication interfaces). In the exemplary embodiment, the data bus 60 is a serial bus. As illustrated, a bus interface 64 may be located within the modem 66 or the application processor 68 (sometimes referred to as the host). The modem 66 and the application processor 68 may further include a BBP (not shown) that operates as a BBP endpoint.
Each bus interface 64 may include a multiplexer/demultiplexer (MUX/DEMUX) (not shown) for bus arbitration and may include voltage conversion logic as needed or desired. In addition, the bus interface 64 may include a serializer to serialize data before placing the data on the data bus 60, and a deserializer to deserialize data taken from the data bus 60. In an exemplary embodiment, the bus interface 64 may attach an address to the data placed on the data bus 60, and may place the data on the data bus 60 according to a time division multiplexing (TDM) protocol. The source address of the sending endpoint and one or more destination addresses can be placed in a protocol field before the payload message sent on the data bus 60. It should be appreciated that the NAC interface 63 is basically the same as the bus interface 64, but provides these functions for the NAC 62. Generally, data is transferred between one BBP (source address) and one NAC 62 (destination address) in a point-to-point message, but a broadcast or multicast message may involve one BBP and multiple NACs 62. Similarly, a data exchange can occur between two BBPs and two NACs 62. It can be appreciated that a data exchange can occur between any number of endpoints on the data bus 60. In an exemplary embodiment with one data line, only one message from source to destination is active on the data bus 60 at a time; when that message transfer is completed, the next message, which may have been held pending while the bus was occupied, is sent.

The NAC 62 can be a SIM card or a UICC as needed or desired. Similarly, a NAC 62 may be virtual, such as in the CDMA system described above. Although "virtual," such a virtual NAC can still be a physical endpoint that can communicate on the data bus 60 in the designed silicon. In another exemplary embodiment, a NAC 62 may be a multi-SIM card, such as discussed above. In this context, each NAC 62 is provided with a bus address on the data bus 60, and each "sub-SIM" is provided with a sub-address under that bus address. In another exemplary embodiment, the NAC 62 may be a soldered-in, non-slotted SIM card for specific applications in which environmental factors such as vibration and heat, and other factors such as theft prevention, militate against the use of connectors and removability (for example, in automobiles).

Similarly, although not illustrated, it should be appreciated that the NAC 62 can be a removable card that can be inserted into a NAC interface, where the NAC interface has appropriate conductors for interoperability with the NAC 62 and includes a slot sized to accommodate the removable NAC 62. In an exemplary embodiment, the NAC 62 may have a proprietary new form factor in addition to the SIM card and UICC form factors. It should be appreciated that each of the modem 66 and the application processor 68 may be implemented as a different and separate integrated circuit, or they may be separate components within a single integrated circuit. The NAC interface may similarly include a serializer and a deserializer to serialize the data placed on the data bus 60 and deserialize the data received from the data bus 60. As mentioned above, the NAC interface can be designed to operate according to the TDM protocol when sending data to and receiving data from the data bus 60. The NAC interface can further be designed to provide power to the NAC 62 on one or more discrete conductors. Alternatively, the NAC power can be provided separately from the bus interface. This "out-of-band" power can be derived from a separate power management chip.
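By way of illustration only, the following minimal sketch (in Python) models how a bus interface 64 might frame, serialize, and deserialize an addressed message of the kind described above. The field widths (8-bit addresses, a 16-bit payload length) and the framing order are assumptions made for the sketch, not details taken from this disclosure.

# Minimal sketch of an addressed, serialized bus message; field sizes
# here (8-bit addresses, 16-bit length) are illustrative assumptions.
def frame_message(source, destinations, payload):
    # Source address, destination count, destination addresses,
    # payload length, then the payload bytes.
    frame = bytes([source, len(destinations)]) + bytes(destinations)
    return frame + len(payload).to_bytes(2, "big") + payload

def serialize(frame):
    # Most-significant bit first, as a serializer clocking bits onto
    # the single data conductor might do.
    return [(byte >> shift) & 1 for byte in frame for shift in range(7, -1, -1)]

def deserialize(bits):
    # Reassemble bytes from the received bit stream.
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

def parse_frame(frame):
    # Recover source, destination list, and payload from a frame.
    source, n_dest = frame[0], frame[1]
    destinations = list(frame[2:2 + n_dest])
    length = int.from_bytes(frame[2 + n_dest:4 + n_dest], "big")
    return source, destinations, frame[4 + n_dest:4 + n_dest + length]

# Point-to-point message from a BBP endpoint (address 0x01) to one NAC
# (address 0x10); a broadcast would simply list several destinations.
bits = serialize(frame_message(0x01, [0x10], b"STATUS"))
assert parse_frame(deserialize(bits)) == (0x01, [0x10], b"STATUS")

In a TDM arrangement, each endpoint would clock such a bit stream onto the data conductor only during its assigned time slot, consistent with the one-message-at-a-time behavior described above for a single data line; the round trip of process 110 of FIG. 7, described below, is the same serialize, send, and deserialize sequence run in both directions.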
This "out-of-band" power can be derived from a separate power management chip.FIG. 4 illustrates a second exemplary embodiment of the data bus 60 coupling a plurality of NACs 62 to the mobile station modem 70. The mobile station modem 70 may include one or more modems 72, and an application processor 74 (sometimes referred to as a host). It should be appreciated that the modem 72 and the application processor 74 may each include a BBP and operate as a BBP endpoint. It should be appreciated that the mobile station modem 70 may be implemented as a single integrated circuit or may be part of a system on chip (SOC). As mentioned above, the data bus 60 may operate according to the TDM protocol, and each interface may include a serializer and a deserializer for data conversion from and to the data bus 60. Instead of the aforementioned TDM protocol, a frequency division multiplexing (FDM) protocol can be used in this embodiment, or in the embodiment of FIG. 3. In this FDM protocol, each master device is assigned a given frequency channel. Other protocols can also be used to avoid conflicts on the data bus 60.The data bus 60 is better illustrated in FIG. 5 as a cross-sectional view of a ribbon cable with two conductors 80, 82 and an optional conductor 84. The first conductor 80 is a data channel. The second conductor 82 is the clock channel. Optional conductor 84 is a power channel. Although illustrated as a ribbon cable, it should be appreciated that the conductors 80, 82, 84 may be wire traces or other arrangements on a printed circuit board without departing from the scope of this disclosure. As mentioned above, each endpoint of the data bus 60 uses a serializer and TDM protocol to place data on the data bus 60. The deserializer and the correct addressing scheme allow the destination endpoint to extract data from the data bus 60. Although not shown, the other conductor may be a ground conductor. Although not shown, the other conductor may be a second data conductor, and the data is considered to be transmitted independently on each data conductor, or the two conductors may be grouped to send a two-bit code in each clock cycle yuan. For bit-to-symbol conversion in the transmitter and symbol-to-bit conversion in the receiver, a symbol encoder and decoder will be required. In an exemplary embodiment, the clock channel on the second conductor 82 may carry data such as in the CCIe (Camera Control Interface Extension) protocol that has been presented to the Mobile Industry Processor Interface (MIPI) Alliance. In addition, although conceived as a digital interface, an analog interface can just be used in conjunction with power, ground, and analog conductors.For this description of the structure, a method of using various embodiments of the present disclosure is provided with reference to FIGS. 6 and 7. In this regard, FIG. 6 illustrates a process 90 for assembling the mobile terminal 12 (and especially for assembling the data bus 60 within the mobile terminal 12). The process 90 begins by providing the data bus 60 (block 92). The NAC interface is coupled to the data bus 60 (block 94). Insert the NAC 62(s) into the corresponding NAC interface slot (block 96). The BBP endpoint is then coupled to the data bus 60 (block 98). A clock signal is provided on the data bus 60 (block 100). The TDM data signal is then provided on the data bus 60 (block 102).FIG. 7 illustrates the process 110 of operating the computing system in the mobile terminal 12. 
Against this description of the structure, methods of using various embodiments of the present disclosure are provided with reference to FIGS. 6 and 7. In this regard, FIG. 6 illustrates a process 90 for assembling the mobile terminal 12 (and especially for assembling the data bus 60 within the mobile terminal 12). The process 90 begins by providing the data bus 60 (block 92). The NAC interfaces are coupled to the data bus 60 (block 94). The NAC(s) 62 are inserted into the corresponding NAC interface slots (block 96). The BBP endpoints are then coupled to the data bus 60 (block 98). A clock signal is provided on the data bus 60 (block 100). The TDM data signal is then provided on the data bus 60 (block 102).

FIG. 7 illustrates a process 110 of operating the computing system in the mobile terminal 12. The process 110 begins by serializing the data at the BBP endpoint (block 112). The serialized data is sent via the data bus 60 to any of the NACs 62 coupled to the data bus 60, using the appropriate address (block 114). The data is deserialized at the NAC 62 (block 116). The process 110 reverses the communication process by serializing the data at the NAC 62 (block 118) and sending the data on the data bus 60 with the address of the BBP endpoint (block 120). The data is then deserialized at the BBP endpoint (block 122).

In some cases, a wireless local area network (WLAN) may require credentials stored on a SIM card or UICC. The various embodiments of the present disclosure are easily adapted for use in such situations. That is, a NAC 62 with WLAN credentials can be coupled to the data bus 60. The WLAN modem may be coupled to the data bus 60. The WLAN modem may (or may not) include a BBP endpoint, but will be able to retrieve credentials from the NAC 62 across the data bus 60 and provide them to the WLAN router as needed. In a related exemplary embodiment, Near Field Communication (NFC) may use a secure element (SE), and the SE may be considered a type of modem and UICC.

It should be appreciated that most of the area savings are achieved through a single data bus 60, but the system can instantiate more than one bus, such as may be desired due to the complexity of routing traces on a printed circuit board.

The systems and methods for multiple network access by a mobile computing device according to the embodiments disclosed herein may be provided in or integrated into any processor-based device. Although they are most useful for mobile computing devices or mobile terminals, the present disclosure is not limited thereto. Accordingly, non-limiting examples of processor-based devices that can incorporate various embodiments of the present disclosure include a set-top box, an entertainment unit, a navigation device, a communications device, a fixed location data unit, a mobile location data unit, a mobile phone, a cellular phone, a computer, a portable computer, a desktop computer, a personal digital assistant (PDA), a monitor, a computer monitor, a television, a tuner, a radio, a satellite radio, a music player, a digital music player, a portable music player, a digital video player, a video player, a digital video disc (DVD) player, and a portable digital video player.

In this regard, FIG. 8 illustrates an example of a processor-based system 130 that can employ the data bus 60 with the BBP endpoints and the NACs 62 illustrated in FIGS. 3 and 4. In this example, the processor-based system 130 includes one or more central processing units (CPUs) 132, each of which includes one or more processors 134. The CPU(s) 132 may have a cache memory 136 coupled to the processor(s) 134 for fast access to temporarily stored data. The CPU 132 is coupled to a system bus 138. It should be noted that the system bus 138 is not the data bus 60 described above. As is well known, the CPU(s) 132 communicates with other devices by exchanging address, control, and data information on the system bus 138. For example, the CPU(s) 132 may communicate bus transaction requests to the memory system 140.

Other devices can be connected to the system bus 138. As illustrated in FIG. 8, as an example, these devices may include a memory system 140, one or more input devices 142, one or more output devices 144, one or more network interface devices 146, and one or more display controllers 148.
The input device(s) 142 may include any type of input device, including but not limited to input keys, switches, voice processors, and so on. The output device(s) 144 may include any type of output device, including but not limited to audio, video, other visual indicators, and so on. The network interface device(s) 146 may be any device configured to allow the exchange of data to and from a network 150. The network 150 may be any type of network, including but not limited to a wired or wireless network, a private or public network, a local area network (LAN), a wide area network (WAN), and the Internet. The network interface device(s) 146 may be configured to support any type of communications protocol desired.

The CPU(s) 132 may also be configured to access the display controller(s) 148 via the system bus 138 to control information sent to one or more displays 152. The display controller(s) 148 sends the information to be displayed to the display(s) 152 via one or more video processors 154, which process the information to be displayed into a format suitable for the display(s) 152. The display(s) 152 may include any type of display, including but not limited to a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, and the like.

Those skilled in the art will further appreciate that the various illustrative logic blocks, modules, circuits, and algorithms described in connection with the embodiments disclosed herein may be implemented as electronic hardware, as instructions stored in memory or in another computer-readable medium and executed by a processor or other processing device, or as a combination of both. As examples, the devices described herein may be employed in any circuit, hardware component, integrated circuit (IC), or IC chip. The memory disclosed herein may be any type and size of memory and may be configured to store any type of information required. To clearly illustrate this interchangeability, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. How such functionality is implemented depends on the particular application, design choices, and/or design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.

The various illustrative logic blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a processor, a DSP, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. The processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
The processor may also be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

The embodiments disclosed herein may be embodied in hardware and in instructions that are stored in hardware, and may reside, for example, in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a remote station. In the alternative, the processor and the storage medium may reside as discrete components in a remote station, base station, or server.

It is also noted that the operational steps described in any of the exemplary embodiments herein are described to provide examples and discussion. The operations described may be performed in numerous different sequences other than the illustrated sequences. Furthermore, operations described in a single operational step may actually be performed in a number of different steps. Additionally, one or more operational steps discussed in the exemplary embodiments may be combined. It is to be understood that the operational steps illustrated in the flowchart diagrams may be subject to numerous different modifications, as will be readily apparent to one of skill in the art. Those of skill in the art will also understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The present invention relates to techniques for preconfiguring an accelerator by predicting a bitstream. Technologies for preconfiguring an accelerator by predicting a bitstream include a communication circuit and a computing device. The computing device includes a compute engine to determine one or more bitstreams registered on each of a plurality of accelerators. The compute engine is further to predict a next job for which acceleration will be requested from an application of at least one compute sled of a plurality of compute sleds, predict, from a library of bitstreams, a bitstream for the next job whose accelerated execution is to be requested, and determine whether the predicted bitstream is already registered on one of the accelerators. In response to a determination that the predicted bitstream is not registered on one of the accelerators, the compute engine is to select, from the plurality of accelerators, an accelerator that satisfies the characteristics of the predicted bitstream and to register the predicted bitstream on the selected accelerator.
1. A non-transitory machine-readable storage medium comprising a plurality of instructions stored thereon that, when executed by a computing device, cause the computing device to: store at least one bitstream in an FPGA bitstream library for different field programmable gate arrays (FPGAs) in different network nodes; allocate a bitstream in the FPGA bitstream library to one of multiple FPGAs, wherein an FPGA can include multiple cores; trace the bitstream on the FPGA; and dynamically allocate at least one FPGA to help execute an application.

2. The non-transitory machine-readable storage medium of claim 1, wherein the plurality of instructions, when executed, further cause the computing device to offload tasks to the FPGA.

3. The non-transitory machine-readable storage medium of claim 1 or 2, wherein the plurality of instructions, when executed, further cause the computing device to support a cloud operating environment.

4. The non-transitory machine-readable storage medium of any one of claims 1 to 3, wherein one of the FPGAs is used to fetch a bitstream from the FPGA bitstream library.

5. The non-transitory machine-readable storage medium of any one of claims 1 to 4, wherein the plurality of instructions, when executed, further cause the computing device to use one or more secure operations for at least one bitstream.

6. The non-transitory machine-readable storage medium of any one of claims 1 to 5, wherein the plurality of instructions, when executed, further cause the computing device to store a timestamp for the bitstream in the FPGA bitstream library.

7. The non-transitory machine-readable storage medium of any one of claims 1 to 6, wherein the plurality of instructions, when executed, further cause the computing device to determine an available FPGA capable of performing a predicted next job, and configure the determined FPGA.

8. A method comprising: storing at least one bitstream in an FPGA bitstream library for different field programmable gate arrays (FPGAs) in different network nodes; allocating a bitstream in the FPGA bitstream library to one of multiple FPGAs, wherein an FPGA can include multiple cores; tracing the bitstream on the FPGA; and dynamically allocating at least one FPGA to help execute an application.

9. The method of claim 8, further comprising: offloading tasks to the FPGA.

10. The method of claim 8 or 9, further comprising: supporting a cloud operating environment.

11. The method of any one of claims 8 to 10, wherein one of the FPGAs fetches a bitstream from the FPGA bitstream library.

12. The method of any one of claims 8 to 11, further comprising: using one or more security operations on at least one bitstream.

13. The method of any one of claims 8 to 12, further comprising: storing a timestamp for the bitstream in the FPGA bitstream library.

14. The method of any one of claims 8 to 13, further comprising: identifying an available FPGA capable of performing a predicted next job; and configuring the identified FPGA.

15. A system comprising: at least one field programmable gate array (FPGA) for receiving a distribution of a bitstream from a remote orchestration server; and the remote orchestration server, for storing at least one bitstream in an FPGA bitstream library for different FPGAs in different network nodes, for distributing bitstreams in the FPGA bitstream library to one of multiple FPGAs, wherein an FPGA can include multiple cores, and for tracking the bitstream on the FPGA and dynamically allocating at least one FPGA to help execute applications.

16. The system of claim 15, wherein the remote orchestration server is used to offload tasks to the FPGA.

17. The system of claim 15 or 16, wherein the remote orchestration server is used to support a cloud operating environment.

18. The system of any one of claims 15 to 17, wherein the remote orchestration server is adapted to use one or more security operations for the at least one bitstream.

19. The system of any one of claims 15 to 18, wherein the remote orchestration server is adapted to store timestamps for the bitstreams in the FPGA bitstream library.

20. The system of any one of claims 15 to 19, wherein the remote orchestration server is used to determine an available FPGA capable of performing a predicted next job and configure the determined FPGA.

21. A rack comprising: two elongated support posts arranged vertically, the elongated support posts extending upward from a floor of a data center when deployed; and one or more horizontal pairs of elongated support arms configured to support sleds of the data center; wherein one elongated support arm of a pair of elongated support arms extends outwardly from one of the elongated support posts and the other elongated support arm extends outwardly from the other elongated support post.

22. An accelerator sled comprising accelerator circuits, communication circuits, memory devices, and an optical data connector, wherein the accelerator circuits, the communication circuits, and the optical data connector are mounted to a top side of a chassisless circuit board substrate; the memory devices of the accelerator sled are mounted to a bottom side of the chassisless circuit board substrate, and the memory devices are communicatively coupled to the accelerator circuits on the top side via an I/O subsystem; and each of the accelerator circuits includes a heat sink that is larger than conventional heat sinks used in servers.
Techniques for preconfiguring accelerators by predicting bitstreams

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of Indian Provisional Patent Application No. 201741030632, filed on August 30, 2017, and U.S. Provisional Patent Application No. 62/584,401, filed on November 10, 2017.

Background

Demand for accelerator devices has continued to increase as accelerator devices provide significantly greater processing capacity than general-purpose processors for various technical fields, such as machine learning and genomics. Accelerator devices (such as field programmable gate arrays (FPGAs), cryptographic accelerators, graphics accelerators, and/or compression accelerators, referred to herein as "accelerator devices," "accelerators," or "accelerator resources") may allow a specified amount of the shared resources of an accelerator device (e.g., high-bandwidth memory, data storage, etc.) to be statically allocated among different parts of the accelerator device's logic (e.g., circuitry) to perform corresponding operations in one or more workloads. When an accelerator device receives a request from a computing device for work to be accelerated, the accelerator device can obtain a bitstream that can perform the requested work and can register the bitstream on the accelerator device. However, the fetching and registration of the bitstream may consume a non-trivial amount of time, thereby delaying the acceleration of the work and reducing the performance benefits associated with using accelerator devices.

Description of the drawings

The concepts described herein are illustrated in the accompanying drawings by way of example and not by way of limitation. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. Where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.

FIG. 1 is a simplified diagram of at least one embodiment of a data center for executing workloads with disaggregated resources;

FIG. 2 is a simplified diagram of at least one embodiment of a pod of the data center of FIG. 1;

FIG. 3 is a perspective view of at least one embodiment of a rack that may be included in the pod of FIG. 2;

FIG. 4 is a side elevation view of the rack of FIG. 3;

FIG. 5 is a perspective view of the rack of FIG. 3 with sleds installed therein;

FIG. 6 is a simplified block diagram of at least one embodiment of the top side of the sled of FIG. 5;

FIG. 7 is a simplified block diagram of at least one embodiment of the bottom side of the sled of FIG. 6;

FIG. 8 is a simplified block diagram of at least one embodiment of a compute sled usable in the data center of FIG. 1;

FIG. 9 is a top perspective view of at least one embodiment of the compute sled of FIG. 8;

FIG. 10 is a simplified block diagram of at least one embodiment of an accelerator sled that may be used in the data center of FIG. 1;

FIG. 11 is a top perspective view of at least one embodiment of the accelerator sled of FIG. 10;

FIG. 12 is a simplified block diagram of at least one embodiment of a storage sled usable in the data center of FIG. 1;

FIG. 13 is a top perspective view of at least one embodiment of the storage sled of FIG. 12;

FIG. 14 is a simplified block diagram of at least one embodiment of a memory sled usable in the data center of FIG. 1;

FIG. 15 is a simplified block diagram of a system that may be established within the data center of FIG. 1 to execute workloads utilizing managed nodes composed of disaggregated resources;
FIG. 16 is a simplified block diagram of at least one embodiment of a system for predicting a bitstream to preconfigure an accelerator;

FIG. 17 is a simplified block diagram of the orchestrator server of FIG. 16;

FIG. 18 is a simplified block diagram of at least one embodiment of an environment that may be established by the orchestrator server of FIGS. 16 and 17; and

FIGS. 19-21 are simplified flow diagrams of at least one embodiment of a method for predicting a bitstream and pre-registering the predicted bitstream on an accelerator that may be performed by the orchestrator server of FIGS. 16-18.

Detailed description

While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof have been shown by way of example in the accompanying drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed, but on the contrary, the intention is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.

References in the specification to "one embodiment," "an embodiment," "an illustrative embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may or may not necessarily include that particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with one embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described. Additionally, it should be appreciated that an item included in a list in the form of "at least one of A, B, and C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C). Similarly, an item listed in the form of "at least one of A, B, or C" can mean (A); (B); (C); (A and B); (A and C); (B and C); or (A, B, and C).

The disclosed embodiments may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on a transitory or non-transitory machine-readable (e.g., computer-readable) storage medium, which may be read and executed by one or more processors. A machine-readable storage medium may be embodied as any storage device, mechanism, or other physical structure (e.g., a volatile or non-volatile memory, a media disc, or other media device).

In the drawings, some structural or method features may be shown in specific arrangements and/or orderings. However, it should be appreciated that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, such features may be arranged in a different manner and/or order than shown in the illustrative figures. Additionally, the inclusion of a structural or method feature in a particular figure is not meant to imply that such feature is required in all embodiments, and in some embodiments such feature may not be included or may be combined with other features.
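As a preview of the method of FIGS. 19-21, the following minimal sketch (in Python) illustrates one way the pre-configuration flow could look: predict the next job, look up a predicted bitstream in a bitstream library, and, if it is not already registered on an accelerator, select an accelerator whose characteristics satisfy the bitstream and register the bitstream there ahead of the request. The history-based predictor, the data structures, and the names used are illustrative assumptions, not the algorithm of this disclosure.

# Minimal sketch of predicting a bitstream and pre-registering it on an
# accelerator; the predictor and "characteristics" check are assumptions.
from collections import Counter

# Bitstream library: job type -> (bitstream id, required characteristics).
BITSTREAM_LIBRARY = {
    "compress": ("bs-compress", {"slots": 1, "memory_gb": 2}),
    "encrypt": ("bs-encrypt", {"slots": 1, "memory_gb": 1}),
}

class Accelerator:
    def __init__(self, name, slots, memory_gb):
        self.name, self.slots, self.memory_gb = name, slots, memory_gb
        self.registered = set()  # bitstream ids currently registered

    def satisfies(self, needs):
        return self.slots >= needs["slots"] and self.memory_gb >= needs["memory_gb"]

def predict_next_job(history, current):
    # Predict the job most often observed to follow the current job.
    followers = [history[i + 1] for i in range(len(history) - 1)
                 if history[i] == current]
    return Counter(followers).most_common(1)[0][0] if followers else None

def preconfigure(accelerators, history, current):
    job = predict_next_job(history, current)
    if job is None or job not in BITSTREAM_LIBRARY:
        return None
    bitstream, needs = BITSTREAM_LIBRARY[job]
    if any(bitstream in a.registered for a in accelerators):
        return None          # already registered; nothing to pre-configure
    for a in accelerators:
        if a.satisfies(needs):
            a.registered.add(bitstream)   # register ahead of the request
            return a.name
    return None

accels = [Accelerator("fpga-0", slots=1, memory_gb=1),
          Accelerator("fpga-1", slots=2, memory_gb=4)]
history = ["encrypt", "compress", "encrypt", "compress"]
assert preconfigure(accels, history, current="encrypt") == "fpga-1"

The point of registering the predicted bitstream before the request arrives is to hide the non-trivial fetch-and-registration latency described in the background above.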
Referring now to FIG. 1, a data center 100 in which disaggregated resources can cooperatively execute one or more workloads (e.g., applications on behalf of customers) includes multiple pods 110, 120, 130, 140, each of which includes one or more rows of racks. As described in greater detail herein, each rack houses multiple sleds, and each sled may be embodied as a computing device, such as a server, that is primarily equipped with a particular type of resource (e.g., memory devices, data storage devices, accelerator devices, or general-purpose processors). In the illustrative embodiment, the sleds in each pod 110, 120, 130, 140 are connected to multiple pod switches (e.g., switches that route data communications to and from the sleds within the pod). The pod switches, in turn, connect with backbone switches 150 that switch communications among pods (e.g., the pods 110, 120, 130, 140) in the data center 100. In some embodiments, the sleds may be connected to the fabric using Intel Omni-Path technology. As described in greater detail herein, resources within sleds in the data center 100 may be assigned to a group (referred to herein as a "managed node") containing resources from one or more other sleds to be collectively utilized in the execution of a workload. The workload can execute as if the resources belonging to the managed node were located on the same sled. The resources in a managed node may even belong to sleds in different racks, and even to different pods 110, 120, 130, 140. Some resources of a single sled may be allocated to one managed node while other resources of the same sled are allocated to a different managed node (e.g., one processor assigned to one managed node and another processor of the same sled assigned to a different managed node). By disaggregating resources into sleds comprised predominantly of a single type of resource (e.g., compute sleds comprising primarily compute resources, memory sleds comprising primarily memory resources), and selectively allocating and deallocating the disaggregated resources to form a managed node assigned to execute a workload, the data center 100 provides more efficient usage of resources than typical data centers comprised of hyperconverged servers containing compute, memory, storage, and perhaps additional resources. As such, the data center 100 may provide greater performance (e.g., throughput, operations per second, latency, etc.) than a typical data center that has the same number of resources.
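As an illustration of the composition just described (with hypothetical sled names and resource quantities; the disclosure does not prescribe any particular data structure), a managed node can be modeled as a grouping of resources drawn from sleds in different racks or pods, as in the following minimal Python sketch.

# Minimal sketch of composing a managed node from disaggregated
# resources on different sleds; names and quantities are hypothetical.
sleds = {
    "compute-sled-a": {"pod": 110, "processors": 2},
    "memory-sled-b": {"pod": 120, "memory_gb": 512},
    "accelerator-sled-c": {"pod": 130, "accelerators": 4},
}

def compose_managed_node(requirements):
    # Draw each required resource from whichever sled can supply it,
    # regardless of which rack or pod that sled occupies.
    node = {}
    for resource, amount in requirements.items():
        for name, sled in sleds.items():
            if sled.get(resource, 0) >= amount:
                node[resource] = (name, amount)
                break
    return node

node = compose_managed_node({"processors": 2, "memory_gb": 256, "accelerators": 2})
# The workload then executes as if the selected resources, drawn here
# from three different pods, were located on the same sled.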
Referring now to FIG. 2, in the illustrative embodiment, the pod 110 includes a set of rows 200, 210, 220, 230 of racks 240. Each rack 240 may house multiple sleds (e.g., sixteen sleds) and provide power and data connections to the housed sleds, as described in more detail herein. In the illustrative embodiment, the racks in each row 200, 210, 220, 230 are connected to multiple pod switches 250, 260. The pod switch 250 includes a set of ports 252 to which the sleds of the racks of the pod 110 are connected and another set of ports 254 that connect the pod 110 to the backbone switches 150 to provide connectivity to other pods in the data center 100. Similarly, the pod switch 260 includes a set of ports 262 to which the sleds of the racks of the pod 110 are connected and a set of ports 264 that connect the pod 110 to the backbone switches 150. As such, the use of the pair of switches 250, 260 provides an amount of redundancy to the pod 110. For example, if either of the switches 250, 260 fails, the sleds in the pod 110 may still maintain data communication with the remainder of the data center 100 (e.g., sleds of other pods) through the other switch 250, 260. Furthermore, in the illustrative embodiment, the switches 150, 250, 260 may be implemented as dual-mode optical switches, capable of routing both Ethernet protocol communications carrying Internet Protocol (IP) packets and communications according to a second, high-performance link-layer protocol (e.g., Intel's Omni-Path Architecture, Infiniband) via optical signaling media of an optical fabric.
It should be appreciated that each of the other pods 120, 130, 140 (as well as any additional pods of the data center 100) may be similarly structured as the pod 110 shown in and described in regard to FIG. 2 (e.g., each pod may have rows of racks housing multiple sleds as described above). Additionally, while two pod switches 250, 260 are shown, it should be understood that, in other embodiments, each pod 110, 120, 130, 140 may be connected to a different number of pod switches (e.g., providing even more fail-over capacity).
Referring now to FIGS. 3-5, each illustrative rack 240 of the data center 100 includes two elongated support posts 302, 304, which are arranged vertically. For example, the elongated support posts 302, 304 may extend upwardly from a floor of the data center 100 when deployed. The rack 240 also includes one or more horizontal pairs 310 of elongated support arms 312 (identified in FIG. 3 via dashed ovals) configured to support the sleds of the data center 100, as discussed below. One elongated support arm 312 of the pair of elongated support arms 312 extends outwardly from the elongated support post 302 and the other elongated support arm 312 extends outwardly from the elongated support post 304.
In the illustrative embodiment, each sled of the data center 100 is implemented as a chassisless sled. That is, each sled has a chassisless circuit board substrate on which the physical resources (e.g., processors, memory, accelerators, storage devices, etc.) are mounted, as discussed in more detail below. As such, the rack 240 is configured to receive the chassisless sleds. For example, each pair 310 of elongated support arms 312 defines a sled slot 320 of the rack 240, which is configured to receive a corresponding chassisless sled. To do so, each illustrative elongated support arm 312 includes a circuit board guide 330 configured to receive the chassisless circuit board substrate of the sled. Each circuit board guide 330 is secured to, or otherwise mounted to, a top side 332 of the corresponding elongated support arm 312. For example, in the illustrative embodiment, each circuit board guide 330 is mounted at a distal end of the corresponding elongated support arm 312 relative to the corresponding elongated support post 302, 304. For clarity of the figures, not every circuit board guide 330 may be referenced in each figure.
Each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 configured to receive the chassisless circuit board substrate of a sled 400 when the sled 400 is received in the corresponding sled slot 320 of the rack 240. To do so, as shown in FIG. 4, a user (or robot) aligns the chassisless circuit board substrate of an illustrative chassisless sled 400 to the sled slot 320.
The user or robot may then slide the chassisless circuit board substrate forward into the sled slot 320 such that each side edge 414 of the chassisless circuit board substrate is received in a corresponding circuit board slot 380 of the circuit board guides 330 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320, as shown in FIG. 4. By having robotically accessible and robotically manipulable sleds comprising disaggregated resources, each type of resource can be upgraded independently of each other and at their own optimized refresh rate. Furthermore, the sleds are configured to blindly mate with power and data communication cables in each rack 240, enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. As such, in some embodiments, the data center 100 may operate (e.g., execute workloads, undergo maintenance and/or upgrades, etc.) without human involvement on the data center floor. In other embodiments, a human may facilitate one or more maintenance or upgrade operations in the data center 100.
It should be appreciated that each circuit board guide 330 is dual sided. That is, each circuit board guide 330 includes an inner wall that defines a circuit board slot 380 on each side of the circuit board guide 330. In this way, each circuit board guide 330 can support a chassisless circuit board substrate on either side. As such, a single additional elongated support post may be added to the rack 240 to turn the rack 240 into a two-rack solution that can hold twice as many sled slots 320 as shown in FIG. 3. The illustrative rack 240 includes seven pairs 310 of elongated support arms 312 that define corresponding seven sled slots 320, each configured to receive and support a corresponding sled 400 as discussed above. Of course, in other embodiments, the rack 240 may include additional or fewer pairs 310 of elongated support arms 312 (i.e., additional or fewer sled slots 320). It should be appreciated that because the sled 400 is chassisless, the sled 400 may have an overall height that is different than typical servers. As such, in some embodiments, the height of each sled slot 320 may be shorter than the height of a typical server (e.g., shorter than a single rack unit, "1U"). That is, the vertical distance between each pair 310 of elongated support arms 312 may be less than a standard rack unit "1U". Additionally, due to the relative decrease in height of the sled slots 320, the overall height of the rack 240 in some embodiments may be shorter than the height of traditional rack enclosures. For example, in some embodiments, each of the elongated support posts 302, 304 may have a length of six feet or less. Again, in other embodiments, the rack 240 may have different dimensions. Further, it should be appreciated that the rack 240 does not include any walls, enclosures, or the like. Rather, the rack 240 is an enclosure-less rack that is opened to the local environment. Of course, in some cases, an end plate may be attached to one of the elongated support posts 302, 304 in those situations in which the rack 240 forms an end-of-row rack in the data center 100.
In some embodiments, various interconnects may be routed upwardly or downwardly through the elongated support posts 302, 304. To facilitate such routing, each elongated support post 302, 304 includes an inner wall that defines an interior cavity in which the interconnects may be located.
The interconnects routed through the elongated support posts 302, 304 may be implemented as any type of interconnects including, but not limited to, data or communication interconnects to provide communication connections to each sled slot 320, power interconnects to provide power to each sled slot 320, and/or other types of interconnects.
In the illustrative embodiment, the rack 240 includes a support platform on which a corresponding optical data connector (not shown) is mounted. Each optical data connector is associated with a corresponding sled slot 320 and is configured to mate with an optical data connector of a corresponding sled 400 when the sled 400 is received in the corresponding sled slot 320. In some embodiments, optical connections between components (e.g., sleds, racks, and switches) in the data center 100 are made with a blind-mate optical connection. For example, a door on each cable may prevent dust from contaminating the fiber inside the cable. In the process of connecting to the blind-mate optical connector mechanism, the door is pushed open when the end of the cable enters the connector mechanism. Subsequently, the optical fiber inside the cable enters a gel within the connector mechanism, and the optical fiber of one cable comes into contact with the optical fiber of another cable within the gel inside the connector mechanism.
The illustrative rack 240 also includes a fan array 370 coupled to the cross-support arms of the rack 240. The fan array 370 includes one or more rows of cooling fans 372, which are aligned in a horizontal line between the elongated support posts 302, 304. In the illustrative embodiment, the fan array 370 includes a row of cooling fans 372 for each sled slot 320 of the rack 240. As discussed above, each sled 400 does not include any on-board cooling system in the illustrative embodiment and, as such, the fan array 370 provides cooling for each sled 400 received in the rack 240. In the illustrative embodiment, each rack 240 also includes a power supply associated with each sled slot 320. Each power supply is secured to one of the elongated support arms 312 of the pair 310 of elongated support arms 312 that define the corresponding sled slot 320. For example, the rack 240 may include a power supply coupled or secured to each elongated support arm 312 extending from the elongated support post 302. Each power supply includes a power connector configured to mate with a power connector of the sled 400 when the sled 400 is received in the corresponding sled slot 320. In the illustrative embodiment, the sled 400 does not include any on-board power supply and, as such, the power supplies provided in the rack 240 supply power to corresponding sleds 400 when mounted in the rack 240.
Referring now to FIG. 6, the sled 400, in the illustrative embodiment, is configured to be mounted in a corresponding rack 240 of the data center 100 as discussed above. In some embodiments, each sled 400 may be optimized or otherwise configured for performing particular tasks, such as compute tasks, acceleration tasks, data storage tasks, etc. For example, the sled 400 may be implemented as a compute sled 800 as discussed below in regard to FIGS. 8-9, an accelerator sled 1000 as discussed below in regard to FIGS. 10-11, a storage sled 1200 as discussed below in regard to FIGS. 12-13, or as a sled optimized or otherwise configured to perform other specialized tasks, such as a memory sled 1400, discussed below in regard to FIG. 14.
As discussed above, the illustrative sled 400 includes a chassisless circuit board substrate 602, which supports various physical resources (e.g., electrical components) mounted thereon. It should be appreciated that the circuit board substrate 602 is "chassisless" in that the sled 400 does not include a housing or enclosure. Rather, the chassisless circuit board substrate 602 is open to the local environment. The chassisless circuit board substrate 602 may be formed from any material capable of supporting the various electrical components mounted thereon. For example, in the illustrative embodiment, the chassisless circuit board substrate 602 is formed from an FR-4 glass-reinforced epoxy laminate material. Of course, other materials may be used to form the chassisless circuit board substrate 602 in other embodiments.
As discussed in more detail below, the chassisless circuit board substrate 602 includes multiple features that improve the thermal cooling characteristics of the various electrical components mounted on the chassisless circuit board substrate 602. As discussed, the chassisless circuit board substrate 602 does not include a housing or enclosure, which may improve the airflow over the electrical components of the sled 400 by reducing those structures that may inhibit air flow. For example, because the chassisless circuit board substrate 602 is not positioned in an individual housing or enclosure, there is no backplane (e.g., a backplate of a chassis) to the chassisless circuit board substrate 602, which could inhibit air flow across the electrical components. Additionally, the chassisless circuit board substrate 602 has a geometric shape configured to reduce the length of the airflow path across the electrical components mounted to the chassisless circuit board substrate 602. For example, the illustrative chassisless circuit board substrate 602 has a width 604 that is greater than a depth 606 of the chassisless circuit board substrate 602. In one particular embodiment, for example, the chassisless circuit board substrate 602 has a width of about 21 inches and a depth of about 9 inches, compared to a typical server that has a width of about 17 inches and a depth of about 39 inches. As such, an airflow path 608 that extends from a front edge 610 of the chassisless circuit board substrate 602 toward a rear edge 612 has a shorter distance relative to typical servers, which may improve the thermal cooling characteristics of the sled 400. Furthermore, although not illustrated in FIG. 6, the various physical resources mounted to the chassisless circuit board substrate 602 are mounted in corresponding locations such that no two substantively heat-producing electrical components shield each other, as discussed in more detail below. That is, no two electrical components that produce appreciable heat during operation (i.e., greater than a nominal heat sufficient to adversely impact the cooling of another electrical component) are mounted to the chassisless circuit board substrate 602 linearly in-line with each other along the direction of the airflow path 608 (i.e., along a direction extending from the front edge 610 toward the rear edge 612 of the chassisless circuit board substrate 602).
As discussed above, the illustrative sled 400 includes one or more physical resources 620 mounted to a top side 650 of the chassisless circuit board substrate 602.
Although two physical resources 620 are shown in FIG. 6, it should be appreciated that the sled 400 may include one, two, or more physical resources 620 in other embodiments. The physical resources 620 may be implemented as any type of processor, controller, or other compute circuit capable of performing various tasks such as compute functions and/or controlling the functions of the sled 400 depending on, for example, the type or intended functionality of the sled 400. For example, as discussed in more detail below, the physical resources 620 may be implemented as high-performance processors in embodiments in which the sled 400 is implemented as a compute sled, as accelerator co-processors or circuits in embodiments in which the sled 400 is implemented as an accelerator sled, as storage controllers in embodiments in which the sled 400 is implemented as a storage sled, or as a set of memory devices in embodiments in which the sled 400 is implemented as a memory sled.
The sled 400 also includes one or more additional physical resources 630 mounted to the top side 650 of the chassisless circuit board substrate 602. In the illustrative embodiment, the additional physical resources include a network interface controller (NIC) as discussed in more detail below. Of course, depending on the type and functionality of the sled 400, the physical resources 630 may include additional or other electrical components, circuits, and/or devices in other embodiments.
The physical resources 620 are communicatively coupled to the physical resources 630 via an input/output (I/O) subsystem 622. The I/O subsystem 622 may be implemented as circuitry and/or components to facilitate input/output operations with the physical resources 620, the physical resources 630, and/or other components of the sled 400. For example, the I/O subsystem 622 may be implemented as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In the illustrative embodiment, the I/O subsystem 622 is implemented as, or otherwise includes, a double data rate 4 (DDR4) data bus or a DDR5 data bus.
In some embodiments, the sled 400 may also include a resource-to-resource interconnect 624. The resource-to-resource interconnect 624 may be implemented as any type of communication interconnect capable of facilitating resource-to-resource communications. In the illustrative embodiment, the resource-to-resource interconnect 624 is implemented as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the resource-to-resource interconnect 624 may be implemented as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to resource-to-resource communications.
The sled 400 also includes a power connector 640 configured to mate with a corresponding power connector of the rack 240 when the sled 400 is mounted in the corresponding rack 240. The sled 400 receives power from a power supply of the rack 240 via the power connector 640 to supply power to the various electrical components of the sled 400. That is, the sled 400 does not include any local power supply (i.e., an on-board power supply) to provide power to the electrical components of the sled 400.
The exclusion of a local or on-board power supply facilitates the reduction in the overall footprint of the chassisless circuit board substrate 602, which may increase the thermal cooling characteristics of the various electrical components mounted on the chassisless circuit board substrate 602, as discussed above. In some embodiments, power is provided to the processors 820 through vias directly under the processors 820 (e.g., through the bottom side 750 of the chassisless circuit board substrate 602), providing an increased thermal budget, additional current and/or voltage, and better voltage control over typical boards.
In some embodiments, the sled 400 may also include mounting features 642 configured to mate with a mounting arm, or other structure, of a robot to facilitate the placement of the sled 400 in a rack 240 by the robot. The mounting features 642 may be implemented as any type of physical structures that allow the robot to grasp the sled 400 without damaging the chassisless circuit board substrate 602 or the electrical components mounted thereto. For example, in some embodiments, the mounting features 642 may be implemented as non-conductive pads attached to the chassisless circuit board substrate 602. In other embodiments, the mounting features may be implemented as brackets, braces, or other similar structures attached to the chassisless circuit board substrate 602. The particular number, shape, size, and/or makeup of the mounting features 642 may depend on the design of the robot configured to manage the sled 400.
Referring now to FIG. 7, in addition to the physical resources 630 mounted to the top side 650 of the chassisless circuit board substrate 602, the sled 400 also includes one or more memory devices 720 mounted to a bottom side 750 of the chassisless circuit board substrate 602. That is, the chassisless circuit board substrate 602 is implemented as a double-sided circuit board. The physical resources 620 are communicatively coupled to the memory devices 720 via the I/O subsystem 622. For example, the physical resources 620 and the memory devices 720 may be communicatively coupled by one or more vias extending through the chassisless circuit board substrate 602. Each physical resource 620 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each physical resource 620 may be communicatively coupled to each memory device 720.
The memory devices 720 may be implemented as any type of memory device capable of storing data for the physical resources 620 during operation of the sled 400, such as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as dynamic random access memory (DRAM) or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org).
Such standards (and similar standards) may be referred to as DDR-based standards, and communication interfaces of memory devices that implement such standards may be referred to as DDR-based interfaces.
In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include next-generation non-volatile devices, such as Intel 3D XPoint™ memory or other byte addressable write-in-place non-volatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single or multi-level phase change memory (PCM), a resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including a metal oxide base and an oxygen vacancy base and conductive bridge random access memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (domain wall) and SOT (spin orbit transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. A memory device may refer to the die itself and/or to a packaged memory product. In some embodiments, the memory device may include a transistor-less stackable cross point architecture in which memory cells sit at the intersection of word lines and bit lines and are individually addressable, and in which bit storage is based on a change in bulk resistance.
Referring now to FIG. 8, in some embodiments, the sled 400 may be implemented as a compute sled 800. The compute sled 800 is optimized, or otherwise configured, to perform compute tasks. Of course, as discussed above, the compute sled 800 may rely on other sleds, such as acceleration sleds and/or storage sleds, to perform such compute tasks. The compute sled 800 includes various physical resources (e.g., electrical components) similar to the physical resources of the sled 400, which have been identified in FIG. 8 using the same reference numbers. The description of such components provided above in regard to FIGS. 6 and 7 applies to the corresponding components of the compute sled 800 and is not repeated herein for clarity of the description of the compute sled 800.
In the illustrative compute sled 800, the physical resources 620 are implemented as processors 820. Although only two processors 820 are shown in FIG. 8, it should be appreciated that the compute sled 800 may include additional processors 820 in other embodiments. Illustratively, the processors 820 are implemented as high-performance processors 820 and may be configured to operate at a relatively high power rating. Although the processors 820 generate additional heat when operating at power ratings greater than typical processors (which operate at around 155-230 W), the enhanced thermal cooling characteristics of the chassisless circuit board substrate 602 discussed above facilitate the higher power operation. For example, in the illustrative embodiment, the processors 820 are configured to operate at a power rating of at least 250 W. In some embodiments, the processors 820 may be configured to operate at a power rating of at least 350 W.
In some embodiments, the compute sled 800 may also include a processor-to-processor interconnect 842.
Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the processor-to-processor interconnect 842 may be implemented as any type of communication interconnect capable of facilitating processor-to-processor communications. In the illustrative embodiment, the processor-to-processor interconnect 842 is implemented as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the processor-to-processor interconnect 842 may be implemented as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
The compute sled 800 also includes a communication circuit 830. The illustrative communication circuit 830 includes a network interface controller (NIC) 832, which may also be referred to as a host fabric interface (HFI). The NIC 832 may be implemented as, or otherwise include, any type of integrated circuit, discrete circuits, controller chips, chipsets, interposers, daughtercards, network interface cards, or other devices that may be used by the compute sled 800 to connect with another computing device (e.g., with other sleds 400). In some embodiments, the NIC 832 may be implemented as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 832 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 832. In such embodiments, the local processor of the NIC 832 may be capable of performing one or more of the functions of the processors 820. Additionally or alternatively, in such embodiments, the local memory of the NIC 832 may be integrated into one or more components of the compute sled at the board level, socket level, chip level, and/or other levels.
The communication circuit 830 is communicatively coupled to an optical data connector 834. The optical data connector 834 is configured to mate with a corresponding optical data connector of the rack 240 when the compute sled 800 is mounted in the rack 240. Illustratively, the optical data connector 834 includes a plurality of optical fibers which lead from a mating surface of the optical data connector 834 to an optical transceiver 836. The optical transceiver 836 is configured to convert incoming optical signals from the rack-side optical data connector to electrical signals and to convert electrical signals to outgoing optical signals to the rack-side optical data connector. Although shown as forming part of the optical data connector 834 in the illustrative embodiment, the optical transceiver 836 may form a portion of the communication circuit 830 in other embodiments.
In some embodiments, the compute sled 800 may also include an expansion connector 840. In such embodiments, the expansion connector 840 is configured to mate with a corresponding connector of an expansion chassisless circuit board substrate to provide additional physical resources to the compute sled 800. The additional physical resources may be used, for example, by the processors 820 during operation of the compute sled 800. The expansion chassisless circuit board substrate may be substantially similar to the chassisless circuit board substrate 602 discussed above and may include various electrical components mounted thereto. The particular electrical components mounted to the expansion chassisless circuit board substrate may depend on the intended functionality of the expansion chassisless circuit board substrate.
For example, the expansion chassisless circuit board substrate may provide additional compute resources, memory resources, and/or storage resources. As such, the additional physical resources of the expansion chassisless circuit board substrate may include, but are not limited to, processors, memory devices, storage devices, and/or accelerator circuits including, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.
Referring now to FIG. 9, an illustrative embodiment of the compute sled 800 is shown. As shown, the processors 820, the communication circuit 830, and the optical data connector 834 are mounted to the top side 650 of the chassisless circuit board substrate 602. Any suitable attachment or mounting technology may be used to mount the physical resources of the compute sled 800 to the chassisless circuit board substrate 602. For example, the various physical resources may be mounted in corresponding sockets (e.g., processor sockets), holders, or brackets. In some cases, some of the electrical components may be directly mounted to the chassisless circuit board substrate 602 via soldering or similar techniques.
As discussed above, the individual processors 820 and the communication circuit 830 are mounted to the top side 650 of the chassisless circuit board substrate 602 such that no two heat-producing electrical components shield each other. In the illustrative embodiment, the processors 820 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassisless circuit board substrate 602 such that no two of those physical resources are linearly in-line with other physical resources along the direction of the airflow path 608. It should be appreciated that, although the optical data connector 834 is in-line with the communication circuit 830, the optical data connector 834 produces no or nominal heat during operation.
The memory devices 720 of the compute sled 800 are mounted to the bottom side 750 of the chassisless circuit board substrate 602, as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the processors 820 located on the top side 650 via the I/O subsystem 622. Because the chassisless circuit board substrate 602 is implemented as a double-sided circuit board, the memory devices 720 and the processors 820 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassisless circuit board substrate 602. Of course, each processor 820 may be communicatively coupled to a different set of one or more memory devices 720 in some embodiments. Alternatively, in other embodiments, each processor 820 may be communicatively coupled to each memory device 720. In some embodiments, the memory devices 720 may be mounted to one or more memory mezzanines on the bottom side of the chassisless circuit board substrate 602 and may interconnect with a corresponding processor 820 through a ball-grid array.
Each of the processors 820 includes a heat sink 850 secured thereto.
Due to the mounting of the memory devices 720 to the bottom side 750 of the chassisless circuit board substrate 602 (as well as the vertical spacing of the sleds 400 in the corresponding rack 240), the top side 650 of the chassisless circuit board substrate 602 includes additional "free" area or space that facilitates the use of heat sinks 850 having a larger size relative to traditional heat sinks used in typical servers. Additionally, due to the improved thermal cooling characteristics of the chassisless circuit board substrate 602, none of the processor heat sinks 850 include cooling fans attached thereto. That is, each of the heat sinks 850 is implemented as a fan-less heat sink.
Referring now to FIG. 10, in some embodiments, the sled 400 may be implemented as an accelerator sled 1000. The accelerator sled 1000 is optimized, or otherwise configured, to perform specialized compute tasks, such as machine learning, encryption, hashing, or other computational-intensive tasks. In some embodiments, for example, a compute sled 800 may offload tasks to the accelerator sled 1000 during operation. The accelerator sled 1000 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 10 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the accelerator sled 1000 and is not repeated herein for clarity of the description of the accelerator sled 1000.
In the illustrative accelerator sled 1000, the physical resources 620 are implemented as accelerator circuits 1020. Although only two accelerator circuits 1020 are shown in FIG. 10, it should be appreciated that the accelerator sled 1000 may include additional accelerator circuits 1020 in other embodiments. For example, as shown in FIG. 11, the accelerator sled 1000 may include four accelerator circuits 1020 in some embodiments. The accelerator circuits 1020 may be implemented as any type of processor, co-processor, compute circuit, or other device capable of performing compute or processing operations. For example, the accelerator circuits 1020 may be implemented as, for example, field programmable gate arrays (FPGA), application-specific integrated circuits (ASICs), security co-processors, graphics processing units (GPUs), machine learning circuits, or other specialized processors, controllers, devices, and/or circuits.
In some embodiments, the accelerator sled 1000 may also include an accelerator-to-accelerator interconnect 1042. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the accelerator-to-accelerator interconnect 1042 may be implemented as any type of communication interconnect capable of facilitating accelerator-to-accelerator communications. In the illustrative embodiment, the accelerator-to-accelerator interconnect 1042 is implemented as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the accelerator-to-accelerator interconnect 1042 may be implemented as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
In some embodiments, the accelerator circuits 1020 may be daisy-chained, with a primary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the I/O subsystem 622 and a secondary accelerator circuit 1020 connected to the NIC 832 and memory 720 through the primary accelerator circuit 1020.
Referring now to FIG. 11, an illustrative embodiment of the accelerator sled 1000 is shown. As discussed above, the accelerator circuits 1020, the communication circuit 830, and the optical data connector 834 are mounted to the top side 650 of the chassisless circuit board substrate 602. Again, the individual accelerator circuits 1020 and the communication circuit 830 are mounted to the top side 650 of the chassisless circuit board substrate 602 such that no two heat-producing electrical components shield each other, as discussed above. The memory devices 720 of the accelerator sled 1000 are mounted to the bottom side 750 of the chassisless circuit board substrate 602, as discussed above in regard to the sled 400. Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the accelerator circuits 1020 located on the top side 650 via the I/O subsystem 622 (e.g., through vias). Further, each of the accelerator circuits 1020 may include a heat sink 1070 that is larger than a traditional heat sink used in a server. As discussed above with reference to the heat sinks 850, the heat sinks 1070 may be larger than traditional heat sinks because of the "free" area provided by the memory devices 720 being located on the bottom side 750 of the chassisless circuit board substrate 602 rather than on the top side 650.
Referring now to FIG. 12, in some embodiments, the sled 400 may be implemented as a storage sled 1200. The storage sled 1200 is optimized, or otherwise configured, to store data in a data storage 1250 local to the storage sled 1200. For example, during operation, a compute sled 800 or an accelerator sled 1000 may store and retrieve data from the data storage 1250 of the storage sled 1200. The storage sled 1200 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 12 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the storage sled 1200 and is not repeated herein for clarity of the description of the storage sled 1200.
In the illustrative storage sled 1200, the physical resources 620 are implemented as storage controllers 1220. Although only two storage controllers 1220 are shown in FIG. 12, it should be appreciated that the storage sled 1200 may include additional storage controllers 1220 in other embodiments. The storage controllers 1220 may be implemented as any type of processor, controller, or control circuit capable of controlling the storage and retrieval of data into the data storage 1250 based on requests received via the communication circuit 830. In the illustrative embodiment, the storage controllers 1220 are implemented as relatively low-power processors or controllers. For example, in some embodiments, the storage controllers 1220 may be configured to operate at a power rating of about 75 watts.
In some embodiments, the storage sled 1200 may also include a controller-to-controller interconnect 1242. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1242 may be implemented as any type of communication interconnect capable of facilitating controller-to-controller communications.
In the illustrative embodiment, the controller-to-controller interconnect 1242 is implemented as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622). For example, the controller-to-controller interconnect 1242 may be implemented as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications.
Referring now to FIG. 13, an illustrative embodiment of the storage sled 1200 is shown. In the illustrative embodiment, the data storage 1250 is implemented as, or otherwise includes, a storage cage 1252 configured to house one or more solid state drives (SSDs) 1254. To do so, the storage cage 1252 includes a number of mounting slots 1256, each of which is configured to receive a corresponding solid state drive 1254. Each of the mounting slots 1256 includes a number of drive guides 1258 that cooperate to define an access opening 1260 of the corresponding mounting slot 1256. The storage cage 1252 is secured to the chassisless circuit board substrate 602 such that the access openings face away from (i.e., toward the front of) the chassisless circuit board substrate 602. As such, the solid state drives 1254 are accessible while the storage sled 1200 is mounted in a corresponding rack 240. For example, a solid state drive 1254 may be swapped out (e.g., via a robot) while the storage sled 1200 remains mounted in the corresponding rack 240.
The storage cage 1252 illustratively includes sixteen mounting slots 1256 and is capable of mounting and storing sixteen solid state drives 1254. Of course, the storage cage 1252 may be configured to store additional or fewer solid state drives 1254 in other embodiments. Additionally, in the illustrative embodiment, the solid state drives are mounted vertically in the storage cage 1252, but may be mounted in the storage cage 1252 in a different orientation in other embodiments. Each solid state drive 1254 may be implemented as any type of data storage device capable of storing long-term data. To do so, the solid state drives 1254 may include the volatile and non-volatile memory devices discussed above.
As shown in FIG. 13, the storage controllers 1220, the communication circuit 830, and the optical data connector 834 are illustratively mounted to the top side 650 of the chassisless circuit board substrate 602. Again, as discussed above, any suitable attachment or mounting technology may be used to mount the electrical components of the storage sled 1200 to the chassisless circuit board substrate 602 including, for example, sockets (e.g., processor sockets), holders, brackets, soldered connections, and/or other mounting or securing techniques.
As discussed above, the individual storage controllers 1220 and the communication circuit 830 are mounted to the top side 650 of the chassisless circuit board substrate 602 such that no two heat-producing electrical components shield each other. For example, the storage controllers 1220 and the communication circuit 830 are mounted in corresponding locations on the top side 650 of the chassisless circuit board substrate 602 such that no two of those electrical components are linearly in-line with other electrical components along the direction of the airflow path 608.
The memory devices 720 of the storage sled 1200 are mounted to the bottom side 750 of the chassisless circuit board substrate 602, as discussed above in regard to the sled 400.
Although mounted to the bottom side 750, the memory devices 720 are communicatively coupled to the storage controllers 1220 located on the top side 650 via the I/O subsystem 622. Again, because the chassisless circuit board substrate 602 is implemented as a double-sided circuit board, the memory devices 720 and the storage controllers 1220 may be communicatively coupled by one or more vias, connectors, or other mechanisms extending through the chassisless circuit board substrate 602. Each of the storage controllers 1220 includes a heat sink 1270 secured thereto. As discussed above, due to the improved thermal cooling characteristics of the chassisless circuit board substrate 602 of the storage sled 1200, none of the heat sinks 1270 include cooling fans attached thereto. That is, each of the heat sinks 1270 is implemented as a fan-less heat sink.
Referring now to FIG. 14, in some embodiments, the sled 400 may be implemented as a memory sled 1400. The memory sled 1400 is optimized, or otherwise configured, to provide other sleds 400 (e.g., compute sleds 800, accelerator sleds 1000, etc.) with access to a pool of memory (e.g., in two or more sets 1430, 1432 of memory devices 720) local to the memory sled 1400. For example, during operation, a compute sled 800 or an accelerator sled 1000 may remotely write to and/or read from one or more of the memory sets 1430, 1432 of the memory sled 1400 using a logical address space that maps to physical addresses in the memory sets 1430, 1432. The memory sled 1400 includes various components similar to components of the sled 400 and/or the compute sled 800, which have been identified in FIG. 14 using the same reference numbers. The description of such components provided above in regard to FIGS. 6, 7, and 8 applies to the corresponding components of the memory sled 1400 and is not repeated herein for clarity of the description of the memory sled 1400.
In the illustrative memory sled 1400, the physical resources 620 are implemented as memory controllers 1420. Although only two memory controllers 1420 are shown in FIG. 14, it should be appreciated that the memory sled 1400 may include additional memory controllers 1420 in other embodiments. The memory controllers 1420 may be implemented as any type of processor, controller, or control circuit capable of controlling the writing and reading of data into the memory sets 1430, 1432 based on requests received via the communication circuit 830. In the illustrative embodiment, each memory controller 1420 is connected to a corresponding memory set 1430, 1432 to write to and read from memory devices 720 within the corresponding memory set 1430, 1432 and to enforce any permissions (e.g., read, write, etc.) associated with the sled 400 that has sent a request to the memory sled 1400 to perform a memory access operation (e.g., read or write).
In some embodiments, the memory sled 1400 may also include a controller-to-controller interconnect 1442. Similar to the resource-to-resource interconnect 624 of the sled 400 discussed above, the controller-to-controller interconnect 1442 may be implemented as any type of communication interconnect capable of facilitating controller-to-controller communications. In the illustrative embodiment, the controller-to-controller interconnect 1442 is implemented as a high-speed point-to-point interconnect (e.g., faster than the I/O subsystem 622).
For example, the controller-to-controller interconnect 1442 may be implemented as a QuickPath Interconnect (QPI), an UltraPath Interconnect (UPI), or other high-speed point-to-point interconnect dedicated to processor-to-processor communications. As such, in some embodiments, a memory controller 1420 may access, through the controller-to-controller interconnect 1442, memory that is within the memory set 1432 associated with another memory controller 1420. In some embodiments, a scalable memory controller is made of multiple smaller memory controllers, referred to herein as "chiplets," on a memory sled (e.g., the memory sled 1400). The chiplets may be interconnected (e.g., using EMIB (Embedded Multi-Die Interconnect Bridge)). The combined chiplet memory controller may scale up to a relatively large number of memory controllers and I/O ports (e.g., up to 16 memory channels). In some embodiments, the memory controllers 1420 may implement memory interleaving (e.g., one memory address is mapped to the memory set 1430, the next memory address is mapped to the memory set 1432, and the third address is mapped to the memory set 1430, etc.). The interleaving may be managed within the memory controllers 1420, or from CPU sockets (e.g., of the compute sled 800) across network links to the memory sets 1430, 1432, and may improve the latency associated with performing memory access operations as compared to accessing contiguous memory addresses from the same memory device.
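The interleaving scheme described above amounts to selecting a memory set based on an address's position within an alternating pattern. A minimal Python sketch follows; the cache-line-sized interleave granularity is an assumption chosen for illustration and is not specified by this disclosure.

    LINE_SIZE = 64                  # hypothetical interleave granularity in bytes
    MEMORY_SETS = (1430, 1432)      # the two memory sets of the memory sled 1400

    def memory_set_for(address: int) -> int:
        # Alternate LINE_SIZE-sized chunks of the logical address space
        # between the two memory sets, as in the example above.
        return MEMORY_SETS[(address // LINE_SIZE) % len(MEMORY_SETS)]

    assert memory_set_for(0) == 1430     # first chunk -> memory set 1430
    assert memory_set_for(64) == 1432    # next chunk -> memory set 1432
    assert memory_set_for(128) == 1430   # third chunk -> memory set 1430 again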
Additionally, in some embodiments, the memory sled 1400 may be connected to one or more other sleds 400 (e.g., in the same rack 240 or an adjacent rack 240) through a waveguide, using a waveguide connector 1480. In the illustrative embodiment, the waveguides are 64 millimeter waveguides that provide 16 Rx (i.e., receive) lanes and 16 Tx (i.e., transmit) lanes. Each lane, in the illustrative embodiment, is either 16 GHz or 32 GHz. In other embodiments, the frequencies may be different. Using a waveguide may provide high-throughput access to the memory pool (e.g., the memory sets 1430, 1432) to another sled (e.g., a sled 400 in the same rack 240 as the memory sled 1400 or in an adjacent rack 240) without adding to the load on the optical data connector 834.
Referring now to FIG. 15, a system for executing one or more workloads (e.g., applications) may be implemented in accordance with the data center 100. In the illustrative embodiment, the system 1510 includes an orchestrator server 1520, which may be implemented as a managed node comprising a computing device (e.g., a compute sled 800) executing management software (e.g., a cloud operating environment, such as OpenStack), that is communicatively coupled to multiple sleds 400 including a large number of compute sleds 1530 (e.g., each similar to the compute sled 800), memory sleds 1540 (e.g., each similar to the memory sled 1400), accelerator sleds 1550 (e.g., each similar to the accelerator sled 1000), and storage sleds 1560 (e.g., each similar to the storage sled 1200). One or more of the sleds 1530, 1540, 1550, 1560 may be grouped into a managed node 1570 (e.g., by the orchestrator server 1520) to collectively execute a workload (e.g., an application 1532 executed in virtual machines or in containers). The managed node 1570 may be implemented as an assembly of physical resources 620, such as processors 820, memory resources 720, accelerator circuits 1020, or data storage 1250, from the same or different sleds 400.
Additionally, the managed node may be established, defined, or "spun up" by the orchestrator server 1520 at the time a workload is to be assigned to the managed node, or at any other time, and may exist regardless of whether any workloads are presently assigned to the managed node. In the illustrative embodiment, the orchestrator server 1520 may enforce quality of service (QoS) goals (e.g., goals related to throughput, latency, instructions per second, etc.) by selectively allocating and/or deallocating physical resources 620 from the sleds 400 and/or adding or removing one or more sleds 400 from the managed node 1570. In doing so, the orchestrator server 1520 may receive telemetry data indicative of performance conditions (e.g., throughput, latency, instructions per second, etc.) in each sled 400 of the managed node 1570 and compare the telemetry data to the quality of service goals to determine whether the quality of service goals are being satisfied. If the quality of service goals are satisfied, the orchestrator server 1520 may additionally determine whether one or more physical resources may be deallocated from the managed node 1570 while still satisfying the QoS goals, thereby freeing up those physical resources for use in another managed node (e.g., to execute a different workload). Alternatively, if the QoS goals are not presently satisfied, the orchestrator server 1520 may determine to dynamically allocate additional physical resources to assist in the execution of the workload (e.g., the application 1532) while the workload is executing.
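The allocation loop just described can be illustrated with a short Python sketch. The telemetry fields, thresholds, and return values below are hypothetical simplifications, not structures defined by this disclosure.

    from dataclasses import dataclass

    @dataclass
    class Telemetry:
        throughput_ops: float   # operations per second reported by a sled
        latency_ms: float       # observed latency reported by the same sled

    @dataclass
    class QosGoal:
        min_throughput_ops: float
        max_latency_ms: float

    def meets_qos(t: Telemetry, goal: QosGoal) -> bool:
        return (t.throughput_ops >= goal.min_throughput_ops
                and t.latency_ms <= goal.max_latency_ms)

    def rebalance(node_telemetry: list, goal: QosGoal) -> str:
        # If every sled of the managed node meets the goal, resources become
        # candidates for deallocation; otherwise allocate additional resources.
        if all(meets_qos(t, goal) for t in node_telemetry):
            return "consider-deallocating"
        return "allocate-additional-resources"

    goal = QosGoal(min_throughput_ops=10_000, max_latency_ms=5.0)
    print(rebalance([Telemetry(12_000, 3.2), Telemetry(15_500, 4.1)], goal))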
Furthermore, in some embodiments, the orchestrator server 1520 may identify trends in the resource utilization of the workload (e.g., the application 1532), such as by identifying phases of execution (e.g., time periods in which different operations, each having different resource utilization characteristics, are performed) of the workload (e.g., the application 1532) and pre-emptively identifying available resources in the data center 100 and allocating them to the managed node 1570 (e.g., within a predefined time period of the associated phase beginning). In some embodiments, the orchestrator server 1520 may model performance based on various latencies and a distribution scheme to place workloads among compute sleds and other resources (e.g., accelerator sleds, memory sleds, storage sleds) in the data center 100. For example, the orchestrator server 1520 may utilize a model that accounts for the performance of resources on the sleds 400 (e.g., FPGA performance, memory access latency, etc.) and the performance (e.g., congestion, latency, bandwidth) of the path through the network to the resource (e.g., FPGA). As such, the orchestrator server 1520 may determine which resource(s) should be used with which workloads based on the total latency associated with each potential resource available in the data center 100 (e.g., the latency associated with the performance of the resource itself in addition to the latency associated with the path through the network between the compute sled executing the workload and the sled 400 on which the resource is located).
In some embodiments, the orchestrator server 1520 may generate a map of heat generation in the data center 100 using telemetry data (e.g., temperatures, fan speeds, etc.) reported from the sleds 400 and allocate resources to managed nodes as a function of the map of heat generation and the predicted heat generation associated with different workloads, to maintain a target temperature and heat distribution in the data center 100. Additionally or alternatively, in some embodiments, the orchestrator server 1520 may organize received telemetry data into a hierarchical model that is indicative of a relationship between the managed nodes (e.g., a spatial relationship such as the physical locations of the resources of the managed nodes within the data center 100, and/or a functional relationship, such as groupings of the managed nodes by the customers the managed nodes provide services for, the types of functions typically performed by the managed nodes, or managed nodes that typically share or exchange workloads among each other, etc.). Based on differences in the physical locations and resources in the managed nodes, a given workload may exhibit different resource utilizations (e.g., cause a different internal temperature, use a different percentage of processor or memory capacity) across the resources of different managed nodes. The orchestrator server 1520 may determine the differences based on the telemetry data stored in the hierarchical model and factor the differences into a prediction of future resource utilization of a workload if the workload is reassigned from one managed node to another managed node, to accurately balance resource utilization in the data center 100.
To reduce the computational load on the orchestrator server 1520 and the data transfer load on the network, in some embodiments, the orchestrator server 1520 may send self-test information to the sleds 400 to enable each sled 400 to locally (e.g., on the sled 400) determine whether telemetry data generated by the sled 400 satisfies one or more conditions (e.g., an available capacity that satisfies a predefined threshold, a temperature that satisfies a predefined threshold, etc.). Each sled 400 may then report back a simplified result (e.g., yes or no) to the orchestrator server 1520, which the orchestrator server 1520 may utilize in determining the allocation of resources to managed nodes.
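A minimal sketch of the sled-local self-test described above follows; the condition names and thresholds are hypothetical placeholders chosen for illustration.

    # Hypothetical self-test conditions sent by the orchestrator to each sled.
    SELF_TEST = {"min_available_capacity": 0.20, "max_temperature_c": 80.0}

    def local_self_test(available_capacity: float, temperature_c: float,
                        test: dict = SELF_TEST) -> bool:
        # Runs on the sled itself; only the condensed yes/no result is
        # reported back, sparing the network the raw telemetry stream.
        return (available_capacity >= test["min_available_capacity"]
                and temperature_c <= test["max_temperature_c"])

    assert local_self_test(available_capacity=0.35, temperature_c=62.0)
    assert not local_self_test(available_capacity=0.05, temperature_c=62.0)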
Referring now to FIG. 16, a system 1600, which may be implemented in accordance with the data center 100 described above with reference to FIG. 1, includes a plurality of compute sleds 1604 (e.g., sleds 400 with physical resources 620, compute sleds 800, 1530) from the same or different racks (e.g., one or more of the racks 240) in communication with an orchestrator server 1602 and an accelerator pool 1606. A predicted bitstream 1622 may be pre-registered on the accelerator pool 1606, which includes multiple accelerator sleds 1608A, 1608B (e.g., sleds 400 with physical resources 620, accelerator sleds 1000, 1550). It should be understood that a job may be one or more tasks of an application or workload. It should be appreciated that, in other embodiments, the system 1600 may include a different number of compute sleds 1604, accelerator sleds 1608, and/or other sleds (e.g., memory sleds or storage sleds).
In use, as described in more detail below, the orchestrator server 1602 of the system 1600 may predict the next job to be requested for acceleration by a compute sled 1604 and a bitstream 1622 capable of executing the predicted next job. It should be understood that the bitstream 1622 may be executable code usable to implement a set of functions, and is also referred to as a kernel 1680 when the bitstream 1622 is registered on, and executed by, an accelerator 1670. The bitstream 1622 may be implemented as any set of instructions executable by an accelerator 1670 to perform a corresponding function. For example, the bitstream 1622 may be implemented as a set of instructions to perform a cryptographic function, an arithmetic function, a hashing function, and/or other functions executable by the accelerator 1670. Based on the predicted bitstream 1622 and the characteristics of the next job, the orchestrator server 1602 may determine an available accelerator 1670-1, 1670-2, 1670-3, or 1670-4 capable of executing the predicted next job and configure the determined accelerator 1670-1, 1670-2, 1670-3, or 1670-4 to pre-register the predicted bitstream 1622. By managing future bitstream registrations and pre-configuring the accelerators 1670, the illustrative system 1600 may reduce the overall execution time, which typically includes the time elapsed in determining an accelerator 1670 that satisfies the bitstream requirements and the time elapsed in registering the bitstream 1622 on the determined accelerator 1670.
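This disclosure does not prescribe a particular prediction algorithm, so the Python sketch below uses a simple frequency model over past acceleration requests purely as an illustration; the class, identifiers, and data structures are hypothetical.

    from collections import Counter, defaultdict

    class BitstreamPredictor:
        # Tracks, per application, which bitstreams its past jobs required and
        # predicts the most frequently used one as the likely next request.

        def __init__(self):
            self.history = defaultdict(Counter)   # application id -> bitstream counts

        def record(self, app_id: str, bitstream_id: str) -> None:
            self.history[app_id][bitstream_id] += 1

        def predict(self, app_id: str):
            counts = self.history[app_id]
            return counts.most_common(1)[0][0] if counts else None

    predictor = BitstreamPredictor()
    predictor.record("application-1642", "bitstream-A")
    predictor.record("application-1642", "bitstream-A")
    predictor.record("application-1642", "bitstream-B")
    assert predictor.predict("application-1642") == "bitstream-A"

A richer model could also weight recent registrations more heavily using the timestamps discussed below, but the frequency count suffices to show where the prediction fits in the flow.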
It should be appreciated that, in other embodiments, the accelerator pool 1606 may include a different number of accelerator sleds 1608A, 1608B, and each accelerator sled 1608A, 1608B may include a different number of accelerators 1670.

Each accelerator 1670 may be implemented as a single device, such as an integrated circuit, an embedded system, a field programmable gate array (FPGA), a system-on-a-chip (SOC), an application specific integrated circuit (ASIC), reconfigurable hardware or hardware circuitry, or other specialized hardware capable of communicating with the orchestrator server 1602 to pre-register the predicted bitstream 1622 in order to perform the next job to be requested for acceleration from the compute sled 1604. In the illustrative embodiment, each accelerator 1670-1, 1670-2, 1670-3, 1670-4 includes two kernels 1680 (e.g., each a set of circuitry and/or executable code that can be used to implement a set of functions, i.e., a bitstream 1622) registered on the corresponding accelerator 1670-1, 1670-2, 1670-3, 1670-4. It should be understood, however, that in other embodiments each accelerator 1670 may include a different number of kernels 1680 on the corresponding accelerator 1670.

In the illustrative embodiment as shown in FIG. 16, each accelerator sled 1608 further includes a prefetch logic unit 1660 and a bitstream cache 1662. The prefetch logic unit 1660 may be implemented as a circuit, a component, or any type of device capable of prefetching, from the bitstream library 1620 of the orchestrator server 1602, the bitstream 1622 that is predicted to execute the predicted next job to be received from the compute sled 1604. For example, if the orchestrator server 1602 determines that the accelerator 1670-1 of the accelerator sled 1608A is to be configured to register the predicted bitstream A 1622A, the prefetch logic 1660A of the accelerator sled 1608A fetches the predicted bitstream 1622 from the bitstream library 1620 and registers the predicted bitstream 1622 on the accelerator 1670-1. In some embodiments, the prefetch logic unit 1660 may be implemented as a coprocessor, an embedded circuit, an ASIC, an FPGA, and/or other specialized circuitry.

The bitstream cache 1662 may be implemented as any device or circuit capable of determining the bitstream registration data for each accelerator 1670 on the accelerator sled 1608 and communicating the bitstream registration data to the orchestrator server 1602. For example, the bitstream registration data includes the one or more bitstreams 1622 currently registered on each accelerator 1670 of the corresponding accelerator sled 1608. In some embodiments, the bitstream cache 1662 may update a timestamp upon the registration and execution of a bitstream on one of the accelerators 1670 of the corresponding accelerator sled 1608 and transmit the updated timestamp data to the orchestrator server 1602 to update the bitstream library 1620. It should be appreciated that the timestamp of each bitstream 1622 may be used to determine an execution mode of each bitstream 1622 for each available application 1642, as discussed further below. Additionally, in some embodiments, the bitstream cache 1662 may include one or more security signatures (e.g., unique codes) that may be used to authenticate incoming bitstreams (e.g., to prevent registration or execution of rogue bitstreams).
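The prefetch-and-register flow performed by the prefetch logic unit 1660 can be sketched as follows. This is an illustrative sketch only; all class and function names here are hypothetical stand-ins, not an actual sled API, and program_accelerator is a placeholder for whatever device-specific programming mechanism an implementation uses:

```python
# Illustrative prefetch-and-register flow on an accelerator sled
# (hypothetical names throughout).
import time

class Orchestrator:
    def __init__(self, library):
        self.library = library    # bitstream id -> code bytes
        self.registrations = []   # (accelerator id, bitstream id, timestamp)

    def fetch_from_library(self, bitstream_id):
        return self.library[bitstream_id]

    def report_registration(self, accelerator_id, bitstream_id, timestamp):
        self.registrations.append((accelerator_id, bitstream_id, timestamp))

class BitstreamCache:
    def __init__(self):
        self.local = {}

    def store(self, bitstream_id, blob):
        self.local[bitstream_id] = blob

def program_accelerator(accelerator_id, blob):
    # Placeholder for the device-specific step that loads the bitstream
    # onto the accelerator (e.g., reconfiguring an FPGA slot).
    pass

def prefetch_and_register(bitstream_id, accelerator_id, orchestrator, cache):
    blob = orchestrator.fetch_from_library(bitstream_id)  # pull from the library
    cache.store(bitstream_id, blob)                       # keep a local copy
    program_accelerator(accelerator_id, blob)             # register on the accelerator
    # Report the registration and an updated timestamp so the library stays in sync.
    orchestrator.report_registration(accelerator_id, bitstream_id, time.time())

orch = Orchestrator({"crypto-v2": b"..."})
prefetch_and_register("crypto-v2", "1670-1", orch, BitstreamCache())
print(orch.registrations[0][:2])  # ('1670-1', 'crypto-v2')
```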
Referring now to FIG. 17, the orchestrator server 1602 may be implemented as any type of computing device capable of performing the functions described herein, including determining the one or more bitstreams registered on each of the plurality of accelerators, predicting the next job to be requested for acceleration from an application of a compute sled of the plurality of compute sleds, predicting a bitstream from the bitstream library to execute the predicted next job to be requested for acceleration, determining whether the predicted bitstream is already registered on an accelerator, determining an accelerator that satisfies the characteristics of the predicted bitstream in response to a determination that the predicted bitstream is not registered on an accelerator, and registering the predicted bitstream on the determined accelerator in response to a determination of the accelerator capable of executing the predicted next job.

As shown in FIG. 17, the illustrative orchestrator server 1602 includes a compute engine 1702, an input/output (I/O) subsystem 1708, communication circuitry 1710, and one or more data storage devices 1714. Of course, in other embodiments, the orchestrator server 1602 may include other or additional components, such as those commonly found in a computer (e.g., a display, peripheral devices, etc.). Furthermore, in some embodiments, one or more of the illustrative components may be incorporated in, or otherwise form a portion of, another component.

The compute engine 1702 may be implemented as any type of device or collection of devices capable of performing the various compute functions described below. In some embodiments, the compute engine 1702 may be implemented as a single device, such as an integrated circuit, an embedded system, an FPGA, a system-on-a-chip (SOC), or other integrated system or device. Additionally, in some embodiments, the compute engine 1702 includes or is implemented as a processor 1704 and a memory 1706. The processor 1704 may be implemented as any type of processor capable of performing the functions described herein. For example, the processor 1704 may be implemented as a single-core or multi-core processor, a microcontroller, or other processor or processing/controlling circuit. In some embodiments, the processor 1704 may be implemented as, include, or be coupled to an FPGA, an ASIC, reconfigurable hardware or hardware circuitry, or other specialized hardware to facilitate performance of the functions described herein.

The memory 1706 may be implemented as any type of volatile (e.g., dynamic random access memory (DRAM), etc.) or non-volatile memory or data storage device capable of performing the functions described herein. Volatile memory may be a storage medium that requires power to maintain the state of data stored by the medium. Non-limiting examples of volatile memory may include various types of random access memory (RAM), such as DRAM or static random access memory (SRAM). One particular type of DRAM that may be used in a memory module is synchronous dynamic random access memory (SDRAM). In particular embodiments, the DRAM of a memory component may comply with a standard promulgated by JEDEC, such as JESD79F for DDR SDRAM, JESD79-2F for DDR2 SDRAM, JESD79-3F for DDR3 SDRAM, JESD79-4A for DDR4 SDRAM, JESD209 for Low Power DDR (LPDDR), JESD209-2 for LPDDR2, JESD209-3 for LPDDR3, and JESD209-4 for LPDDR4 (these standards are available at www.jedec.org).
Such standards (and similar standards) may be referred to as DDR-based standards, and communication interfaces of memory devices that implement such standards may be referred to as DDR-based interfaces.

In one embodiment, the memory device is a block addressable memory device, such as those based on NAND or NOR technologies. A memory device may also include future generation non-volatile devices, such as a three-dimensional crosspoint memory device (e.g., Intel 3D XPoint™ memory), or other byte addressable write-in-place non-volatile memory devices. In one embodiment, the memory device may be or may include memory devices that use chalcogenide glass, multi-threshold level NAND flash memory, NOR flash memory, single- or multi-level phase change memory (PCM), resistive memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM), anti-ferroelectric memory, magnetoresistive random access memory (MRAM) memory that incorporates memristor technology, resistive memory including a metal oxide base, an oxygen vacancy base, and conductive bridge random access memory (CB-RAM), or spin transfer torque (STT)-MRAM, a spintronic magnetic junction memory based device, a magnetic tunneling junction (MTJ) based device, a DW (domain wall) and SOT (spin orbit transfer) based device, a thyristor based memory device, or a combination of any of the above, or other memory. The memory device may refer to the die itself and/or to a packaged memory product.

In some embodiments, a 3D crosspoint architecture (e.g., Intel 3D XPoint™ memory) may comprise a transistor-less stackable crosspoint architecture in which memory cells sit at the intersections of word lines and bit lines and are individually addressable, and in which bit storage is based on a change in bulk resistance. In some embodiments, all or a portion of the memory 1706 may be integrated into the processor 1704. In operation, the memory 1706 may store various software and data used during operation.

The compute engine 1702 is communicatively coupled to other components of the orchestrator server 1602 via the I/O subsystem 1708, which may be implemented as circuitry and/or components to facilitate input/output operations with the compute engine 1702 (e.g., with the processor 1704 and/or the memory 1706) and other components of the orchestrator server 1602. For example, the I/O subsystem 1708 may be implemented as, or otherwise include, memory controller hubs, input/output control hubs, integrated sensor hubs, firmware devices, communication links (e.g., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.), and/or other components and subsystems to facilitate the input/output operations. In some embodiments, the I/O subsystem 1708 may form a portion of a system-on-a-chip (SoC) and be incorporated, along with one or more of the processor 1704, the memory 1706, and other components of the orchestrator server 1602, into the compute engine 1702.

The communication circuitry 1710 may be implemented as any communication circuit, device, or collection thereof, capable of enabling communications over a network between the orchestrator server 1602 and another computing device (e.g., the compute sled 1604, the accelerator sled 1608, etc.). The communication circuitry 1710 may be configured to use any one or more communication technologies (e.g., wired or wireless communications) and associated protocols (e.g., Ethernet, Bluetooth®, Wi-Fi®, WiMAX, etc.) to effect such communication.
The illustrative communication circuitry 1710 includes a network interface controller (NIC) 1712, which may also be referred to as a host fabric interface (HFI). The NIC 1712 may be implemented as one or more add-in boards, daughter cards, network interface cards, controller chips, chipsets, or other devices that may be used by the orchestrator server 1602 to connect with another computing device (e.g., the compute sled 1604, the accelerator sled 1608, etc.). In some embodiments, the NIC 1712 may be implemented as part of a system-on-a-chip (SoC) that includes one or more processors, or included on a multichip package that also contains one or more processors. In some embodiments, the NIC 1712 may include a local processor (not shown) and/or a local memory (not shown) that are both local to the NIC 1712. In such embodiments, the local processor of the NIC 1712 may be capable of performing one or more of the functions of the compute engine 1702 described herein. Additionally or alternatively, in such embodiments, the local memory of the NIC 1712 may be integrated into one or more components of the orchestrator server 1602 at the board level, socket level, chip level, and/or other levels.

The one or more illustrative data storage devices 1714 may be implemented as any type of devices configured for short-term or long-term storage of data, such as, for example, memory devices and circuits, memory cards, hard disk drives (HDDs), solid-state drives (SSDs), or other data storage devices. Each data storage device 1714 may include a system partition that stores data and firmware code for the data storage device 1714. Each data storage device 1714 may also include an operating system partition that stores data files and executables for an operating system. Additionally or alternatively, the orchestrator server 1602 may include one or more peripheral devices (not shown). Such peripheral devices may include any type of peripheral device commonly found in a computing device, such as a display, speakers, a mouse, a keyboard, and/or other input/output devices, interface devices, and/or other peripheral devices.

The compute sleds 1604 and the accelerator sleds 1608 may have components similar to those described in FIG. 17. The description of those components of the orchestrator server 1602 is equally applicable to the description of the components of those devices and is not repeated herein for clarity of the description. Further, it should be appreciated that any of the compute sleds 1604 and the accelerator sleds 1608 may include other components (e.g., the accelerators 1670), sub-components, and devices commonly found in a computing device, which are not discussed above in reference to the orchestrator server 1602 and are not discussed herein for clarity of the description.

As described above, the orchestrator server 1602 and the sleds 1604, 1608 are illustratively in communication via a network (not shown), which may be implemented as any type of wired or wireless communication network, including global networks (e.g., the Internet), local area networks (LANs) or wide area networks (WANs), cellular networks (e.g., Global System for Mobile Communications (GSM), 3G, Long Term Evolution (LTE), Worldwide Interoperability for Microwave Access (WiMAX), etc.), digital subscriber line (DSL) networks, cable networks (e.g., coaxial networks, fiber networks, etc.), or any combination thereof.

Referring now to FIG. 18, in the illustrative embodiment, the orchestrator server 1602 may establish an environment 1800 during operation.
In the illustrative embodiment, the environment 1800 includes the bitstream library 1620, which may be implemented as any data indicative of the bitstreams 1622 previously and currently registered on the accelerators 1670 of the accelerator pool 1606. Additionally, the illustrative environment 1800 includes a network communicator 1802, a bitstream updater 1804, a work predictor 1806, a bitstream predictor 1808, and an accelerator manager 1810. As shown in FIG. 18, the accelerator manager 1810 further includes an accelerator characteristic determiner 1812, a registered bitstream tracker 1814, and a bitstream prefetcher 1816. Each of the components of the environment 1800 may be implemented as hardware, firmware, software, or a combination thereof. As such, in some embodiments, one or more of the components of the environment 1800 may be implemented as a collection of circuitry or electrical devices (e.g., a network communicator circuit 1802, a bitstream updater circuit 1804, a work predictor circuit 1806, a bitstream predictor circuit 1808, an accelerator manager circuit 1810, an accelerator characteristic determiner circuit 1812, a registered bitstream tracker circuit 1814, a bitstream prefetcher circuit 1816, etc.).

In the illustrative environment 1800, the network communicator 1802 is configured to facilitate inbound and outbound network communications (e.g., network traffic, network packets, network flows, etc.) to and from the orchestrator server 1602, respectively. To do so, the network communicator 1802 is configured to receive and process data from a remote system or computing device (e.g., the compute sled 1604, the accelerator sled 1608 of the accelerator pool 1606, etc.) and to prepare and send data to a remote system or computing device (e.g., the compute sled 1604, the accelerator sled 1608 of the accelerator pool 1606, etc.). Accordingly, in some embodiments, at least a portion of the functionality of the network communicator 1802 may be performed by the communication circuitry of the orchestrator server 1602.

The bitstream updater 1804, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to update the bitstream library 1620 to keep track of which bitstream 1622 is currently registered on which accelerator 1670. As discussed above, the bitstream library 1620 stores the bitstreams 1622 currently registered on the accelerators 1670 of the accelerator pool 1606. To do so, the bitstream updater 1804 may receive, from the bitstream cache 1662 of each accelerator sled 1608, the bitstream registration data indicating which bitstreams 1622 are currently registered on each accelerator 1670 of the corresponding accelerator sled 1608, and update the bitstream library 1620 accordingly. In some embodiments, after a bitstream 1622 is registered on an accelerator 1670, the bitstream updater 1804 may update a timestamp of the bitstream 1622 in the bitstream library 1620, the timestamp indicating the time at which the bitstream was registered. Alternatively, in some embodiments, the bitstream cache 1662 of each accelerator sled 1608 may update the timestamp upon bitstream registration and execution on one of the accelerators 1670 of the corresponding accelerator sled 1608 and transmit the updated timestamp data to the bitstream updater 1804 to update the bitstream library 1620. It should be understood that the timestamps of each bitstream 1622 may be used to determine the execution mode of each bitstream 1622 for each available application 1642.
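A compact way to picture the timestamp bookkeeping performed by the bitstream updater 1804 is an event log keyed by (application, bitstream). The sketch below uses hypothetical names (BitstreamTimestamps, record_event, execution_counts) that are not part of this document:

```python
# Sketch of timestamp bookkeeping for bitstream registrations/executions
# (hypothetical names).
import time
from collections import defaultdict

class BitstreamTimestamps:
    def __init__(self):
        # (application id, bitstream id) -> list of event timestamps
        self.events = defaultdict(list)

    def record_event(self, app_id, bitstream_id, ts=None):
        """Record a registration or execution event for a bitstream."""
        self.events[(app_id, bitstream_id)].append(ts if ts is not None else time.time())

    def execution_counts(self, app_id):
        """How many times each bitstream was used by the given application."""
        return {b: len(ts) for (a, b), ts in self.events.items() if a == app_id}

log = BitstreamTimestamps()
log.record_event("app-1642", "crypto-v2")
log.record_event("app-1642", "crypto-v2")
log.record_event("app-1642", "transcode-v1")
print(log.execution_counts("app-1642"))  # {'crypto-v2': 2, 'transcode-v1': 1}
```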
In some embodiments, the bitstream cache 1662 may temporarily store copies of one or more bitstreams that have been received from the bitstream library 1620.

The work predictor 1806, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to predict the next job to be requested for acceleration from the available applications 1642 of the compute sleds 1604. To do so, the work predictor 1806 may predict the next job based on the available applications 1642 currently executing on the compute sleds 1604. For example, the type and size of the available applications 1642 may be indicative of the type of work that is likely to be requested by the available applications 1642 for acceleration. Additionally or alternatively, the work predictor 1806 may predict the next job to be requested based on the execution mode of the bitstreams 1622 for each available application 1642. To do so, in some embodiments, the work predictor 1806 may determine a past execution history of each bitstream 1622 for each application 1642. For example, the work predictor 1806 may analyze the timestamps of each bitstream 1622 to determine the number of times the corresponding bitstream 1622 was used to perform a job requested by each available application 1642. In other embodiments, the work predictor 1806 may utilize machine learning to predict the execution mode of each bitstream 1622 for each application 1642.

The bitstream predictor 1808, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to predict a bitstream 1622 from the bitstream library 1620 for executing the predicted next job to be requested for acceleration. To do so, the bitstream predictor 1808 may predict the bitstream 1622 based on the available accelerators 1670 of the system 1600, since each available accelerator 1670 may have different capabilities that make it differently suited to performing a given type of work. For example, an accelerator 1670 may be capable of performing work requiring encryption/decryption, compression/decompression, transcoding, matrix multiplication, and/or convolutional neural network operations. Additionally or alternatively, in some embodiments, the bitstream predictor 1808 may predict the bitstream 1622 for executing the predicted next job based on the type of the predicted next job and the types of workloads each accelerator 1670 is capable of accelerating. In other embodiments, the bitstream predictor 1808 may predict the bitstream 1622 based on the execution mode of the bitstreams for each available application 1642. For example, the bitstream predictor 1808 may analyze the timestamps of each bitstream 1622 to determine the number of times the corresponding bitstream 1622 was used to perform a job requested by each available application 1642, to determine the execution mode.
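The timestamp-based prediction just described for the work predictor 1806 and the bitstream predictor 1808 can be approximated by a simple frequency heuristic: pick the bitstream used most often by the currently running applications. A minimal, hedged sketch (the document also contemplates machine learning for this step; the function and variable names are hypothetical):

```python
# Frequency-based sketch of next-bitstream prediction (a simple heuristic).
from collections import Counter

def predict_next_bitstream(counts_by_app, running_apps):
    """counts_by_app: app id -> {bitstream id: times used}. Returns the
    bitstream used most often across the currently running applications."""
    totals = Counter()
    for app_id in running_apps:
        totals.update(counts_by_app.get(app_id, {}))
    return totals.most_common(1)[0][0] if totals else None

history = {"app-1642": {"crypto-v2": 7, "transcode-v1": 2}}
print(predict_next_bitstream(history, ["app-1642"]))  # crypto-v2
```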
The accelerator manager 1810, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to manage the bitstream registrations on each accelerator 1670 of the accelerator pool 1606. In the illustrative embodiment, the accelerator manager 1810 determines, based on the bitstream library 1620, whether the predicted bitstream 1622 is already registered on one of the accelerators 1670, and, in response to a determination that the predicted bitstream 1622 is not registered on one of the accelerators 1670, the accelerator manager 1810 determines an accelerator 1670 capable of executing the predicted bitstream 1622 and pre-registers the predicted bitstream 1622 on the determined accelerator 1670. To do so, the accelerator manager 1810 further includes the accelerator characteristic determiner 1812, the registered bitstream tracker 1814, and the bitstream prefetcher 1816.

The accelerator characteristic determiner 1812, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to determine the characteristics of each accelerator 1670. For example, in some embodiments, the accelerator characteristic determiner 1812 may determine the availability of each accelerator 1670. To do so, the accelerator characteristic determiner 1812 may determine a current load on each accelerator 1670. In some embodiments in which the accelerators 1670 are implemented as field programmable gate arrays (FPGAs), the accelerator characteristic determiner 1812 may further determine a number of free slots in each FPGA and/or a number of free logic gates in each FPGA. Additionally or alternatively, the accelerator characteristic determiner 1812 may determine the types of workloads each accelerator 1670 is capable of accelerating. For example, the accelerator characteristic determiner 1812 may determine whether each accelerator is capable of performing work requiring encryption/decryption, compression/decompression, transcoding, matrix multiplication, and/or convolutional neural network operations. Additionally, in some embodiments, the accelerator characteristic determiner 1812 may determine a physical distance from the compute sled 1604 that is requesting the work to be accelerated to each accelerator 1670. It should be appreciated that the physical distance between the requesting compute sled 1604 and the accelerator 1670 may affect communication efficiency.

The registered bitstream tracker 1814, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to track the bitstreams 1622 currently registered on the accelerators 1670. The registered bitstream tracker 1814 may determine the bitstreams 1622 registered on the accelerators 1670 of each accelerator sled 1608 and store the bitstream registration data in the bitstream cache 1662. As discussed above, the bitstream cache 1662 may communicate the bitstream registration data to the orchestrator server 1602, so that the registered bitstream tracker 1814 may monitor which bitstream 1622 is registered on which accelerator 1670 to execute jobs from the applications 1642, in order to analyze the execution mode of the bitstreams 1622 for the corresponding applications 1642.
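One way to picture how the characteristics gathered by the accelerator characteristic determiner 1812 (load, free FPGA slots, supported workload types, physical distance) might be combined is a small filtering-and-scoring pass. This is a hedged sketch with hypothetical names (AcceleratorInfo, pick_accelerator); the document does not prescribe any particular scoring formula, and the weights below are arbitrary:

```python
# Hypothetical filtering/scoring over accelerator characteristics.
from dataclasses import dataclass

@dataclass
class AcceleratorInfo:
    accel_id: str
    load: float         # current load, 0.0 (idle) .. 1.0 (saturated)
    free_slots: int     # free FPGA slots
    capabilities: set   # e.g. {"crypto", "transcode", "matmul"}
    distance_m: float   # physical distance from the requesting compute sled

def pick_accelerator(accels, required_capability, required_slots=1):
    """Filter by capability and capacity, then prefer low load and short distance."""
    candidates = [a for a in accels
                  if required_capability in a.capabilities
                  and a.free_slots >= required_slots]
    if not candidates:
        return None
    # Lower is better: weighted mix of load and distance (weights are arbitrary here).
    return min(candidates, key=lambda a: 0.7 * a.load + 0.3 * (a.distance_m / 100.0))

pool = [AcceleratorInfo("1670-1", 0.9, 2, {"crypto"}, 5.0),
        AcceleratorInfo("1670-3", 0.2, 1, {"crypto", "matmul"}, 40.0)]
print(pick_accelerator(pool, "crypto").accel_id)  # 1670-3: light load outweighs distance
```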
The bitstream prefetcher 1816, which may be implemented as hardware, firmware, software, virtualized hardware, emulated architecture, and/or a combination thereof as discussed above, is configured to prefetch the predicted bitstream 1622 and register it on the accelerator 1670 that is likely to be capable of executing the predicted next job with the predicted bitstream 1622. By prefetching and registering the predicted bitstream 1622 on the accelerator 1670, the accelerator 1670 becomes preconfigured to perform the next job before the corresponding job request is received from the application 1642 of the compute sled 1604. As discussed above, by managing future bitstream registrations and preconfiguring the accelerators 1670, the illustrative system 1600 may reduce the overall execution time, which includes the time spent determining an accelerator 1670 that satisfies the bitstream requirements and the time spent registering the bitstream 1622 on the accelerator 1670.

Referring now to FIGS. 19-21, in use, the orchestrator server 1602 may execute a method 1900 for predicting a bitstream 1622 to be registered on an accelerator 1670 based on the predicted next job to be accelerated, and for preconfiguring the accelerator by registering the predicted bitstream 1622 before the next job is received. The method 1900 begins with block 1902, in which the orchestrator server 1602 determines the one or more bitstreams registered on each accelerator 1670. To do so, in some embodiments, the orchestrator server 1602 may monitor the bitstream submissions and executions on each accelerator 1670 in block 1904 and may update the timestamp of each bitstream execution in block 1906, after which the timestamps may be analyzed to determine the execution mode of each bitstream for each application. In other embodiments, the orchestrator server 1602 may receive, in block 1908, the bitstream registration data from the bitstream cache 1662 of each accelerator sled 1608. As discussed above, the bitstream registration data indicates the bitstreams 1622 currently registered on each of the accelerators 1670 of the corresponding accelerator sled 1608. Subsequently, in block 1910, the orchestrator server 1602 may update the bitstream library 1620 to keep track of which bitstream 1622 is currently registered on which accelerator 1670. As discussed above, the bitstream library 1620 includes the bitstreams 1622 currently registered on the accelerators 1670.

In block 1912, the orchestrator server 1602 predicts the next job to be requested for acceleration. An acceleration request is received from one of the applications 1642 currently executing on one of the compute sleds 1604. In some embodiments, the orchestrator server 1602 may predict the next job to be accelerated based on the available applications 1642 executing on the compute sleds 1604, as indicated in block 1914. For example, based on the types of the available applications 1642, the orchestrator server 1602 may predict the type of data or work that is likely to be requested for acceleration. Additionally or alternatively, the orchestrator server 1602 may predict the next job to be accelerated based on the execution mode of the bitstreams 1622 for each available application 1642, as indicated in block 1916. To do so, the orchestrator server 1602 may determine the past execution history of each bitstream 1622 for each application 1642, as indicated in block 1918, and/or may utilize machine learning to predict the execution mode, as indicated in block 1920. After determining the predicted next job to be accelerated, the method 1900 advances to block 1922 shown in FIG. 20.
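Blocks 1902-1920 of the method 1900 amount to a bookkeeping pass followed by a prediction pass. The skeleton below is a non-authoritative paraphrase of that flow; the helper method names are hypothetical and merely stand in for the sub-blocks just described:

```python
# Non-authoritative skeleton of blocks 1902-1920 of method 1900
# (hypothetical helper names standing in for the sub-blocks).
def method_1900_prediction_phase(orchestrator):
    # Block 1902: determine bitstreams registered on each accelerator,
    # either by monitoring submissions/executions (blocks 1904/1906) or by
    # receiving registration data from each sled's bitstream cache (block 1908).
    registrations = orchestrator.collect_bitstream_registrations()

    # Block 1910: update the bitstream library with the registration data.
    orchestrator.bitstream_library.update(registrations)

    # Block 1912: predict the next job, from the available applications (block 1914)
    # and/or from per-application bitstream execution modes (blocks 1916-1920).
    next_job = orchestrator.predict_next_job(
        apps=orchestrator.available_applications(),
        history=orchestrator.execution_history(),
    )
    return next_job  # the method then advances to block 1922 (FIG. 20)
```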
Referring now to FIG. 20, in block 1922 the orchestrator server 1602 determines the characteristics of each accelerator 1670. To do so, the orchestrator server 1602 may determine the availability of each accelerator 1670 of the accelerator pool 1606, as indicated in block 1924. For example, in block 1926, the orchestrator server 1602 may determine the current load on each accelerator 1670 to determine the availability of each accelerator 1670. As discussed above, in some embodiments, the accelerators 1670 may be implemented as field programmable gate arrays (FPGAs). In such embodiments, the orchestrator server 1602 may determine the number of free slots in each FPGA in block 1928 and/or the number of free logic gates in each FPGA in block 1930 to determine the availability of each FPGA.

Additionally or alternatively, in some embodiments, the orchestrator server 1602 may determine the types of workloads each accelerator 1670 is capable of accelerating, as indicated in block 1932. For example, in block 1934, the orchestrator server 1602 may determine the cryptography, compression, transcoding, matrix multiplication, and/or convolutional neural network computation capabilities of each accelerator 1670. In other embodiments, the orchestrator server 1602 may further determine the physical distance from the requesting compute sled 1604 to each accelerator 1670. As discussed above, the physical distance between the requesting compute sled 1604 and the accelerator 1670 may affect communication efficiency.

In block 1938, the orchestrator server 1602 predicts, from the bitstream library 1620, the bitstream 1622 for executing the predicted next job. To do so, in block 1940, the orchestrator server 1602 may predict the bitstream 1622 based on the available accelerator(s) 1670. Additionally or alternatively, the orchestrator server 1602 may predict the bitstream 1622 based on the type of the predicted next job to be accelerated and the types of workloads each accelerator 1670 is capable of accelerating, as indicated in block 1942. As discussed above, each accelerator 1670 may have different features that allow the accelerator 1670 to execute certain types of jobs. As such, based on the type of the predicted next job, the orchestrator server 1602 may determine the characteristics required of an accelerator 1670 to perform the predicted type of the next job. Additionally or alternatively, in block 1944, the orchestrator server 1602 may predict the bitstream 1622 based on the execution mode of the bitstreams 1622 for each available application 1642 to determine which bitstream 1622 is likely to be required to execute the predicted next job. For example, as discussed above, the orchestrator server 1602 may analyze execution modes based on the past execution history of each bitstream 1622 for each application 1642 and/or execution modes predicted using machine learning. After determining the predicted bitstream for executing the predicted next job, the method 1900 advances to block 1946 shown in FIG. 21.
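The selection in blocks 1938-1944 can be read as a lookup keyed on the predicted job type, falling back to the per-application execution mode. A minimal, hypothetical sketch (the document does not fix a concrete data model for job types):

```python
# Hypothetical sketch of blocks 1938-1944: pick a bitstream for the
# predicted next job by job type, falling back to usage frequency.
def predict_bitstream(library_index, usage_counts, job_type):
    """library_index: job type -> bitstream id; usage_counts: bitstream id -> count."""
    if job_type in library_index:   # block 1942: match job type to capability
        return library_index[job_type]
    if usage_counts:                # block 1944: fall back to the execution mode
        return max(usage_counts, key=usage_counts.get)
    return None

index = {"crypto": "crypto-v2", "transcode": "transcode-v1"}
counts = {"crypto-v2": 7, "matmul-v3": 2}
print(predict_bitstream(index, counts, "crypto"))   # crypto-v2
print(predict_bitstream(index, counts, "unknown"))  # crypto-v2 (most-used fallback)
```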
Referring now to FIG. 21, in block 1946 the orchestrator server 1602 may determine whether the predicted bitstream 1622 is already registered on one of the available accelerators 1670 of the accelerator pool 1606. If the orchestrator server 1602 determines in block 1948 that the predicted bitstream 1622 is already registered on one of the available accelerators 1670, the orchestrator server 1602 determines that registration of the predicted bitstream 1622 is not required, and the method 1900 skips to the end. However, if the orchestrator server 1602 determines in block 1948 that the predicted bitstream 1622 is not registered on one of the available accelerators 1670 and needs to be registered, the method 1900 advances to block 1950.

In block 1950, the orchestrator server 1602 determines an accelerator 1670 that satisfies the characteristics of the predicted bitstream. To do so, in block 1952, the orchestrator server 1602 may determine the accelerator 1670 based on the specific acceleration capability required by the predicted bitstream 1622. Additionally or alternatively, the orchestrator server 1602 may determine the accelerator 1670 based on a certain capacity on the accelerator 1670 required by the predicted bitstream 1622, as indicated in block 1954. After determining the accelerator 1670, the method 1900 advances to block 1956, in which the orchestrator server 1602 preconfigures the determined accelerator 1670 by registering the predicted bitstream 1622 on the determined accelerator 1670. In other words, the predicted bitstream 1622 is prefetched from the bitstream library 1620 and registered on the determined accelerator 1670 before the predicted next job is received. By dynamically registering the predicted bitstream 1622 before the predicted next job is received, the system 1600 can reduce the likelihood of incurring the delay of fetching and registering the bitstream after the next job to be accelerated has been received.

This application provides the following technical solutions:

1. A computing device for preconfiguring an accelerator of a plurality of accelerators of a system, the computing device comprising:

a communication circuit; and

a compute engine to (i) determine one or more bitstreams registered on each of the plurality of accelerators, (ii) predict the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds, (iii) predict a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration, (iv) determine whether the predicted bitstream is already registered on one of the accelerators, (v) in response to a determination that the predicted bitstream is not registered on one of the accelerators, select, from the plurality of accelerators, an accelerator that satisfies the characteristics of the predicted bitstream, and (vi) in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream, register the predicted bitstream on the determined accelerator.

2. The computing device of technical solution 1, wherein determining the one or more bitstreams registered on each accelerator comprises monitoring bitstream submission and execution on each accelerator.

3. The computing device of technical solution 1, wherein determining the one or more bitstreams registered on each accelerator comprises receiving bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.

4. The computing device of technical solution 1, wherein determining the one or more bitstreams registered on each accelerator comprises updating a bitstream library to keep track of the bitstreams currently registered on each accelerator.

5. The computing device of technical solution 1, wherein predicting the next job to be requested for acceleration comprises predicting the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.
6. The computing device of technical solution 1, wherein predicting the next job to be requested for acceleration comprises predicting the next job to be requested for acceleration based on an execution mode of a bitstream for each available application currently executing on the plurality of compute sleds.

7. The computing device of technical solution 1, wherein predicting the bitstream from the bitstream library comprises predicting the bitstream based on available accelerators of the system.

8. The computing device of technical solution 1, wherein predicting the bitstream from the bitstream library comprises predicting the bitstream based on a type of the predicted next job and a type of workload each accelerator is capable of accelerating.

9. The computing device of technical solution 1, wherein predicting the bitstream from the bitstream library comprises predicting an execution mode of the bitstream for each available application.

10. The computing device of technical solution 1, wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream comprises determining the accelerator based on a specific accelerator capability required by the predicted bitstream.

11. The computing device of technical solution 1, wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream comprises determining the accelerator based on a capacity on the accelerator required by the predicted bitstream.

12. The computing device of technical solution 1, wherein the compute engine is further to determine the characteristics of each accelerator of the plurality of accelerators of the system.

13. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, when executed by a computing device, cause the computing device to:

determine one or more bitstreams registered on each of a plurality of accelerators;

predict the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds;

predict a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration;

determine whether the predicted bitstream is already registered on one of the accelerators;

in response to a determination that the predicted bitstream is not registered on one of the accelerators, select, from the plurality of accelerators, an accelerator that satisfies the characteristics of the predicted bitstream; and

in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream, register the predicted bitstream on the determined accelerator.

14. The one or more machine-readable storage media of technical solution 13, wherein determining the one or more bitstreams registered on each accelerator comprises monitoring bitstream submission and execution on each accelerator.

15. The one or more machine-readable storage media of technical solution 13, wherein determining the one or more bitstreams registered on each accelerator comprises receiving bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.

16. The one or more machine-readable storage media of technical solution 13, wherein determining the one or more bitstreams registered on each accelerator comprises updating a bitstream library to keep track of the bitstreams currently registered on each accelerator.
17. The one or more machine-readable storage media of technical solution 13, wherein predicting the next job to be requested for acceleration comprises predicting the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.

18. The one or more machine-readable storage media of technical solution 13, wherein predicting the next job to be requested for acceleration comprises predicting the next job to be requested for acceleration based on an execution mode of a bitstream for each available application currently executing on the plurality of compute sleds.

19. The one or more machine-readable storage media of technical solution 13, wherein predicting the bitstream from the bitstream library comprises predicting the bitstream based on available accelerators of the system.

20. The one or more machine-readable storage media of technical solution 13, wherein predicting the bitstream from the bitstream library comprises predicting the bitstream based on a type of the predicted next job and a type of workload each accelerator is capable of accelerating.

21. The one or more machine-readable storage media of technical solution 13, wherein predicting the bitstream from the bitstream library comprises predicting an execution mode of the bitstream for each available application.

22. The one or more machine-readable storage media of technical solution 13, wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream comprises determining the accelerator based on a specific accelerator capability required by the predicted bitstream.

23. The one or more machine-readable storage media of technical solution 13, wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream comprises determining the accelerator based on a capacity on the accelerator required by the predicted bitstream.

24. The one or more machine-readable storage media of technical solution 13, wherein the plurality of instructions, when executed, further cause the computing device to determine a characteristic of each accelerator of the plurality of accelerators of the system.

25. A computing device for preconfiguring an accelerator of a plurality of accelerators of a system, the computing device comprising:

circuitry for determining one or more bitstreams registered on each of the plurality of accelerators;

means for predicting the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds;

means for predicting a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration;

means for determining whether the predicted bitstream is already registered on one of the accelerators;

means for selecting, from the plurality of accelerators, an accelerator that satisfies the characteristics of the predicted bitstream in response to a determination that the predicted bitstream is not registered on one of the accelerators; and

means for registering the predicted bitstream on the determined accelerator in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream.
26. A method for provisioning an accelerator of a plurality of accelerators of a system, the method comprising:

determining, by an orchestrator server, one or more bitstreams registered on each of the plurality of accelerators;

predicting, by the orchestrator server, the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds;

predicting, by the orchestrator server, a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration;

determining, by the orchestrator server, whether the predicted bitstream is already registered on one of the accelerators;

selecting, by the orchestrator server and in response to a determination that the predicted bitstream is not registered on one of the accelerators, an accelerator from the plurality of accelerators that satisfies the characteristics of the predicted bitstream; and

registering, by the orchestrator server and in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream, the predicted bitstream on the determined accelerator.

27. The method of technical solution 26, wherein determining the one or more bitstreams registered on each accelerator comprises receiving, by the orchestrator server, bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.

28. The method of technical solution 26, wherein predicting the next job to be requested for acceleration comprises predicting, by the orchestrator server, the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.

Examples

Illustrative examples of the techniques disclosed herein are provided below.
Embodiments of the techniques may include any one or more, and any combination of, the examples described below.

Example 1 includes a computing device for preconfiguring an accelerator of a plurality of accelerators of a system, the computing device comprising a communication circuit; and a compute engine to (i) determine one or more bitstreams registered on each of the plurality of accelerators, (ii) predict the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds, (iii) predict a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration, (iv) determine whether the predicted bitstream is already registered on one of the accelerators, (v) in response to a determination that the predicted bitstream is not registered on one of the accelerators, select, from the plurality of accelerators, an accelerator that satisfies the characteristics of the predicted bitstream, and (vi) in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream, register the predicted bitstream on the determined accelerator.

Example 2 includes the subject matter of Example 1, and wherein determining the one or more bitstreams registered on each accelerator includes monitoring bitstream submission and execution on each accelerator.

Example 3 includes the subject matter of any of Examples 1 and 2, and wherein determining the one or more bitstreams registered on each accelerator includes receiving bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.

Example 4 includes the subject matter of any of Examples 1-3, and wherein determining the one or more bitstreams registered on each accelerator includes updating the bitstream library to keep track of the bitstreams currently registered on each accelerator.

Example 5 includes the subject matter of any of Examples 1-4, and wherein predicting the next job to be requested for acceleration includes predicting the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.
Example 6 includes the subject matter of any of Examples 1-5, and wherein predicting the next job to be requested for acceleration includes predicting the next job to be requested for acceleration based on an execution mode of a bitstream for each available application currently executing on the plurality of compute sleds.

Example 7 includes the subject matter of any of Examples 1-6, and wherein predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application includes determining a past execution history of each bitstream for each available application.

Example 8 includes the subject matter of any of Examples 1-7, and wherein predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application includes utilizing machine learning to predict the execution mode.

Example 9 includes the subject matter of any of Examples 1-8, and wherein predicting the bitstream from the bitstream library includes predicting the bitstream based on available accelerators of the system.

Example 10 includes the subject matter of any of Examples 1-9, and wherein predicting the bitstream from the bitstream library includes predicting the bitstream based on a type of the predicted next job and a type of workload each accelerator is capable of accelerating.

Example 11 includes the subject matter of any of Examples 1-10, and wherein predicting the bitstream from the bitstream library includes predicting an execution mode of the bitstream for each available application.

Example 12 includes the subject matter of any of Examples 1-11, and wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream includes determining the accelerator based on a specific accelerator capability required by the predicted bitstream.

Example 13 includes the subject matter of any of Examples 1-12, and wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream includes determining the accelerator based on a capacity on the accelerator required by the predicted bitstream.

Example 14 includes the subject matter of any of Examples 1-13, and wherein the compute engine is further to determine a characteristic of each accelerator of the plurality of accelerators of the system.

Example 15 includes the subject matter of any of Examples 1-14, and wherein determining the characteristics of each accelerator of the system includes determining the availability of each accelerator.

Example 16 includes the subject matter of any of Examples 1-15, and wherein determining the availability of each accelerator includes determining a current load on each accelerator.

Example 17 includes the subject matter of any of Examples 1-16, and wherein determining the availability of each accelerator includes determining a number of free slots in each accelerator.

Example 18 includes the subject matter of any of Examples 1-17, and wherein determining the availability of each accelerator includes determining a number of free logic gates in each accelerator.

Example 19 includes the subject matter of any of Examples 1-18, and wherein determining the characteristics of each accelerator of the system includes determining a type of workload each accelerator is capable of accelerating.

Example 20 includes the subject matter of any of Examples 1-19, and wherein determining the type of workload each accelerator is capable of accelerating includes determining cryptography, compression, transcoding, matrix multiplication, and/or convolutional neural network computation capabilities of each accelerator.
Example 21 includes the subject matter of any of Examples 1-20, and wherein determining the characteristics of each accelerator of the system includes determining a physical distance from the requesting compute sled to each accelerator.

Example 22 includes the subject matter of any of Examples 1-21, and wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream includes determining the accelerator based on the characteristics of each accelerator.

Example 23 includes a method for provisioning an accelerator of a plurality of accelerators of a system, the method comprising determining, by an orchestrator server, one or more bitstreams registered on each of the plurality of accelerators; predicting, by the orchestrator server, the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds; predicting, by the orchestrator server, a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration; determining, by the orchestrator server, whether the predicted bitstream is already registered on one of the accelerators; selecting, by the orchestrator server and in response to a determination that the predicted bitstream is not registered on one of the accelerators, an accelerator from the plurality of accelerators that satisfies the characteristics of the predicted bitstream; and registering, by the orchestrator server and in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream, the predicted bitstream on the determined accelerator.

Example 24 includes the subject matter of Example 23, and wherein determining the one or more bitstreams registered on each accelerator includes monitoring, by the orchestrator server, bitstream submission and execution on each accelerator.

Example 25 includes the subject matter of any of Examples 23 and 24, and wherein determining the one or more bitstreams registered on each accelerator includes receiving, by the orchestrator server, bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.

Example 26 includes the subject matter of any of Examples 23-25, and wherein determining the one or more bitstreams registered on each accelerator includes updating, by the orchestrator server, a bitstream library to keep track of the bitstreams currently registered on each accelerator.

Example 27 includes the subject matter of any of Examples 23-26, and wherein predicting the next job to be requested for acceleration includes predicting, by the orchestrator server, the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.
Example 28 includes the subject matter of any of Examples 23-27, and wherein predicting the next job to be requested for acceleration includes predicting, by the orchestrator server, the next job to be requested for acceleration based on an execution mode of a bitstream for each available application currently executing on the plurality of compute sleds.

Example 29 includes the subject matter of any of Examples 23-28, and wherein predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application includes determining, by the orchestrator server, a past execution history of each bitstream for each available application.

Example 30 includes the subject matter of any of Examples 23-29, and wherein predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application includes utilizing, by the orchestrator server, machine learning to predict the execution mode.

Example 31 includes the subject matter of any of Examples 23-30, and wherein predicting the bitstream from the bitstream library includes predicting, by the orchestrator server, the bitstream based on available accelerators of the system.

Example 32 includes the subject matter of any of Examples 23-31, and wherein predicting the bitstream from the bitstream library includes predicting, by the orchestrator server, the bitstream based on a type of the predicted next job and a type of workload each accelerator is capable of accelerating.

Example 33 includes the subject matter of any of Examples 23-32, and wherein predicting the bitstream from the bitstream library includes predicting, by the orchestrator server, an execution mode of the bitstream for each available application.

Example 34 includes the subject matter of any of Examples 23-33, and wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream includes determining, by the orchestrator server, the accelerator based on a specific accelerator capability required by the predicted bitstream.

Example 35 includes the subject matter of any of Examples 23-34, and wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream includes determining, by the orchestrator server, the accelerator based on a capacity on the accelerator required by the predicted bitstream.

Example 36 includes the subject matter of any of Examples 23-35, and further including determining, by the orchestrator server, a characteristic of each accelerator of the plurality of accelerators of the system.

Example 37 includes the subject matter of any of Examples 23-36, and wherein determining the characteristics of each accelerator of the system includes determining, by the orchestrator server, the availability of each accelerator.

Example 38 includes the subject matter of any of Examples 23-37, and wherein determining the availability of each accelerator includes determining, by the orchestrator server, a current load on each accelerator.

Example 39 includes the subject matter of any of Examples 23-38, and wherein determining the availability of each accelerator includes determining, by the orchestrator server, a number of free slots in each accelerator.

Example 40 includes the subject matter of any of Examples 23-39, and wherein determining the availability of each accelerator includes determining, by the orchestrator server, a number of free logic gates in each accelerator.
Example 41 includes the subject matter of any of Examples 23-40, and wherein determining the characteristics of each accelerator of the system includes determining, by the orchestrator server, a type of workload each accelerator is capable of accelerating.

Example 42 includes the subject matter of any of Examples 23-41, and wherein determining the type of workload each accelerator is capable of accelerating includes determining, by the orchestrator server, cryptography, compression, transcoding, matrix multiplication, and/or convolutional neural network computation capabilities of each accelerator.

Example 43 includes the subject matter of any of Examples 23-42, and wherein determining the characteristics of each accelerator of the system includes determining, by the orchestrator server, a physical distance from the requesting compute sled to each accelerator.

Example 44 includes the subject matter of any of Examples 23-43, and wherein selecting the accelerator that satisfies the characteristics of the predicted bitstream includes determining, by the orchestrator server, the accelerator based on the characteristics of each accelerator.

Example 45 includes one or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to being executed, cause a computing device to perform the method of any of Examples 23-44.

Example 46 includes a computing device comprising means for performing the method of any of Examples 23-44.

Example 47 includes a computing device for provisioning an accelerator of a plurality of accelerators of a system, the computing device comprising a bitstream updater circuit to determine one or more bitstreams registered on each of the plurality of accelerators; a work predictor circuit to predict the next job to be requested for acceleration from an application of at least one compute sled of a plurality of compute sleds; a bitstream predictor circuit to predict a bitstream from a bitstream library to execute the predicted next job to be requested for acceleration; and an accelerator manager circuit to (i) determine whether the predicted bitstream is already registered on one of the accelerators, (ii) in response to a determination that the predicted bitstream is not registered on one of the accelerators, select, from the plurality of accelerators, an accelerator that satisfies the characteristics of the predicted bitstream, and (iii) in response to a determination of the accelerator that satisfies the characteristics of the predicted bitstream, register the predicted bitstream on the determined accelerator.

Example 48 includes the subject matter of Example 47, and wherein determining the one or more bitstreams registered on each accelerator includes monitoring bitstream submission and execution on each accelerator.

Example 49 includes the subject matter of any of Examples 47 and 48, and wherein determining the one or more bitstreams registered on each accelerator includes receiving bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.
Example 50 includes the subject matter of any of Examples 47-49, and wherein determining the one or more bitstreams registered on each accelerator includes updating a bitstream library to keep track of the bitstreams currently registered on each accelerator.
Example 51 includes the subject matter of any of Examples 47-50, and wherein predicting the next job to be requested for acceleration includes predicting the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.
Example 52 includes the subject matter of any of Examples 47-51, and wherein predicting the next job to be requested for acceleration includes predicting the next job to be requested for acceleration based on an execution mode of the bitstream for each available application currently executing on the plurality of compute sleds.
Example 53 includes the subject matter of any of Examples 47-52, and wherein predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application includes determining a past execution history of each bitstream for each available application.
Example 54 includes the subject matter of any of Examples 47-53, and wherein predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application includes utilizing machine learning to predict the execution mode.
Example 55 includes the subject matter of any of Examples 47-54, and wherein predicting the bitstream from the bitstream library includes predicting the bitstream based on available accelerators of the system.
Example 56 includes the subject matter of any of Examples 47-55, and wherein predicting the bitstream from the bitstream library includes predicting the bitstream based on the predicted type of the next job and the type of workload each accelerator is capable of accelerating.
Example 57 includes the subject matter of any of Examples 47-56, and wherein predicting the bitstream from the bitstream library includes predicting an execution mode of the bitstream for each available application.
Example 58 includes the subject matter of any of Examples 47-57, and wherein selecting an accelerator that satisfies the characteristics of the predicted bitstream includes determining the accelerator based on specific accelerator capabilities required by the predicted bitstream.
Example 59 includes the subject matter of any of Examples 47-58, and wherein selecting an accelerator that satisfies the characteristics of the predicted bitstream includes determining the accelerator based on a capacity on the accelerator required by the predicted bitstream.
Example 60 includes the subject matter of any of Examples 47-59, and wherein the compute engine is further to determine a characteristic of each of the plurality of accelerators of the system.
Example 61 includes the subject matter of any of Examples 47-60, and wherein determining the characteristics of each accelerator of the system includes determining the availability of each accelerator.
Example 62 includes the subject matter of any of Examples 47-61, and wherein determining the availability of each accelerator includes determining a current load on each accelerator.
Example 63 includes the subject matter of any of Examples 47-62, and wherein determining the availability of each accelerator includes determining a number of free slots in each accelerator.
Example 64 includes the subject matter of any of Examples 47-63, and wherein determining the availability of each accelerator includes determining a number of free logic gates in each accelerator.
Example 65 includes the subject matter of any of Examples 47-64, and wherein determining the characteristics of each accelerator of the system includes determining a type of workload that each accelerator is capable of accelerating.
Example 66 includes the subject matter of any of Examples 47-65, and wherein determining the type of workload each accelerator is capable of accelerating includes determining a cryptography, compression, transcoding, matrix multiplication, and/or convolutional neural network capability of each accelerator.
Example 67 includes the subject matter of any of Examples 47-66, and wherein determining the characteristics of each accelerator of the system includes determining a physical distance from the requesting compute sled to each accelerator.
Example 68 includes the subject matter of any of Examples 47-67, and wherein selecting accelerators that satisfy the characteristics of the predicted bitstream includes determining the accelerators based on the characteristics of each accelerator.
Example 69 includes a computing device for provisioning an accelerator of a plurality of accelerators of a system, the computing device comprising means for determining one or more bitstreams registered on each of the plurality of accelerators; means for predicting a next job to be requested for acceleration from an application of at least one of a plurality of compute sleds; means for predicting a bitstream from the bitstream library to perform the predicted next job requested to be accelerated; means for determining whether the predicted bitstream has been registered on one of the accelerators; means for selecting, in response to determining that the predicted bitstream is not registered on one of the accelerators, an accelerator from the plurality of accelerators that satisfies the characteristics of the predicted bitstream; and means for registering the predicted bitstream on the determined accelerator in response to determining the accelerator that satisfies the characteristics of the predicted bitstream.
Example 70 includes the subject matter of Example 69, and wherein the means for determining the one or more bitstreams registered on each accelerator includes means for monitoring bitstream submission and execution on each accelerator.
Example 71 includes the subject matter of any of Examples 69 and 70, and wherein the means for determining one or more bitstreams registered on each accelerator comprises means for receiving bitstream registration data from each accelerator, wherein the bitstream registration data indicates the bitstreams currently registered on the corresponding accelerator.
Example 72 includes the subject matter of any of Examples 69-71, and wherein the means for determining one or more bitstreams registered on each accelerator comprises means for updating a bitstream library to keep track of the bitstreams currently registered on each accelerator.
Example 73 includes the subject matter of any of Examples 69-72, and wherein the means for predicting the next job to be requested for acceleration includes means for predicting the next job to be requested for acceleration based on available applications currently executing on the plurality of compute sleds.
Example 74 includes the subject matter of any of Examples 69-73, and wherein the means for predicting the next job to be requested for acceleration includes means for predicting the next job to be requested for acceleration based on an execution mode of the bitstream for each available application currently executing on the plurality of compute sleds.
Example 75 includes the subject matter of any of Examples 69-74, and wherein the means for predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application comprises means for determining a past execution history of each bitstream for each available application.
Example 76 includes the subject matter of any of Examples 69-75, and wherein the means for predicting the next job to be requested for acceleration based on the execution mode of the bitstream for each available application comprises means for utilizing machine learning to predict the execution mode.
Example 77 includes the subject matter of any of Examples 69-76, and wherein the means for predicting the bitstream from the bitstream library comprises means for predicting the bitstream based on available accelerators of the system.
Example 78 includes the subject matter of any of Examples 69-77, and wherein the means for predicting the bitstream from the bitstream library comprises means for predicting the bitstream based on the predicted type of the next job and the type of workload each accelerator is capable of accelerating.
Example 79 includes the subject matter of any of Examples 69-78, and wherein the means for predicting the bitstream from the bitstream library comprises means for predicting an execution mode of the bitstream for each available application.
Example 80 includes the subject matter of any of Examples 69-79, and wherein the means for selecting an accelerator that satisfies the characteristics of the predicted bitstream includes means for determining the accelerator based on specific accelerator capabilities required by the predicted bitstream.
Example 81 includes the subject matter of any of Examples 69-80, and wherein the means for selecting an accelerator that satisfies the characteristics of the predicted bitstream comprises means for determining the accelerator based on a capacity on the accelerator required by the predicted bitstream.
Example 82 includes the subject matter of any of Examples 69-81, and further includes means for determining a characteristic of each of the plurality of accelerators of the system.
Example 83 includes the subject matter of any of Examples 69-82, and wherein the means for determining characteristics of each accelerator of the system comprises means for determining the availability of each accelerator.
Example 84 includes the subject matter of any of Examples 69-83, and wherein the means for determining the availability of each accelerator comprises means for determining the current load on each accelerator.
Example 85 includes the subject matter of any of Examples 69-84, and wherein the means for determining the availability of each accelerator comprises means for determining a number of free slots in each accelerator.
Example 86 includes the subject matter of any of Examples 69-85, and wherein the means for determining the availability of each accelerator comprises means for determining a number of free logic gates in each accelerator.
Example 87 includes the subject matter of any of Examples 69-86, and wherein the means for determining a characteristic of each accelerator of the system comprises means for determining a type of workload each accelerator is capable of accelerating.
Example 88 includes the subject matter of any of Examples 69-87, and wherein the means for determining the type of workload each accelerator is capable of accelerating includes means for determining a cryptography, compression, transcoding, matrix multiplication, and/or convolutional neural network capability of each accelerator.
Example 89 includes the subject matter of any of Examples 69-88, and wherein the means for determining a characteristic of each accelerator of the system comprises means for determining a physical distance from the requesting compute sled to each accelerator.
Example 90 includes the subject matter of any of Examples 69-89, and wherein the means for selecting accelerators that satisfy the characteristics of the predicted bitstream includes means for determining the accelerators based on the characteristics of each accelerator.
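The selection criteria enumerated above (capability match, free slots and gates, current load, and physical distance) can be illustrated with a minimal sketch. The class and function names below (Accelerator, Bitstream, select_accelerator) and the tie-breaking heuristic are assumptions chosen for illustration, not the claimed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Accelerator:
    workload_types: set   # e.g. {"compression", "transcoding"} (Examples 41-42)
    free_slots: int       # free slots (Example 39)
    free_gates: int       # free logic gates (Example 40)
    load: float           # current load, 0.0-1.0 (Example 38)
    distance: int         # distance to the requesting compute sled (Example 43)

@dataclass
class Bitstream:
    workload_type: str    # type of job the bitstream accelerates
    gates_required: int   # capacity required on the accelerator (Example 35)

def select_accelerator(bitstream: Bitstream,
                       accelerators: list) -> Optional[Accelerator]:
    """Pick an accelerator that satisfies the predicted bitstream's
    characteristics; prefer the nearest, least-loaded candidate."""
    candidates = [
        a for a in accelerators
        if bitstream.workload_type in a.workload_types     # capability (Example 34)
        and a.free_slots > 0                               # availability (Example 39)
        and a.free_gates >= bitstream.gates_required       # capacity (Example 35)
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda a: (a.distance, a.load))

accs = [
    Accelerator({"compression"}, free_slots=2, free_gates=5000, load=0.3, distance=1),
    Accelerator({"transcoding"}, free_slots=0, free_gates=9000, load=0.1, distance=0),
]
best = select_accelerator(Bitstream("compression", gates_required=4000), accs)
assert best is accs[0]  # only the first satisfies all characteristics
```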
The invention discloses securing of sensor data. Systems and methods include establishing a secure communication between an application module and a sensor module. The application module is executing on an information-handling machine, and the sensor module is coupled to the information-handling machine. The establishment of the secure communication is at least partially facilitated by a mutually trusted module.
1. A device comprising: a biometric sensor device comprising a sensor for collecting biometric information from a user, wherein the sensor is configured to generate sensor data corresponding to the biometric information, and the biometric sensor device further comprises: a key generation module configured to generate a key; a cryptographic engine for encrypting the sensor data; a communication module for establishing a secure channel with a host of an application, the host including a trusted execution environment; and a transmitter for transmitting encrypted sensor data to the host.
2. A system comprising: a host including a processor, a memory, and a trusted execution environment; and a biometric sensor device comprising a sensor for collecting biometric information from a user, wherein the sensor is configured to generate sensor data corresponding to the biometric information, and the biometric sensor device further comprises: a key generation module configured to generate a key; a cryptographic engine for encrypting the sensor data; a communication module for establishing a secure channel with the host of an application, the host including the trusted execution environment; and a transmitter for transmitting encrypted sensor data to the host, wherein the encrypted sensor data is for processing at the host using a process within the trusted execution environment.
Securing Sensor Data
This invention patent application is a divisional of the application with international application number PCT/US2015/051543, international filing date September 22, 2015, which entered the Chinese national stage with application number 201580045664.2, entitled "Securing Sensor Data".
Priority Claim
This application claims the benefit of and priority to U.S. Patent Application Serial No. ______.
Related Application
The subject matter of this application relates to the subject matter of the following commonly assigned and co-pending application:
■ U.S. Application No. 14/498,711, filed on September 26, 2014, entitled "Securing Audio Communications" and invented by PRADEEP M. PAPPACHAN, RESHMA LAL, RAKESH A. UGHREJA, KUMAR N. DWARAKANATH, and VICTORIA C. MOORE.
The above-referenced patent application is incorporated herein in its entirety by reference.
Background
Computing devices (e.g., smart phones, tablet computers, laptop computers, etc.) include various sensors that are capable of sensing/detecting user input, environmental conditions, overall device status, and the like. Such sensors may include microphones, cameras, touch sensors, gesture sensors, motion sensors, light sensors, temperature sensors, position sensors, and the like. As sensors become more prevalent, there is increasing interest in the security of sensor data and its impact on user privacy. Malware on the user device is capable of intercepting and accessing sensor data and thereby accessing private user data. Therefore, some protection may be required to prevent unauthorized access to private sensor/voice data.
Drawings
Other objects and advantages of the present invention will become apparent from the detailed description.
FIG. 1 is a block diagram showing a system configured to provide secure communication between an application module and a sensor module, in accordance with some embodiments.
FIG. 2 is a block diagram showing another system configured to provide secure communication between an application module and a sensor module, in accordance with some embodiments.
FIG. 3 is a flow chart illustrating a method for establishing secure communication between an application module and a sensor module, in accordance with some embodiments.
FIG. 4 is a flow chart illustrating a method for securely transmitting acquired sensor data from a sensor module to an application module, in accordance with some embodiments.
FIG. 5 is a flow chart illustrating a method for terminating a secure session between an application module and a sensor module, in accordance with some embodiments.
FIG. 6 is a block diagram showing a processor, in accordance with some embodiments.
FIG. 7 is a block diagram of a system on a chip configured to provide secure communication between an application module and a sensor module, in accordance with some embodiments.
While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings. It should be understood, however, that the drawings and detailed description are not intended to limit the invention to the particular forms disclosed; rather, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the invention.
Detailed Description
FIG. 1 is a block diagram showing a system configured to provide secure communication between an application module and a sensor module, in accordance with some embodiments.
In some embodiments, the sensor module 110 is configured to transmit sensor data to the application module 135 upon establishing a secure communication with the application module 135. The sensor module 110 can represent various types of sensors such as a microphone, a camera, a touch sensor, a gesture sensor, a motion sensor, a light sensor, a temperature sensor, a position sensor, and the like. Accordingly, sensor module 110 can be configured to generate and transmit audio data, video data, touch sensor data, gesture data, motion data, ambient light intensity data, ambient temperature, device location data, and the like, to the application module 135.
In some embodiments, sensor module 110 and application module 135 may be part of an information processing/computing system/environment, such as a personal laptop computer, a personal desktop computer, a smart phone, a dedicated sensor system, and the like. Data collected by sensor module 110 and sent to application module 135 may be monitored by other applications or malware executing in the same computing environment. The sensor data may typically contain private information, and access to that private information is preferably limited and controlled.
In some embodiments, the mutual trust module 160 is configured to, at least in part, facilitate establishing an encrypted secure communication between the application module 135 and the sensor module 110. In some embodiments, the sensor data exchanged between the sensor module 110 and the application module 135 thereby becomes very difficult, if not impossible, for other unauthorized application modules, operating systems, or other operating system components to intercept. As such, it is difficult, if not impossible, for malware (such as ring-0 malware) to gain unauthorized access to the encrypted sensor data exchanged between sensor module 110 and application module 135.
In some embodiments, the mutual trust module 160 can be configured to determine whether the application module 135 is a trusted application module prior to facilitating establishment of the encrypted communication. In some embodiments, the mutual trust module 160 can be configured to determine the credibility of the application module by determining whether the application module 135 is part of a trusted computing/execution environment. The mutual trust module may also use other methods to determine the credibility of the application module 135.
If the application module 135 is a trusted application module, the mutual trust module 160 can facilitate establishing encrypted secure communication between the application module 135 and the sensor module 110.
In some embodiments, the mutual trust module 160 can securely provide a secret key to the application module 135 and can securely provide the same key to the sensor module 110. The application module 135 and the sensor module 110 can then use the key to encrypt/decrypt the sensor data and other data exchanged between them. It should be noted that various other cryptographic methods can be employed to protect communications.
In some implementations, a public/private key cryptographic method can be used in which each party has a public and private key pair and the parties exchange public keys. When the application module, for example, sends a message to the sensor, the application module encrypts the message using the public key of the sensor. As such, only the sensor can decrypt the message, using the sensor's private key. The sensor can use the public key of the application to encrypt messages for the application, and as such, only the application can decrypt those messages, using the private key of the application.
In some implementations, the private keys of one or more of the modules can be pre-programmed into the modules. For example, in embodiments where sensor module 110 and mutual trust module 160 are implemented in hardware, the private key can be embedded into the module at production in a manner that makes it inaccessible to any other external module or unit.
After establishing the encrypted secure communication, the application module 135 and the sensor module 110 can begin to communicate securely. In some embodiments, application module 135 and sensor module 110 can communicate directly with each other. In other embodiments, the two modules can communicate via the mutual trust module 160. In still other embodiments, the two modules can communicate with the operating system via a communication bus shared with one or more other modules. Other communication modes (such as wireless communication) can also be used.
In some embodiments, the application module 135 can also transmit a session policy to the sensor module 110. The session policy may include certain rules and conditions governing the operation of sensor module 110 and application module 135. For example, the session policy can indicate whether the application module has exclusive access to the sensor module during the session. In general, the session policy may indicate such rules/conditions as: exclusive-mode operation of the sensor module for the session (e.g., only one application module is allowed to access the sensor module during the session). In other examples, multiple application modules may be allowed to access the sensor data, and the like.
For example, during a call, the phone application can request exclusive use of the microphone to prevent malware from intercepting the conversation. In another example, voice-based user authentication software may not require that the speech be kept private. Instead, the application may require a complete sound sample protected against modification or replacement. In this second example, the microphone can be shared with other applications.
In some embodiments, the encrypted sensor data is kept secret from other software and modules executing or present on the system. Such software and modules may include, for example, system software, kernels, sensor drivers within kernel space, sensor device drivers, and middleware. Thus, in some embodiments, malware (even malware capable of exploiting vulnerabilities in system-level software, such as ring-0 malware) may not be able to access the encrypted sensor data exchanged between sensor module 110 and application module 135.
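As an illustration of the public/private-key variant described above, the sketch below shows the application encrypting a message with the sensor's public key so that only the sensor's private key can recover it, and vice versa. This is a minimal sketch using the Python `cryptography` package; the key size, messages, and the choice of RSA-OAEP are assumptions for illustration, not details taken from this disclosure.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Each party holds its own key pair; only public keys are exchanged.
sensor_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
app_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Application -> sensor: encrypt with the sensor's public key.
request = sensor_private.public_key().encrypt(b"start secure session", oaep)
# Only the sensor's private key can decrypt the request.
assert sensor_private.decrypt(request, oaep) == b"start secure session"

# Sensor -> application: encrypt with the application's public key.
reply = app_private.public_key().encrypt(b"session accepted", oaep)
assert app_private.decrypt(reply, oaep) == b"session accepted"
```

In practice a public-key exchange like this would typically bootstrap a faster symmetric session key, which matches the shared-secret-key option the text describes first.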
In an alternative embodiment, the mutual trust module 160 can be configured to facilitate establishing encrypted secure communication between the application module 135 and the sensor module 110. In some embodiments, the mutual trust module 160 can also facilitate the secure exchange of one or more keys between the mutual trust module 160 and the application module 135 as part of establishing the encrypted secure communication between the application module 135 and the mutual trust module 160. Additionally, the mutual trust module 160 can have a proprietary, direct, non-shared connection with the sensor module 110. Thus, the mutual trust module 160 and the application module 135 can use the established encrypted secure communication to securely exchange data, and the mutual trust module 160 can use the proprietary link between the sensor module 110 and the mutual trust module 160 to securely exchange data. Through these two secure connections, the application module 135 can securely exchange data with the sensor module 110.
In other alternative embodiments, the mutual trust module 160 can be configured to facilitate establishing a first restricted access to a memory range for the application module 135. Additionally, the mutual trust module 160 can be configured to facilitate establishing a second restricted access to the same memory range for the sensor module 110. Thus, application module 135 and sensor module 110 may each be able to securely communicate with each other by writing data to, and reading data from, a memory range whose access is restricted to the two modules.
In still other alternative embodiments, the mutual trust module 160 can be physically included in the sensor module 110. As such, one or more units of the sensor module 110 can be configured to perform the functions of the mutual trust module 160.
FIG. 2 is a block diagram showing another system configured to provide secure communication between an application module and a sensor module, in accordance with some embodiments.
In some embodiments, the sensor module 210 is configured to securely exchange sensor data with the application module 250. The sensor module 210 can represent various types of sensors such as a microphone, a camera, a touch sensor, a gesture sensor, a motion sensor, a light sensor, a temperature sensor, a position sensor, and the like. Accordingly, sensor module 210 can be configured to generate and transmit audio data, video data, touch sensor data, gesture data, motion data, ambient light intensity data, ambient temperature, device location data, and the like, to the application module 250.
Sensor module 210 may also include one or more sensor processing modules 215 that are configured to perform the processing functions required by sensor module 210. Sensor processing module 215 can be configured to, for example, process data received by sensor hardware 235. In some embodiments, sensor hardware 235 is configured to interface with an environment or user to collect one or more kinds of data.
Additionally, sensor processing module 215 can be configured to perform cryptography-related computations to protect the data exchange between sensor module 210 and application module 250.
In some embodiments, sensor module 210 and application module 250 may be part of an information processing/computing system environment, such as a personal laptop computer, a personal desktop computer, a smart phone, a dedicated sensor computer system, and the like. In some embodiments, the various modules/components shown in the figures may be located in multiple systems.
Data collected by sensor module 210 and sent to application module 250 may be monitored by other applications or malware executing in the same computing environment. The sensor data may typically contain private information, and access to that private information is preferably limited and controlled.
In some embodiments, the mutual trust module 260 is configured to, at least in part, facilitate establishing an encrypted secure communication between the application module 250 and the sensor module 210. In some embodiments, the mutual trust module 260 can be configured to determine whether the application module 250 is a trusted application module as a condition for establishing secure communication between the application module 250 and the sensor module 210. For example, in an embodiment where a trusted execution environment exists in the computing system environment, the application module can be authenticated as a trusted module in response to determining that the application is part of the trusted execution environment. It should be noted that other methods can be used to authenticate the application module as a trusted application module.
In some embodiments, sensor module 210 and mutual trust module 260 can be pre-programmed with intrinsic trust for each other. For example, trust between the two devices can be established by a private and proprietary bus/connection between two hardware components, such as mutual trust module 260 and sensor module 210. As such, the authentication between the two hardware devices is implicit in the design.
In other embodiments, the application module 250 can generate a signed certificate that can be verified by a mutually trusted entity.
In response to the mutual trust module 260 determining that the application module is trusted, the mutual trust module 260 can facilitate establishing encrypted secure communication between the application module 250 and the sensor module 210. In some embodiments, the mutual trust module 260 can securely provide the same secret key to the application module 250 and the sensor module 210. In some embodiments, the mutual trust module 260 can securely transmit the secret keys to the application module 250 and the sensor module 210 using encryption protocols different from those used to exchange data between the application module 250 and the sensor module 210.
Application module 250 and sensor module 210 may then encrypt/decrypt sensor data using the shared secret key prior to transmission to securely communicate with each other. It should be noted that various other encryption protocols may be used to protect communication between the application module 250 and the sensor module 210.
The trusted execution environment 270 is a trusted environment for applications/application modules executing in the execution environment of the system, and the application module 250 is a member of the trusted execution environment.
The applications in the trusted execution environment 270 can be authenticated using various methods, modules, and systems not described herein.
In some embodiments, the application module 250 can use the cryptographic secure channel established by the mutual trust module 260 to conduct a secure sensor data session with the sensor module 210. In some embodiments, one or more sensor memory modules 220 can be configured to store an encryption key, and also to store encrypted and decrypted sensor data before/after processing, transmission, or reception. In addition to other sensor-related operations, the sensor processing module 215 can also be configured to perform encryption/decryption operations.
It should be noted that additional processing units can be used. For example, one or more processing units may be assigned multiple sensor data processing tasks, one or more processing units may be assigned multiple encryption/decryption tasks, and the like. It should be noted that one or more direct memory access units may be used to transfer data to/from the sensor memory modules 220, as well as to/from other memory units, such as system memory allocated to the execution environment.
In some embodiments, the mutual trust module 260 is configured to generate, as needed, a plurality of additional keys that can be used to provide additional protection for communication between the application module 250 and the sensor module 210. The mutual trust module 260 can generate a plurality of additional keys that can be used to protect the integrity of, for example, the exchanged sensor data. In some embodiments, the mutual trust module 260 can be configured to generate one or more message authentication codes (MACs) that can be used to authenticate and/or verify the integrity of the sensor data exchanged between the application module 250 and the sensor module 210. In some embodiments, the message authentication code can be used to authenticate the encrypted sensor data and determine if the sensor data was modified during transmission.
In some embodiments, after establishing the encrypted secure communication, the application module 250 can be configured to transmit the session ID and session policy for the current session to the sensor module 210. The session ID can be used to identify subsequent communications as part of the session, and the session policy can be used to establish one or more rules and conditions for the sensor module 210. Examples of rules/conditions that may be part of a session policy include: exclusive access by the application module to the sensor module; shared access to the sensor module by two or more application modules; disabling legacy access to the sensor module (e.g., by the OS/driver module 245), and the like.
It should be noted that, in some embodiments, the one or more rules and conditions indicated by the session policy may be implemented by the sensor processing module 215. In alternative embodiments, additional hardware (e.g., in a particular communication path) may be used (instead of, or in addition to, the sensor processing module) to implement the session policy. In some embodiments, a session policy implemented by hardware can limit sensor access to authorized software modules.
In some embodiments, sensor module 210 can dynamically program hardware access controls to prevent a new request from a software module from gaining access to the sensor data when that request violates the current session policy. The hardware access control may continue to restrict access until a command issued by the currently authorized software module, such as a command restoring access by the OS and other software modules to the sensor data, is received through the established secure communication channel.
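To make the session-policy concept above concrete, here is a minimal sketch of how a sensor module might represent a policy and screen incoming requests against it. The field and function names (SessionPolicy, request_allowed, allow_legacy_access) are hypothetical; the disclosure leaves the policy encoding and enforcement mechanism open.

```python
from dataclasses import dataclass

@dataclass
class SessionPolicy:
    session_id: int
    owner: str                          # application module that set the policy
    exclusive: bool = True              # only the owner may read sensor data
    allow_legacy_access: bool = False   # e.g., OS/driver module 245 access

def request_allowed(policy: SessionPolicy, requester: str, is_legacy: bool) -> bool:
    """Return True if a sensor-data request complies with the active policy."""
    if is_legacy:
        return policy.allow_legacy_access
    if policy.exclusive:
        return requester == policy.owner
    return True

# A phone call requests exclusive microphone access (see the example above).
policy = SessionPolicy(session_id=42, owner="phone_app")
assert request_allowed(policy, "phone_app", is_legacy=False)
assert not request_allowed(policy, "other_app", is_legacy=False)
assert not request_allowed(policy, "os_driver", is_legacy=True)
```

In a hardware-enforced variant, a check equivalent to request_allowed would be programmed into the access-control logic on the sensor's communication path rather than evaluated in software.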
In some embodiments, even if the OS/driver module 245 (or another application module or other sensor middleware) intercepts the secured sensor data exchanged between the application module 250 and the sensor module 210, the sensor data remains confidential.
In some embodiments, when the application module 250 determines that the sensor session is complete, the application module 250 can send a request to the mutual trust module 260 to initiate termination of the secure session. In response, the mutual trust module 260 can notify the sensor module 210 of the end of the secure sensor session between the sensor module 210 and the application module 250. In an alternative embodiment, the application module 250 can communicate the termination of the sensor session directly to the sensor module 210.
Sensor module 210 can then release any resources associated with the secure session and can then resume normal operations (unrestricted by application modules or session policies). In some embodiments, sensor module 210 may now allow the operating system and other non-trusted application modules to access its resources.
In some embodiments, if the application module terminates abnormally while in a secure communication session with the sensor module 210, the sensor module 210 (either by itself or at the request of the mutual trust module 260) may end the secure session, for example, after a timeout period. For example, the application module 250 can be configured to transmit a "heartbeat" signal to indicate to the mutual trust module 260 and/or the sensor module 210 that the application module is still functioning/operating normally. The absence of the heartbeat signal may accordingly trigger the timeout period.
In some embodiments, more than one application module can communicate securely with sensor module 210 at a given time. For example, a first application module may first establish a secure session with the sensor module 210 and transmit a first session policy to the sensor module 210. A second application module may then attempt to establish a session with the sensor module 210 via the mutual trust module 260. The mutual trust module 260 can grant the request and establish communication when, for example, there is no conflict with the first session policy.
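The heartbeat-based cleanup described above can be sketched as follows: the sensor side records the time of each heartbeat and tears the session down when none arrives within a timeout window. The timeout value and the class/method names are illustrative assumptions.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0  # assumed timeout window

class SecureSession:
    def __init__(self, session_id: int):
        self.session_id = session_id
        self.last_heartbeat = time.monotonic()
        self.active = True

    def on_heartbeat(self) -> None:
        # Called whenever the application module's heartbeat arrives.
        self.last_heartbeat = time.monotonic()

    def check_timeout(self) -> None:
        # Polled periodically by the sensor module; ends the session if the
        # application has gone silent (e.g., it terminated abnormally).
        if self.active and time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT_S:
            self.active = False
            self.release_resources()

    def release_resources(self) -> None:
        # Drop session keys and policy; resume normal, non-secure operation.
        print(f"session {self.session_id} ended after heartbeat timeout")
```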
FIG. 3 is a flow chart illustrating a method for establishing secure communication between an application module and a sensor module, in accordance with some embodiments. In some embodiments, the methods described herein may be implemented by one or more of the systems illustrated in FIGS. 1 and 2.
In some embodiments, processing begins at 300, wherein at block 310, the sensor module waits for communication from an application module. In some embodiments, the communication can be direct or through a mutual trust module. At decision 315, a determination is made as to whether communication has been received from the application module. If communication from the application module has not been received, decision 315 branches to the "no" branch, where processing returns to block 310.
On the other hand, if communication has been received from the application module, decision 315 branches to the "yes" branch, where at block 320, a request is received from the application module to initiate a secure session with the sensor module. In some embodiments, the request may be received and processed by a mutual trust module, which is a module trusted by both the application module and the sensor module.
Then, at decision 325, a determination is made as to whether the application module is a trusted module. In some embodiments, if the application module is executing within a trusted execution environment of the system, the application module can be determined to be a trusted application module. In some embodiments, this determination can be made by the mutual trust module. If the application module is not a trusted application module, decision 325 branches to the "no" branch, where processing returns to block 310.
On the other hand, if the application module is a trusted application module, decision 325 branches to the "yes" branch, wherein at block 330, the mutual trust module generates a plurality of secret encryption keys, which will be used by the application module and the sensor module to communicate securely with each other. The mutual trust module can also generate, as desired, a plurality of keys that can be used to protect the integrity of the exchanged sensor data.
At block 335, the mutual trust module securely transmits the secret key to the application module and the sensor module. It should be noted that various other security/encryption schemes can be used to protect the exchange of sensor and other data between the application module and the sensor module.
At block 337, the application module securely transmits a session policy for the sensor session to the sensor module using the security key provided by the mutual trust module. In some embodiments, the session policy may include certain rules and conditions for the sensor module, such as providing exclusive access to the sensor data to the application module. Additionally, the application module can transmit other relevant information, such as a session ID.
At block 340, the sensor module receives the session policy transmitted by the application module. In response, the sensor module configures, as needed, certain modules that are part of the sensor module (e.g., the processing units of the sensor module) to implement the session policy.
At block 345, the sensor module begins acquiring sensor data and transmits the sensor data securely to the application module (using the established secret secure communication) as needed. The application module can then decrypt and use the sensor data as needed.
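The FIG. 3 flow can be summarized in a compact sketch: the mutual trust module admits only applications executing in the trusted execution environment, generates and distributes the session keys, and lets the application install its session policy. All class and method names (MutualTrust, App, Sensor, install_keys) are assumed for illustration; in an embodiment this logic would live in hardware or firmware rather than Python.

```python
import os

class App:
    def __init__(self, app_id: str):
        self.app_id = app_id
        self.keys = None

    def install_keys(self, session_key: bytes, integrity_key: bytes) -> None:
        self.keys = (session_key, integrity_key)

    def session_policy(self) -> dict:
        return {"session_id": 42, "exclusive": True}

class Sensor:
    def __init__(self):
        self.keys = None
        self.policy = None

    def install_keys(self, session_key: bytes, integrity_key: bytes) -> None:
        self.keys = (session_key, integrity_key)

    def apply_policy(self, policy: dict) -> None:
        self.policy = policy

class MutualTrust:
    def __init__(self, tee_members: set):
        self._tee_members = tee_members  # apps known to execute in the TEE

    def establish_session(self, app: App, sensor: Sensor) -> bool:
        # Decision 325: only TEE members are trusted application modules.
        if app.app_id not in self._tee_members:
            return False                              # processing returns to block 310
        session_key = os.urandom(32)                  # block 330: secret encryption key
        integrity_key = os.urandom(32)                # optional integrity key
        app.install_keys(session_key, integrity_key)      # block 335
        sensor.install_keys(session_key, integrity_key)   # block 335
        sensor.apply_policy(app.session_policy())         # blocks 337-340
        return True                                   # block 345: secure data flow begins

mt = MutualTrust(tee_members={"auth_app"})
assert mt.establish_session(App("auth_app"), Sensor())
assert not mt.establish_session(App("untrusted_app"), Sensor())
```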
FIG. 4 is a flow chart illustrating a method for securely transmitting acquired sensor data from a sensor module to an application module, in accordance with some embodiments. In some embodiments, the methods described herein may be implemented by one or more of the systems illustrated in FIGS. 1 and 2. In some embodiments, the flow chart in this figure may represent, in greater detail, at least some of the functions represented by block 345 in FIG. 3.
At block 410, the sensor module waits for a request from an application module. At decision 415, a determination is made as to whether a request from the application module has been received. If a request from the application module has not been received, decision 415 branches to the "no" branch, where processing loops back to block 410. It should be noted that, in some embodiments, the request may be received by a mutual trust module.
On the other hand, if a request from the application module has been received, decision 415 branches to the "yes" branch, where at block 420 the application module requests to receive sensor data from the sensor module.
Then, at decision 425, another determination is made as to whether the request from the application module conforms to an existing session policy. If the request does not comply with the existing session policy, decision 425 branches to the "no" branch, where processing loops back to block 410. For example, a first application module can establish exclusive access to the sensor module through a first session policy. An access request from a second application module would then violate the existing session policy and may therefore be rejected.
In some embodiments, if the sensor module receives a sensor input request from a second application module, the sensor module will ignore the request. In some implementations, the sensor module may not be configured to process additional requests. In these implementations, for example, when the first application has requested exclusive access to the sensor module, a hardware-based access control mechanism in the sensor module can be used to deny the second application module access to the sensor data.
On the other hand, if the request does conform to the existing session policy, decision 425 branches to the "yes" branch, wherein at block 430, the sensor module collects sensor data.
At block 435, the sensor module encrypts the sensor data. Additionally, the sensor module can add integrity protection to the sensor data as needed. At block 440, the sensor module transmits the encrypted/protected sensor data to the application module. In some embodiments, the application module can then decrypt and use the sensor data as needed. Processing then returns to block 410.
FIG. 5 is a flow chart illustrating a method for terminating a secure session between an application module and a sensor module, in accordance with some embodiments. In some embodiments, the methods described herein may be implemented by one or more of the systems illustrated in FIGS. 1 and 2.
Processing begins at 500, wherein at block 510, the application module sends a signal indicating the end of the current sensor session, and at block 515, the sensor module receives from the application module the signal for ending the current sensor session. In some embodiments, the application module can communicate the end of the sensor session to the sensor module via a mutual trust module.
At block 520, the sensor module releases any resources associated with the sensor session, and at block 525, the sensor module cancels any rules/conditions that are part of the current session policy imposed by the application module. In some embodiments, the sensor module can now resume normal, non-secure operations until another application module requests a new secure sensor session. Processing then ends at 599.
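Blocks 430-440 (collect, encrypt, integrity-protect, transmit) could look like the following sketch, which pairs a symmetric cipher with an HMAC in an encrypt-then-MAC construction. It uses the Python `cryptography` package's Fernet recipe for the encryption step; the choice of Fernet and HMAC-SHA256, and all names, are assumptions for illustration rather than details from this disclosure.

```python
import hashlib
import hmac

from cryptography.fernet import Fernet

# Keys as provisioned by the mutual trust module (blocks 330/335; assumed).
session_key = Fernet.generate_key()
integrity_key = b"\x02" * 32  # stand-in for a provisioned MAC key

cipher = Fernet(session_key)

def protect(sensor_reading: bytes) -> bytes:
    # Block 435: encrypt, then append an HMAC tag over the ciphertext.
    # (Fernet is itself authenticated; the extra MAC mirrors the optional
    # integrity keys the disclosure describes.)
    ciphertext = cipher.encrypt(sensor_reading)
    tag = hmac.new(integrity_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def unprotect(message: bytes) -> bytes:
    # Application side: verify the tag in constant time, then decrypt.
    ciphertext, tag = message[:-32], message[-32:]
    expected = hmac.new(integrity_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("sensor data was modified in transit")
    return cipher.decrypt(ciphertext)

wire = protect(b"fingerprint-frame-0001")   # block 440: transmit `wire`
assert unprotect(wire) == b"fingerprint-frame-0001"
```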
FIG. 6 is a block diagram showing a processor, in accordance with some embodiments. FIG. 6 illustrates a processor core 600 in accordance with one embodiment. Processor core 600 can be the core of any type of processor, such as a microprocessor, an embedded processor, a digital signal processor (DSP), a network processor, or another device for executing code. Although FIG. 6 illustrates only one processor core 600, a processing element may alternatively include more than one of the processor core 600 shown in FIG. 6. Processor core 600 may be a single-threaded core, or, for at least one embodiment, processor core 600 may be multithreaded in that it may include more than one hardware thread context (or "logical processor") per core.
FIG. 6 also shows a memory 670 coupled to processor core 600. Memory 670 can be any of a wide variety of memories (including various layers of the memory hierarchy) as known to those skilled in the art or otherwise available. Memory 670 can include one or more code instructions 613 to be executed by the processor core 600. Processor core 600 follows a program sequence of instructions indicated by code 613. Each instruction enters front-end portion 610 and is processed by one or more decoders 620. The decoder may generate micro-operations (such as fixed-width micro-ops) in a predefined format as its output, or may generate other instructions, micro-instructions, or control signals that reflect the original code instructions. The front end 610 also includes register renaming logic 625 and scheduling logic 630, which collectively allocate resources and queue the operations corresponding to the converted instructions for execution.
Processor core 600 is shown including execution logic 650 having a set of execution units 655-1, 655-2 through 655-N. Some embodiments may include a number of execution units dedicated to a specific function or set of functions. Other embodiments may include only one execution unit, or one execution unit that can perform a particular function. Execution logic 650 performs the operations specified by the code instructions.
After completing execution of the operations specified by the code instructions, back-end logic 660 retires the instructions of code 613. In one embodiment, processor core 600 allows out-of-order execution of instructions but requires in-order retirement of instructions. Retirement logic 665 can take a variety of forms known to those skilled in the art (e.g., reorder buffers and the like). In this manner, processor core 600 is transformed during execution of code 613, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by register renaming logic 625, and any registers (not shown) modified by execution logic 650.
Although not illustrated in FIG. 6, a processing element may include other on-chip components along with processor core 600. For example, the processing element may include memory control logic along with processor core 600. The processing element may include I/O control logic, and/or may include I/O control logic integrated with memory control logic. The processing element may also include one or more caches.
In some embodiments, the code 613 can be configured to, at least in part, facilitate establishing cryptographic secure communication between the application module and the sensor module. In some embodiments, the application module can be executed in a secure environment.
In some embodiments, code 613 may be adapted to cause a transformation of a register or memory element corresponding to the establishment of the encrypted secure communication between the application module and the sensor module.
FIG. 7 is a block diagram of a system on a chip configured to provide secure communication between an application module and a sensor module, in accordance with some embodiments.
In some embodiments, system on chip 750 is another example of a system configured to at least partially establish cryptographic secure communication between an application executing in the system and the sensor module 715. The sensor module 715 can be configured to securely transmit sensor data acquired by the sensor hardware 725 to the application. In some embodiments, the sensor module 715 can be configured to securely transmit the data using encrypted secure communication established with the application.
It should be noted that one or more additional components/units may be included in system on chip 750, and that one or more of the components illustrated herein may be absent from system on chip 750. Additionally, it should be noted that one or more of the components can be implemented in hardware, firmware, software, or a combination thereof. Further, it should be noted that each of the one or more components may be implemented by one or more other units.
System on chip 750 can generally be designed as a single integrated circuit package. In some embodiments, the system on chip 750 can be fabricated on a single semiconductor wafer substrate. In various examples, various SOC design and fabrication methods can be used to construct the system on chip 750 to efficiently create a small computing system. Among other units, the system on chip 750 may include a plurality of processing units 715, a plurality of memory units 720, a plurality of storage units 725, a plurality of graphics processing units 727, and a plurality of communication units 735. It should be noted that, in other embodiments, one or more of the different devices and modules in system on chip 750 can be fabricated on a plurality of separate semiconductor wafer substrates.
Additionally, coupled to system on chip 750 may be one or more cameras for acquiring images/video, one or more microphones for acquiring audio, one or more antennas for facilitating electromagnetic transmission/reception for communication, one or more speakers for outputting audio, one or more touch screens for outputting images/video and receiving user input, and one or more keyboards and mice for receiving user input. Further, coupled to system on chip 750 may be one or more sensors, such as position sensors, proximity sensors, light sensors, accelerometers, magnetic sensors, pressure sensors, temperature sensors, biometric security sensors, and the like.
In some embodiments, the instructions/software code may be stored in a combination of non-volatile/volatile memory, such as storage unit 725 and memory unit 720. The instructions may be configured for processing by processor 715 to facilitate at least some of the functions of system on chip 750, such as facilitating, at least in part, establishment of secure communication between sensor module 715 and an application executing in the system.
In still other embodiments, at least a portion of the processing described above can be performed by the mutual trust module 760.
In some embodiments, system on chip 750 can be part of a portable device such as a mobile phone, a smart phone with a touch screen, a tablet computer, a laptop computer, a hybrid device, another communication device, and the like.
Example 1 can include an information processing system that can include a sensor module, an application module configured for execution on the information processing system, and a mutual trust module coupled to the sensor module and the application module. The mutual trust module is configured to, at least in part, facilitate establishing secure communication of the application module with the sensor module.
Example 2 may include the system of Example 1, wherein the mutual trust module is configured to at least partially facilitate at least one of: establishing encrypted secure communication between the application module and the sensor module, including exchanging one or more keys; establishing encrypted secure communication, including exchanging one or more keys with the application module, and establishing a dedicated link between the sensor module and the mutual trust module; establishing a first restricted access to a memory range for the application module and establishing a second restricted access to the memory range for the sensor module; and establishing encrypted secure communication with the application module, including exchanging one or more keys, wherein the mutual trust module is physically included in the sensor module.
Example 3 may include the system of Example 1 or 2, wherein the facilitation by the mutual trust module is based at least on the mutual trust module being configured to verify the credibility of the application module.
Example 4 may include the system of Example 1, 2, or 3, wherein the mutual trust module verifies the credibility of the application module based at least on the mutual trust module being configured to determine that the application module is executing in a trusted execution environment.
Example 5 may include the system of Example 1, 2, 3, or 4, comprising the application module transmitting a session policy to the sensor module based at least on the establishing of the secure communication. The sensor module is configured to implement the session policy using hardware in the sensor module.
Example 6 may include the system of Example 1, 2, 3, 4, or 5, comprising receiving, from another application module, another request for establishing another secure communication between the other application module and the sensor module. The mutual trust module is configured to establish the other secure communication based at least on determining that the other application module is a trusted application module and determining that the request conforms to the session policy.
Example 7 may include the system of Example 1, 2, 3, 4, or 5, wherein the sensor module is configured, based at least on establishing encrypted secure communication, to acquire sensor data, encrypt the sensor data using the established encrypted secure communication, and transmit the encrypted sensor data to the application module.
The application module is configured to receive the encrypted sensor data from the sensor module, decrypt the sensor data using the established encrypted secure communication, and process the decrypted sensor data.
Example 8 can include a method for securely exchanging information, the method comprising establishing secure communication between an application module and a sensor module. The application module is configured for execution on an information processing machine. The sensor module is coupled to the information processing machine. Establishing the secure communication is at least partially facilitated by a mutual trust module.
Example 9 may include the method of Example 8, wherein the mutual trust module is configured to at least partially facilitate at least one of: establishing encrypted secure communication between the application module and the sensor module, including exchanging one or more keys; establishing encrypted secure communication, including exchanging one or more keys with the application module, and establishing a dedicated link between the sensor module and the mutual trust module; establishing a first restricted access to a memory range for the application module and establishing a second restricted access to the memory range for the sensor module; and establishing encrypted secure communication with the application module, including exchanging one or more keys, wherein the mutual trust module is physically included in the sensor module.
Example 10 may include the method of Example 8 or 9, wherein the facilitation by the mutual trust module is based at least on the mutual trust module being configured to verify the credibility of the application module.
Example 11 may include the method of Example 8, 9, or 10, wherein the mutual trust module verifies the credibility of the application module based at least on the mutual trust module determining that the application module is executing in a trusted execution environment.
Example 12 may include the method of Example 8, 9, 10, or 11, comprising the application module transmitting a session policy to the sensor module based at least on the establishing of the secure communication. The sensor module is configured to implement the session policy.
Example 13 may include the method of Example 8, 9, 10, 11, or 12, comprising receiving, from another application module, another request for establishing another secure communication between the other application module and the sensor module. Establishing the other secure communication is based at least on determining that the other application module is a trusted application module and determining that the request conforms to the session policy.
Example 14 may include the method of Example 8, 9, 10, 11, or 12, wherein the sensor module is configured, based at least on establishing encrypted secure communication, to acquire sensor data, encrypt the sensor data using the established secure communication, and transmit the encrypted sensor data to the application module. The application module is configured to receive the encrypted sensor data from the sensor module, decrypt the sensor data using the established encrypted secure communication, and process the decrypted sensor data.
Example 15 can include at least one non-transitory machine-accessible storage medium having stored thereon a plurality of instructions, wherein the instructions are configured, when executed on a machine, to cause the machine to establish encrypted secure communication between an application module and a sensor module.
The application module is configured for execution on an information processing machine.
Example 16 may include the at least one storage medium of Example 15, wherein the instructions are configured to cause the machine to at least partially perform at least one of: establishing encrypted secure communication between the application module and the sensor module, including exchanging one or more keys; establishing encrypted secure communication, including exchanging one or more keys with the application module, and establishing a dedicated link between the sensor module and the mutual trust module; establishing a first restricted access to a memory range for the application module and establishing a second restricted access to the memory range for the sensor module; and establishing encrypted secure communication with the application module, including exchanging one or more keys, wherein the mutual trust module is physically included in the sensor module.
Example 17 may include the at least one storage medium of Example 15 or 16, wherein the instructions are configured to cause the machine to verify the credibility of the application module.
Example 18 may include the at least one storage medium of Example 15, 16, or 17, wherein the instructions are configured to cause the machine to verify the credibility of the application module based at least on determining that the application module is executing in a trusted execution environment.
Example 19 may include the at least one storage medium of Example 15, 16, 17, or 18, wherein the application module is configured to transmit a session policy to the sensor module based at least on the establishing of the encrypted secure communication. The sensor module is configured to implement the session policy.
Example 20 may include the at least one storage medium of Example 15, 16, 17, 18, or 19, wherein the instructions are configured to receive and process, from another application module, a request for establishing another secure communication between the other application module and the sensor module. The instructions are configured to cause the machine to establish the other secure communication with the sensor module based at least on determining that the other application module is a trusted application module and determining that the request conforms to the session policy.
Example 21 may include the at least one storage medium of Example 15, 16, 17, 18, or 19, wherein the sensor module is configured, based at least on establishing encrypted secure communication, to acquire sensor data, encrypt the sensor data using the established encrypted secure communication, and transmit the encrypted sensor data to the application module. The application module is configured to receive the encrypted sensor data from the sensor module, decrypt the sensor data using the established encrypted secure communication, and process the decrypted sensor data.
Example 22 can include an apparatus for securely exchanging information. The apparatus includes means for at least partially facilitating establishing secure communication between an application module and a sensor module.
The application module is configured for execution on an information processing machine and the sensor module is coupled to the information processing machine.
Example 23 can include the apparatus of Example 22, comprising means for at least partially facilitating at least one of:
establishing encrypted secure communication between the application module and the sensor module, including exchanging one or more keys;
establishing encrypted secure communication with the application module, including exchanging one or more keys, and establishing a dedicated link between the sensor module and the means for facilitating;
establishing a first restricted access to a memory range of the application module and establishing a second restricted access to a memory range of the sensor module; and
establishing encrypted secure communication with the application module, including exchanging one or more keys, wherein the means for facilitating is physically included in the sensor module.
Example 24 may include the apparatus of Example 22 or 23, comprising means for verifying the credibility of the application module.
Example 25 can include the apparatus of Example 22, 23, or 24, comprising means for determining that the application module is executing in a trusted execution environment.
Example 26 can include a mutual trust module that includes one or more processors and one or more memory units coupled to the one or more processors. The mutual trust module is configured to, at least in part, facilitate establishing secure communication between an application module and a sensor module. The sensor module is coupled to the mutual trust module, and the application module is coupled to the mutual trust module and the sensor module, and the application module is configured for execution on an information processing system.
Example 27 can include the mutual trust module of Example 26, the mutual trust module configured to at least partially facilitate at least one of:
establishing encrypted secure communication between the application module and the sensor module, including exchanging one or more keys;
establishing encrypted secure communication with the application module, including exchanging one or more keys, and establishing a dedicated link between the sensor module and the mutual trust module;
establishing a first restricted access to a memory range of the application module and establishing a second restricted access to a memory range of the sensor module; and
establishing encrypted secure communication with the application module, including exchanging one or more keys, wherein the mutual trust module is physically included in the sensor module.
Example 28 may include the mutual trust module of Example 26 or 27, the mutual trust module configured to verify the credibility of the application module.
Example 29 can include the mutual trust module of Example 26, 27, or 28, the mutual trust module configured to determine that the application module is executing in a trusted execution environment.
Example 30 may include the mutual trust module of Example 26, 27, 28, or 29, wherein the application module is configured to transmit a session policy to the sensor module based at least on the establishing of the encrypted secure communication.
The sensor module is configured to implement the session policy using hardware in the sensor module.
Example 31 may include the mutual trust module of Example 26, 27, 28, 29, or 30, the mutual trust module being configured to receive, from another application module, a request for establishing another secure communication between the another application module and the sensor module. The mutual trust module is configured to establish the another secure communication based at least on determining that the another application module is a trusted application module and determining that the request conforms to the session policy.
Example 32 may include the mutual trust module of Example 26, 27, 28, 29, or 30, wherein the sensor module is configured to acquire sensor data, encrypt the sensor data using the established encrypted secure communication, and transmit the encrypted sensor data to the application module. The application module is configured to receive the encrypted sensor data from the sensor module, decrypt the sensor data using the established encrypted secure communication, and process the decrypted sensor data.
Example 33 can include a sensor module that includes one or more processors and one or more memory units coupled to the one or more processors. The sensor module is configured for coupling to a mutual trust module, and the mutual trust module is configured to, at least in part, facilitate establishing secure communication between an application module and the sensor module. The application module is configured for execution on an information handling system.
Example 34 may include the sensor module of Example 33, wherein the mutual trust module is configured to at least partially facilitate at least one of:
establishing encrypted secure communication between the application module and the sensor module, including exchanging one or more keys;
establishing encrypted secure communication with the application module, including exchanging one or more keys, and establishing a dedicated link between the sensor module and the mutual trust module;
establishing a first restricted access to a memory range of the application module and establishing a second restricted access to a memory range of the sensor module; and
establishing encrypted secure communication with the application module, including exchanging one or more keys, wherein the mutual trust module is physically included in the sensor module.
Example 35 may include the sensor module of Example 33 or 34, wherein the mutual trust module facilitates establishing the secure communication based at least on the mutual trust module being configured to verify the credibility of the application module.
Example 36 may include the sensor module of Example 33, 34, or 35, wherein the mutual trust module verifies the credibility of the application module based at least on the mutual trust module being configured to determine that the application module is executing in a trusted execution environment.
Example 37 can include the sensor module of Example 33, 34, 35, or 36, comprising the application module transmitting a session policy to the sensor module based at least on the establishing of the secure communication. The sensor module is configured to implement the session policy using hardware in the sensor module.
Example 38 may include the sensor module of Example 33, 34, 35, 36, or 37, comprising receiving, from another application module, a request for establishing another secure communication between the another application module and the sensor module.
The mutual trust module is configured to establish the another secure communication based at least on determining that the another application module is a trusted application module and determining that the request conforms to the session policy.
Example 39 may include the sensor module of Example 33, wherein the sensor module is configured to acquire sensor data, encrypt the sensor data using the established encrypted secure communication based at least on establishing the encrypted secure communication, and transmit the encrypted sensor data to the application module. The application module is configured to receive the encrypted sensor data from the sensor module, decrypt the sensor data using the established encrypted secure communication, and process the decrypted sensor data.
One or more embodiments of the invention have been described above. It should be noted that these and any other embodiments are illustrative and are intended to illustrate rather than limit the invention. While the invention is broadly applicable to various types of systems, a skilled artisan will recognize that it is not possible to include all possible embodiments and contexts of the invention in this disclosure. Many alternative embodiments of the invention will be apparent to those skilled in the art upon reading this disclosure.
The various illustrative logical blocks, modules, circuits, and steps associated with the embodiments disclosed herein can be implemented as hardware, firmware, software, or combinations thereof. To clearly illustrate this interchangeability of hardware, firmware, and software, various illustrative components, blocks, modules, circuits, and steps are described above in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the specific application and the design constraints imposed on the overall system. A person skilled in the art can implement the described functions in various ways for each specific application, but such implementation decisions should not be construed as causing a departure from the scope of the invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the invention. Various modifications to the embodiments will be obvious to those skilled in the art, and the general principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. The invention is not intended to be limited to the embodiments shown herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.
The benefits and advantages that the invention may provide have been described above with respect to particular embodiments. These benefits and advantages, and any elements or limitations that may cause them to occur or become more pronounced, are not to be construed as critical, required, or essential features of any or all of the claims. The terms "comprises," "comprising," and any other variations thereof, as used herein, are intended to be interpreted as non-exclusively covering the elements or limitations that follow those terms.
Thus, a system, method, or other embodiment that includes a group of elements is not limited to only those elements, and may include other elements that are not explicitly listed or that are inherent to the claimed embodiments.
Although the invention has been described with reference to specific embodiments thereof, it should be understood that the embodiments are illustrative and that the scope of the invention is not limited to these embodiments. Many variations, modifications, additions, and improvements to the embodiments described above are possible. It is contemplated that such variations, modifications, additions, and improvements fall within the scope of the invention as described in the following claims.
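To make the secure-session flow described in the examples above more concrete, the following is a minimal, illustrative Python sketch of the handshake a mutual trust module might mediate: verifying that the application module executes in a trusted execution environment, exchanging a session key, and checking a session policy. All names (MutualTrustModule, SessionPolicy, and the toy XOR cipher) are hypothetical; a real implementation would rely on platform attestation and a vetted cryptographic library.

```python
# Hypothetical sketch of the mutual-trust handshake described in Examples 8-14.
# Names and the toy XOR "cipher" are illustrative only; a real implementation
# would use platform attestation and a vetted cryptographic library.
import os
from dataclasses import dataclass

@dataclass
class SessionPolicy:
    max_subscribers: int = 1          # how many app modules may share the sensor
    allow_raw_data: bool = False      # whether unencrypted reads are permitted

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'encryption' standing in for a real cipher (e.g., AES-GCM)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class MutualTrustModule:
    def __init__(self):
        self.sessions = {}            # app_id -> (session key, policy)

    def verify_credibility(self, app_id: str, in_tee: bool) -> bool:
        # Example 11: credibility rests on the app module executing in a trusted
        # execution environment; a real check would verify an attestation quote.
        return in_tee

    def establish_session(self, app_id: str, in_tee: bool, policy: SessionPolicy):
        if not self.verify_credibility(app_id, in_tee):
            raise PermissionError("application module is not trusted")
        if len(self.sessions) >= policy.max_subscribers:
            raise PermissionError("request does not conform to the session policy")
        key = os.urandom(16)          # Example 9: exchange one or more keys
        self.sessions[app_id] = (key, policy)
        return key

class SensorModule:
    def __init__(self, trust: MutualTrustModule):
        self.trust = trust            # Example 9: dedicated link to trust module

    def send_reading(self, app_id: str, reading: bytes) -> bytes:
        key, _policy = self.trust.sessions[app_id]
        return xor_cipher(key, reading)   # Example 14: encrypt before transmit

if __name__ == "__main__":
    trust = MutualTrustModule()
    sensor = SensorModule(trust)
    key = trust.establish_session("app-1", in_tee=True, policy=SessionPolicy())
    ciphertext = sensor.send_reading("app-1", b"temp=21.5C")
    assert xor_cipher(key, ciphertext) == b"temp=21.5C"   # app decrypts and processes
```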
In described examples, an integrated circuit (100) includes a substrate (102), a photodiode (110), and a Fresnel structure (120). The photodiode (110) is formed on the substrate (102), and it has a p-n junction (116). The Fresnel structure (120) is formed above the photodiode (110), and it defines a focal zone (122) that is positioned within a proximity of the p-n junction (116). In one aspect, the Fresnel structure (120) may include a trench pattern (132, 134, 136, 138) that functions as a diffraction means for redirecting and concentrating incident photons to the focal zone (122). In another aspect, the Fresnel structure (120) may include a wiring pattern that functions as a diffraction means for redirecting and concentrating incident photons to the focal zone (122). In yet another aspect, the Fresnel structure (120) may include a transparent dielectric pattern that functions as a refractive means for redirecting and concentrating incident photons to the focal zone (122).
CLAIMS
What is claimed is:
1. An integrated circuit, comprising:
a substrate;
a photodiode formed on the substrate, the photodiode having a p-n junction; and
a Fresnel structure formed above the photodiode, the Fresnel structure defining a focal zone positioned within a proximity of the p-n junction.
2. The integrated circuit of claim 1, wherein the Fresnel structure includes a trench pattern formed above the photodiode, the trench pattern including:
a first trench positioned adjacent to a central region of the Fresnel structure by a first radial distance, the first trench having a first trench width based on a first difference between a second radial distance and the first radial distance; and
a second trench positioned adjacent to the central region of the Fresnel structure by a third radial distance, the second trench having a second trench width based on a second difference between a fourth radial distance and the third radial distance.
3. The integrated circuit of claim 2, wherein:
the first radial distance is based on a focal length defined by the focal zone, a first multiplier, and a wavelength of an electromagnetic (EM) wave to be detected by the photodiode;
the second radial distance is based on the focal length, a second multiplier greater than the first multiplier, and the wavelength of the EM wave to be detected by the photodiode;
the third radial distance is based on the focal length, a third multiplier greater than the second multiplier, and the wavelength of the EM wave to be detected by the photodiode; and
the fourth radial distance is based on the focal length, a fourth multiplier greater than the third multiplier, and the wavelength of the EM wave to be detected by the photodiode.
4. The integrated circuit of claim 2, wherein:
the first trench includes a first ring trench parallel to a top surface of the substrate and laterally surrounding the central region of the Fresnel structure; and
the second trench includes a second ring trench parallel to the top surface of the substrate and laterally surrounding the first ring trench.
5. The integrated circuit of claim 4, wherein:
the first ring trench includes a first circular ring trench having a center overlapping with the central region of the Fresnel structure; and
the second ring trench includes a second circular ring trench concentric with the first circular ring trench.
6. The integrated circuit of claim 4, wherein:
the first ring trench includes a first octagonal ring trench having a center overlapping with the central region of the Fresnel structure; and
the second ring trench includes a second octagonal ring trench concentric with the first octagonal ring trench.
7. The integrated circuit of claim 4, wherein:
the first ring trench includes a first rectangular ring trench having a center overlapping with the central region of the Fresnel structure; and
the second ring trench includes a second rectangular ring trench concentric with the first rectangular ring trench.
8. The integrated circuit of claim 2, wherein:
the first trench includes a first pair of parallel trenches placed on opposite sides of the central region of the Fresnel structure; and
the second trench includes a second pair of parallel trenches bracketing the first pair of parallel trenches.
9. The integrated circuit of claim 2, wherein:
the first trench is etched into a top surface of the substrate; and
the second trench is etched into the top surface of the substrate.
10. The integrated circuit of claim 2, wherein:
the first trench is etched into an epitaxial layer grown above the substrate; and
the second trench is etched into the epitaxial layer grown above the substrate.
11. The integrated circuit of claim 2, wherein the Fresnel structure includes:
a first transparent dielectric structure occupying the first trench; and
a second transparent dielectric structure occupying the second trench.
12. The integrated circuit of claim 1, wherein the Fresnel structure includes a wiring pattern formed above the photodiode, the wiring pattern including:
a first zone plate formed in a wiring layer above the substrate, the first zone plate positioned adjacent to a central region of the Fresnel structure by a first radial distance, the first zone plate having a first width based on a first difference between a second radial distance and the first radial distance; and
a second zone plate formed in the wiring layer and positioned adjacent to the central region of the Fresnel structure by a third radial distance, the second zone plate having a second width based on a second difference between a fourth radial distance and the third radial distance.
13. The integrated circuit of claim 12, wherein the wiring layer includes a polysilicon layer formed on a dielectric layer above the substrate.
14. The integrated circuit of claim 12, wherein the wiring layer includes a metal layer formed on a dielectric layer above the substrate.
15. The integrated circuit of claim 12, wherein:
the first radial distance is based on a focal length defined by the focal zone, a first multiplier, and a wavelength of an electromagnetic (EM) wave to be detected by the photodiode;
the second radial distance is based on the focal length, a second multiplier greater than the first multiplier, and the wavelength of the EM wave to be detected by the photodiode;
the third radial distance is based on the focal length, a third multiplier greater than the second multiplier, and the wavelength of the EM wave to be detected by the photodiode; and
the fourth radial distance is based on the focal length, a fourth multiplier greater than the third multiplier, and the wavelength of the EM wave to be detected by the photodiode.
16. The integrated circuit of claim 12, wherein:
the first zone plate includes a first ring plate parallel to a top surface of the substrate and laterally surrounding the central region of the Fresnel structure; and
the second zone plate includes a second ring plate parallel to the top surface of the substrate and laterally surrounding the first ring plate.
17. The integrated circuit of claim 16, wherein:
the first ring plate includes a first circular ring plate having a center overlapping with the central region of the Fresnel structure; and
the second ring plate includes a second circular ring plate concentric with the first circular ring plate.
18. The integrated circuit of claim 16, wherein:
the first ring plate includes a first octagonal ring plate having a center overlapping with the central region of the Fresnel structure; and
the second ring plate includes a second octagonal ring plate concentric with the first octagonal ring plate.
19. The integrated circuit of claim 16, wherein:
the first ring plate includes a first rectangular ring plate having a center overlapping with the central region of the Fresnel structure; and
the second ring plate includes a second rectangular ring plate concentric with the first rectangular ring plate.
20. The integrated circuit of claim 12, wherein:
the first zone plate includes a first pair of parallel plates placed on opposite sides of the central region of the Fresnel structure; and
the second zone plate includes a second pair of parallel plates bracketing the first pair of parallel plates.
21. An integrated circuit, comprising:
a substrate;
a top surface positioned above the substrate;
a photodiode formed on the substrate, the photodiode having a p-n junction; and
a diffractive structure formed above the photodiode and beneath the top surface, the diffractive structure positioned to direct an electromagnetic (EM) wave from the top surface to a focal zone within a proximity of the p-n junction.
22. The integrated circuit of claim 21, wherein the diffractive structure defines a slit pattern positioned above the photodiode, the slit pattern including:
a first slit positioned adjacent to a central region of the diffractive structure by a first radial distance, the first slit having a first slit width based on a first difference between a second radial distance and the first radial distance; and
a second slit positioned adjacent to the central region of the diffractive structure by a third radial distance, the second slit having a second slit width based on a second difference between a fourth radial distance and the third radial distance.
23. The integrated circuit of claim 22, wherein:
the first radial distance is based on a focal length defined by the focal zone, a first multiplier, and a wavelength of the EM wave;
the second radial distance is based on the focal length, a second multiplier greater than the first multiplier, and the wavelength of the EM wave;
the third radial distance is based on the focal length, a third multiplier greater than the second multiplier, and the wavelength of the EM wave; and
the fourth radial distance is based on the focal length, a fourth multiplier greater than the third multiplier, and the wavelength of the EM wave.
24. The integrated circuit of claim 22, wherein:
the first slit includes a first ring slit parallel to the top surface and laterally surrounding the central region of the diffractive structure; and
the second slit includes a second ring slit parallel to the top surface and laterally surrounding the first ring slit.
25. The integrated circuit of claim 24, wherein:
the first ring slit includes a first circular ring slit having a center overlapping with the central region of the diffractive structure; and
the second ring slit includes a second circular ring slit concentric with the first circular ring slit.
26. The integrated circuit of claim 24, wherein:
the first ring slit includes a first octagonal ring slit having a center overlapping with the central region of the diffractive structure; and
the second ring slit includes a second octagonal ring slit concentric with the first octagonal ring slit.
27. The integrated circuit of claim 24, wherein:
the first ring slit includes a first rectangular ring slit having a center overlapping with the central region of the diffractive structure; and
the second ring slit includes a second rectangular ring slit concentric with the first rectangular ring slit.
28. The integrated circuit of claim 22, wherein:
the first slit includes a first pair of parallel slits placed on opposite sides of the central region of the diffractive structure; and
the second slit includes a second pair of parallel slits bracketing the first pair of parallel slits.
29. The integrated circuit of claim 21, wherein the diffractive structure includes shallow trenches etched into a top epitaxial layer grown above the substrate.
30. The integrated circuit of claim 21, wherein the diffractive structure includes wire gratings formed in an interconnect layer.
INTEGRATED PHOTODETECTOR
[0001] This relates to systems and techniques for manufacturing an integrated photodetector.
BACKGROUND
[0002] Photodetectors have many industrial and commercial applications. For example, photodetectors can be configured as proximity sensors, which are used in various consumer electronic products for sensing positions and motions of users. Depending on the sense range, proximity sensors can be broadly classified into two categories: short-range proximity sensors and long-range proximity sensors. For example, a short-range proximity sensor can be used in a hand-held device, such as a smart phone, for activating and deactivating a touch screen to avoid inadvertent inputs during a phone call. A long-range proximity sensor can be used in a video gaming system, such as a motion sensor, for detecting the relative motions of a user while ignoring the background.
[0003] A photodetector may be fabricated in an integrated circuit along with other circuits. The performance of a photodetector may depend on its ability to convert incident photons to a sense signal. A convergent lens (e.g., a convex lens) can be used to focus the incident photons for enhancing the performance of a photodetector. However, conventional convergent lenses are generally too costly to be fabricated alongside a photodetector in an integrated circuit. Moreover, the installation of conventional convergent lenses may be incompatible with the fabrication process of an integrated circuit.
SUMMARY
[0004] In described examples, an integrated photodetector can be fabricated within an integrated circuit alongside an optical device. The optical device is structured and configured to redirect incident electromagnetic (EM) waves to within a proximity of a p-n junction of a photodiode. As energy from the incident EM waves is absorbed and converted into electron-hole pairs near the p-n junction, the minority carriers may travel more efficiently and avoid recombination. This allows the integrated photodetector to generate sense signals with higher amplitudes and thus better resolution.
[0005] In one example implementation, an integrated circuit includes a substrate, a photodiode, and a Fresnel structure. The photodiode is formed on the substrate, and it has a p-n junction. The Fresnel structure is formed above the photodiode, and it defines a focal zone that is positioned within a proximity of the p-n junction. According to one aspect, the Fresnel structure may include a trench pattern that functions as a diffraction means for redirecting and concentrating incident photons to the focal zone. According to another aspect, the Fresnel structure may include a wiring pattern that functions as a diffraction means for redirecting and concentrating incident photons to the focal zone. According to yet another aspect, the Fresnel structure may include a transparent dielectric pattern that functions as a refractive means for redirecting and concentrating incident photons to the focal zone.
[0006] In another example implementation, an integrated circuit includes a substrate, a photodiode, and a diffractive structure. The photodiode is formed on the substrate, and it has a p-n junction. The diffractive structure is formed above the photodiode and beneath a top surface of the integrated circuit. The diffractive structure is positioned to direct an EM wave from the top surface to a focal zone within a proximity of the p-n junction.
According to one aspect, the diffractive structure may include a trench pattern that is configured to redirect and concentrate incident photons to the focal zone. According to another aspect, the diffractive structure may include a wiring pattern that is configured to redirect and concentrate incident photons to the focal zone.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1A shows a partial cross-sectional view of an exemplary integrated circuit according to one aspect.
[0008] FIG. 1B shows a partial cross-sectional view of an exemplary integrated circuit according to another aspect.
[0009] FIG. 1C shows a partial cross-sectional view of an exemplary integrated circuit according to another aspect.
[0010] FIG. 1D shows a partial cross-sectional view of an exemplary integrated circuit according to another aspect.
[0011] FIG. 2 shows a perspective view of an exemplary Fresnel structure according to one aspect.
[0012] FIG. 3 shows a top view of an exemplary Fresnel structure with a circular ring pattern according to one aspect.
[0013] FIG. 4 shows a top view of an exemplary Fresnel structure with an octagonal ring pattern according to one aspect.
[0014] FIG. 5 shows a top view of an exemplary Fresnel structure with a rectangular ring pattern according to one aspect.
[0015] FIG. 6 shows a top view of an exemplary Fresnel structure with a linear pattern according to one aspect.
[0016] FIG. 7 shows a schematic view of an exemplary photodetector circuit according to one aspect.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0017] Like reference symbols in the various drawings indicate like elements. The figures are not drawn to scale.
[0018] Example embodiments include an integrated solution for manufacturing a low-cost and high-performance photodetector within an integrated circuit.
[0019] FIG. 1A shows a partial cross-sectional view of an exemplary integrated circuit 100 according to one aspect. The integrated circuit 100 is fabricated within a single semiconductor die. The integrated circuit 100 is formed on a substrate 102. The substrate 102 can be a semiconductor substrate that includes a semiconducting material, such as silicon. The substrate 102 can be doped with p-type dopants. As shown in FIG. 1B, one or more epitaxial layers 103 can be grown directly on the substrate 102.
[0020] Referring to FIG. 7, the integrated circuit 100 includes a photodetector circuit 700, which is also formed on the substrate 102 and/or the epitaxial layer 103. The photodetector circuit 700 includes a photodiode 702, a Fresnel structure 704, an amplifier 706, and a feedback resistor 708. The Fresnel structure 704 is positioned above the photodiode 702 for redirecting electromagnetic (EM) waves to the proximity of a p-n junction of the photodiode 702. The anode of the photodiode 702 is coupled to a ground terminal, whereas the cathode of the photodiode 702 is coupled to a negative input electrode 712 of the amplifier 706. The amplifier 706 can be a trans-impedance amplifier biased by a bias voltage (VB) at its positive input electrode 714. The amplifier 706 is configured with a negative feedback loop, in which the feedback resistor 708 is coupled between the negative input electrode 712 and the output electrode 716 of the amplifier 706.
[0021] The photodiode 702 is configured in a reverse-bias mode with its cathode regulated at a higher voltage (e.g., VB) than its anode (e.g., VGND). In the reverse-bias mode, the photodiode 702 does not conduct any current.
However, upon receiving a sufficient amount of photons, the photodiode 702 will convert the incident photons to electron-hole pairs. The majority carriers will stay within a local region, whereas the minority carriers will travel across the p-n junction. For example, the electrons generated at the p-doped region (i.e., the anode) of the photodiode 702 will travel to the n-doped region (i.e., the cathode) of the photodiode 702. As a result of the travelling minority carriers, the photodiode 702 generates a sense current (IS) that immediately pulls down the potential at the negative input electrode 712. The amplifier 706 responds by increasing the potential at the output electrode 716. The increased potential at the output electrode 716 replenishes the charges drained at the negative input electrode 712 via the feedback resistor 708. In this manner, the photodetector circuit 700 generates an output voltage (e.g., VOUT) as a function of the sense current (IS) and the feedback resistance (R) of the feedback resistor 708. The Fresnel structure 704 allows more photons to be received and converted near the p-n junction of the photodiode 702, thereby enhancing the amplitude of the sense current (IS). Accordingly, the enhanced sense current (IS) improves the sensitivity of the output voltage (VOUT).
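As a quick numeric illustration of the relationship described in paragraph [0021], the following is a minimal Python sketch of the ideal trans-impedance behavior, VOUT ≈ VB + IS·R. The component values are illustrative assumptions, not values taken from the figures, and a real design would also account for amplifier bandwidth, offsets, and noise.

```python
# Minimal sketch of the ideal trans-impedance relationship from paragraph [0021]:
# the amplifier replenishes the sense current through the feedback resistor, so
# VOUT ~= VB + IS * R. Component values below are illustrative assumptions.
def tia_output_voltage(sense_current_a: float,
                       feedback_resistance_ohm: float,
                       bias_voltage_v: float) -> float:
    """Ideal output of the photodetector circuit 700 (no offset, infinite gain)."""
    return bias_voltage_v + sense_current_a * feedback_resistance_ohm

VB = 1.5       # assumed bias voltage at the positive input electrode 714, in volts
R = 1.0e6      # assumed feedback resistance of resistor 708, 1 Mohm

# A stronger photocurrent (e.g., from the Fresnel structure concentrating more
# photons near the p-n junction) produces a proportionally larger output swing.
for i_s in (10e-9, 100e-9, 1e-6):   # assumed sense currents: 10 nA, 100 nA, 1 uA
    print(f"IS = {i_s:.1e} A -> VOUT = {tia_output_voltage(i_s, R, VB):.3f} V")
```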
[0022] Referring again to FIG. 1A, a photodiode 110 can be formed in the substrate 102, and it can implement the functions of the photodiode 702 as described in FIG. 7. The photodiode 110 includes a p-doped region 112, an n-doped region 114, and a p-n junction 116 abutting between the p-doped region 112 and the n-doped region 114. Both the p-doped region 112 and the n-doped region 114 can be formed by implantations within the substrate 102 or within the epitaxial layer 103 as shown in FIG. 1B. The p-doped region 112 is a part of the anode of the photodiode 110, whereas the n-doped region 114 is a part of the cathode of the photodiode 110. Although FIG. 1A shows that the photodiode 110 includes only one p-doped region 112 and one n-doped region 114, the photodiode 110 may be formed by multiple p-doped and n-doped regions.
[0023] In one example implementation, the photodiode 110 can be formed by a complementary metal-oxide-semiconductor (CMOS) device (e.g., NMOS or PMOS). In another example implementation, the photodiode 110 can be formed by a bipolar junction transistor (BJT) device. In yet another example implementation, the photodiode 110 can be formed by a silicon-on-insulator (SOI) device. Depending on the type of device by which the photodiode 110 is formed, the p-n junction 116 may be a vertical junction, a horizontal junction, or a combination of both.
[0024] The integrated circuit 100 includes a top surface 104 and a bottom surface 105 opposing the top surface 104. The substrate 102 generally forms the bottom surface 105, whereas one or more dielectric layers 106 form the top surface. The dielectric layer 106 may be transparent or semi-transparent, which allows EM waves 190 to traverse from the top surface 104 to the photodiode 110. The integrated circuit 100 incorporates within its process flow a Fresnel structure 120 for redirecting and focusing the incident EM waves 190. The Fresnel structure 120 can implement the functions of the Fresnel structure 704 as described in FIG. 7. The Fresnel structure 120 is positioned between the top surface 104 and the p-n junction 116 of the photodiode 110. Though FIGS. 1A-1C show the Fresnel structure 120 being positioned above the photodiode 110, the Fresnel structure 120 can be formed as a part of the photodiode 110 as well.
[0025] The Fresnel structure 120 defines a focal zone 122 that surrounds a focal point 124 to which the incident EM waves are redirected and concentrated. The focal point 124 generally rests within the focal zone 122. The location of the focal point 124 may vary depending on the wavelengths of the EM waves 190. The focal zone 122 is positioned within a proximity of the p-n junction 116. According to one aspect, the proximity of the p-n junction 116 is a distance that is within an order of a wavelength of a target EM wave. In one example implementation, the proximity of the p-n junction 116 can be within a radial distance of a half wavelength from the p-n junction 116. In another example implementation, the proximity of the p-n junction 116 can be within a radial distance of one wavelength from the p-n junction 116. In yet another example implementation, the proximity of the p-n junction 116 can be within a radial distance of two wavelengths from the p-n junction 116.
[0026] According to another aspect, the proximity of the p-n junction 116 may depend on the penetrating power of the incident EM waves 190. Generally, EM waves with greater wavelengths penetrate deeper into the substrate 102 (or the epitaxial layer 103 as shown in FIG. 1B). Thus, the proximity of the p-n junction 116 may also be a vertical distance descending from the Fresnel structure 120. In one example implementation, the proximity of the p-n junction 116 can be within a vertical distance of 1 um where the incident EM waves 190 include ultraviolet waves. In another example implementation, the proximity of the p-n junction 116 can be within a vertical distance of 10 um where the incident EM waves 190 include visible light waves. In yet another example implementation, the proximity of the p-n junction 116 can be within a vertical distance of 30 um where the incident EM waves 190 include near infrared (NIR) waves.
[0027] The Fresnel structure 120 includes a central region 131 that is positioned directly above a section of the p-n junction 116. The Fresnel structure 120 also includes an array of slits that are arranged outward from the central region 131. In the implementation shown in FIGS. 1A-1C, the slits are defined by a trench pattern etched into the surface of the substrate 102 (or the epitaxial layer 103 as shown in FIG. 1B). Thus, the cost and complexity of fabricating the Fresnel structure 120 are relatively low when compared to conventional convergent lenses.
[0028] The trench pattern serves as a diffractive structure (e.g., a diffraction grating) for diffracting the EM waves 190 such that the diffracted EM waves 190 are concentrated within the focal zone 122. In one implementation, the trench pattern includes a first trench 132, a second trench 134, a third trench 136, and a fourth trench 138 on each side of the central region 131. Each of the first, second, third, and fourth trenches 132, 134, 136, and 138 is adjacent to and laterally surrounds the central region 131. Moreover, each of the first, second, third, and fourth trenches 132, 134, 136, and 138 is arranged symmetrically about the central region 131, thereby forming a pair of symmetrical slits above the photodiode 110.
[0029] The trench pattern also includes an array of moat regions separating the aforementioned trenches.
For example, the first and second trenches 132 and 134 are separated by a first moat region 133, the second and third trenches 134 and 136 are separated by a second moat region 135, and the third and fourth trenches 136 and 138 are separated by a third moat region 137. The outmost trench (e.g., the fourth trench 138) is surrounded by an outmost moat region (e.g., the fourth moat region 139). In one aspect, the central region 131 may include a central moat region as shown in FIGS. 1A-1C. In another aspect, the central moat region may further define a central trench (not shown), which can serve as a center slit (not shown). These moat regions (e.g., 131, 133, 135, 137, and 139) serve as a means for blocking out-of-phase EM waves 190, whereas the slits defined by the trenches (e.g., 132, 134, 136, and 138) serve as a means for passing in-phase EM waves 190. Together, the trenches and the moat regions diffract the incident EM waves 190 to have constructive interference within the focal zone 122.
[0030] The trenches (e.g., 132, 134, 136, and 138) can be formed during a shallow trench isolation process, which is a part of the process flow for fabricating the integrated circuit 100. The moat regions (e.g., 131, 133, 135, 137, and 139) can be used for forming one or more circuits for interfacing with the photodiode 110. For example, the moat regions can be used for forming the amplifier 706 as described in FIG. 7. To isolate a circuit formed on one moat region (e.g., 133) from another circuit formed on another moat region (e.g., 135), a field oxide material can be deposited in the intervening trench (e.g., 134) to separate these two moat regions.
[0031] According to one aspect, the field oxide material can form one or more refractive structures for enhancing the performance of the diffractive structure as described in FIGS. 1A and 1B. Referring to FIG. 1C, for example, the Fresnel structure 120 may further include a refractive structure that is configured to refract the incident EM waves 190 to be concentrated within the focal zone 122. In one implementation, the refractive structure includes patterned transparent dielectric (TD) blocks, such as a first TD block 142, a second TD block 144, a third TD block 146, and a fourth TD block 148. The first, second, third, and fourth TD blocks 142, 144, 146, and 148 are respectively deposited into, and thereby occupy, the first, second, third, and fourth trenches 132, 134, 136, and 138. Thus, each of the first, second, third, and fourth TD blocks 142, 144, 146, and 148 is adjacent to and laterally surrounds the central region 131. Moreover, each of the first, second, third, and fourth TD blocks 142, 144, 146, and 148 is arranged symmetrically about the central region 131, thereby forming a pair of symmetrical slits above the photodiode 110.
[0032] The transparent dielectric material can be a transparent oxide material, such as silicon oxide, or any other transparent dielectric material used during the fabrication process of the integrated circuit 100. In one implementation, each of the first, second, third, and fourth TD blocks 142, 144, 146, and 148 may have a relatively flat top surface that is substantially coplanar with the top surface of the substrate 102 (or the epitaxial layer 103 as shown in FIG. 1B). In another implementation, each of the first, second, third, and fourth TD blocks 142, 144, 146, and 148 may have a top surface that is slightly sloped away from the central region 131 to enhance the overall refractive power of the refractive structure.
Accordingly, each of the first, second, third, and fourth TD blocks 142, 144, 146, and 148 may have a near side wall, which is closer to the central region 131, and a far side wall, which is farther away from the central region 131. The near side walls may be deposited with more transparent dielectric material such that the near side walls are generally higher than the respective far side walls.
[0033] FIGS. 1A-1C illustrate that the Fresnel structure 120 can be formed on the substrate 102 or on the epitaxial layer 103 above the substrate 102. According to an additional aspect, a Fresnel structure may also be formed on one or more conductive wiring layers, which are positioned above the substrate (e.g., 102) and the epitaxial layer (e.g., 103). Referring to FIG. 1D, for example, a Fresnel structure 160 can be formed in one or more conductive wiring layers above the substrate 102. The wiring layers may include, but are not limited to, a polysilicon layer that is formed on the moat regions (e.g., 131, 133, 135, 137, and 139 as shown in FIGS. 1A-1C) and/or an interconnect metal layer that is formed on a dielectric layer (e.g., 106). Similar to the Fresnel structure 120, the Fresnel structure 160 is configured to redirect and focus the incident EM waves 190, and it can also implement the functions of the Fresnel structure 704 as described in FIG. 7.
[0034] The Fresnel structure 160 is positioned between the top surface 104 of the integrated circuit 100 and the p-n junction 116 of the photodiode 110. The Fresnel structure 160 defines a focal zone 162 that surrounds a focal point 164 to which the incident EM waves are redirected. The location of the focal point 164 may vary depending on the wavelengths of the EM waves. Regardless, the focal point 164 generally rests within the focal zone 162. Because the Fresnel structure 160 is positioned farther away from the photodiode 110 than the Fresnel structure 120, the focal length 166 of the Fresnel structure 160 is generally greater than the focal length 126 of the Fresnel structure 120 (see, e.g., FIG. 1A). The focal zone 162 is positioned within a proximity of the p-n junction 116. According to one aspect, the proximity of the p-n junction 116 is a distance that is within an order of a wavelength of the target EM wave. In one example implementation, the proximity of the p-n junction 116 can be within a radial distance of a half wavelength from the p-n junction 116. In another example implementation, the proximity of the p-n junction 116 can be within a radial distance of one wavelength from the p-n junction 116. In yet another example implementation, the proximity of the p-n junction 116 can be within a radial distance of two wavelengths from the p-n junction 116.
[0035] According to another aspect, the proximity of the p-n junction 116 may depend on the penetrating power of the incident EM waves 190. EM waves 190 with greater wavelengths generally penetrate deeper into the substrate 102 (or the epitaxial layer 103 as shown in FIG. 1B). Thus, the proximity of the p-n junction 116 may also be a vertical distance descending from the Fresnel structure 160. In one example implementation, the proximity of the p-n junction 116 can be within a vertical distance of 1 um where the target EM waves 190 include ultraviolet waves. In another example implementation, the proximity of the p-n junction 116 can be within a vertical distance of 10 um where the target EM waves 190 include visible light waves.
In yet another example implementation, the proximity of the p-n junction 116 can be within a vertical distance of 30 um where the target EM waves 190 include near infrared (NIR) waves.
[0036] The Fresnel structure 160 includes a central region 171 that is positioned directly above a section of the p-n junction 116. The Fresnel structure 160 also includes an array of slits (i.e., 172, 174, 176, and 178) that are arranged outward from the central region 171. In this implementation, the slits are defined by a zone plate pattern within one or more interconnect wiring layers. Thus, the cost and complexity of fabricating the Fresnel structure 160 are relatively low when compared to conventional convergent lenses.
[0037] The wiring pattern serves as a diffractive structure (e.g., a diffraction grating) for diffracting the EM waves 190 such that the diffracted EM waves 190 are concentrated within the focal zone 162. The wiring pattern includes an array of zone plates to form a diffraction grating. In one example implementation, the wiring pattern includes a central zone plate at the central region 171, a first zone plate 173, a second zone plate 175, a third zone plate 177, and a fourth zone plate 179. These zone plates (e.g., 171, 173, 175, 177, and 179) serve as a means for blocking out-of-phase EM waves 190. Moreover, these zone plates (e.g., 171, 173, 175, 177, and 179) define an array of slits that serve as a means for passing in-phase EM waves 190. In one example implementation, the zone plate grating defines a first slit 172, a second slit 174, a third slit 176, and a fourth slit 178. These slits are defined on both sides of the central region 171. Each of the first, second, third, and fourth slits 172, 174, 176, and 178 is adjacent to and laterally surrounds the central region 171. Moreover, each of the first, second, third, and fourth slits 172, 174, 176, and 178 is arranged symmetrically about the central region 171, thereby forming a pair of symmetrical slits above the photodiode 110. Together, the zone plates and the slits diffract the incident EM waves 190 to have constructive interference within the focal zone 162.
[0038] The zone plates (e.g., 171, 173, 175, 177, and 179) and the slits (e.g., 172, 174, 176, and 178) can be formed during one or more wiring deposition processes (e.g., polysilicon deposition and/or metal deposition). Thus, the formation of the zone plates and the slits can be achieved within the fabrication process flow of the integrated circuit 100. In an implementation where the zone plates (e.g., 171, 173, 175, 177, and 179) are formed in a polysilicon wiring layer, the zone plates can be positioned directly on or above the moat regions (e.g., 131, 133, 135, 137, and 139 as shown in FIGS. 1A-1C). Thus, each of the zone plates may align and overlap with a particular moat region to increase the vertical dimension of each slit. In this particular implementation, the Fresnel structure 120 is combined with the Fresnel structure 160 to form an array of slits, each of which includes a pair of elongated side walls defined by a corresponding trench (e.g., 132, 134, 136, or 138) and a pair of zone plates (e.g., 171, 173, 175, 177, and/or 179).
[0039] The zone plates (e.g., 171, 173, 175, 177, and 179) may be formed in a metal wiring layer above a polysilicon wiring layer as well. In an implementation where the Fresnel structure 120 is included, the zone plates can be positioned above the photodiode 110 without blocking the incident EM waves 190 from the Fresnel structure 120.
In this particular implementation, the zone plates (e.g., 171, 173, 175, 177, and 179) are arranged farther away from the central region 171 than the trenches (e.g., 132, 134, 136, or 138) are arranged from the central region 131 in order to leave the vertical region above the trenches substantially unobstructed. Alternatively, in an implementation where the Fresnel structure 120 is not included, the zone plates (e.g., 171, 173, 175, 177, and 179) can be positioned closer to the central region 171. For example, the zone plates (e.g., 171, 173, 175, 177, and 179) can be positioned directly above the locations where the moat regions (e.g., 131, 133, 135, 137, and 139) would be formed if they were included.
[0040] FIG. 2 shows a perspective view of an exemplary Fresnel structure 200 according to one aspect. The Fresnel structure 200 serves as a model for the Fresnel structure 120 and the Fresnel structure 160. For example, the Fresnel structure 200 helps illustrate the respective dimensions of the grating pattern of the Fresnel structures 120 and 160. The Fresnel structure 200 includes a symmetrically arranged bar pattern, which defines a symmetrically arranged slit pattern. The bar pattern includes a central bar BC, a pair of first side bars B1, a pair of second side bars B2, and a pair of third side bars B3. The slit pattern includes a pair of first slits S1, a pair of second slits S2, and a pair of third slits S3.
[0041] The bar pattern serves as a model for the moat regions (e.g., 131, 133, 135, and 137) of the Fresnel structure 120, and as a model for the zone plates (e.g., 171, 173, 175, and 177) of the Fresnel structure 160. Similarly, the slit pattern serves as a model for the trenches (e.g., 132, 134, and 136) of the Fresnel structure 120, and as a model for the slits (e.g., 172, 174, and 176) of the Fresnel structure 160. Thus, the radial distances (r1, r2, r3, and r4) as shown in FIG. 2 correspond to the radial distances (r1, r2, r3, and r4) as shown in FIGS. 1A-1C. Moreover, the trench widths (e.g., W1 and W2) as shown in FIGS. 1A-1C are modeled by the slits (e.g., S1 and S2) in FIG. 2, whereas the plate widths (e.g., WP1, WP2, and WP3) as shown in FIG. 1D are modeled by the bars (e.g., BC, B1, and B2). Furthermore, the focal length (f) represents a model of the focal length 126 of the Fresnel structure 120 and the focal length 166 of the Fresnel structure 160.
[0042] The respective widths of the bars and the slits can be expressed as a function of the radial distances. For example, the width of the central bar BC is defined by two times the first radial distance r1. For another example, the width of the first slit S1 is defined by a difference between the second radial distance r2 and the first radial distance r1. For yet another example, the width of the first side bar B1 is defined by a difference between the third radial distance r3 and the second radial distance r2. And similarly, the width of the second slit S2 is defined by a difference between the fourth radial distance r4 and the third radial distance r3.
From here, it can be derived that the width of the nth side bar Bn is defined by a difference between the (2n+1)th radial distance r(2n+1) and the (2n)th radial distance r(2n), whereas the width of the nth slit Sn is defined by a difference between the (2n)th radial distance r(2n) and the (2n-1)th radial distance r(2n-1).
[0043] To achieve constructive interference within the focal zone (e.g., the focal zone 122 or the focal zone 162), the diffractive distances (e.g., dn) and the focal length (f) may be separated by an order of a half-wavelength (i.e., λ/2). Accordingly, the nth diffractive distance (dn) can be expressed by the following Equation (1):
d_n = f + n·(λ/2)    (1)
[0044] Moreover, the first diffractive distance (d1) may be associated with the first radial distance (r1) by an integer multiplier of 1. The first diffractive distance (d1) and the first radial distance (r1) may join the focal length (f) to form a first right-angle triangle. Under this trigonometric principle, the nth diffractive distance (dn) may be associated with the nth radial distance (rn) by an integer multiplier of n to form the nth right-angle triangle with the focal length (f). To solve for a particular radial distance rn, one may apply the Pythagorean Theorem as expressed in the following Equations (2.1) and (2.2):
d_n² = r_n² + f²    (2.1)
r_n² = (f + n·λ/2)² − f²    (2.2)
[0045] Assuming the wavelength (λ) of the incident EM waves 190 is substantially smaller than the focal length (f), the particular radial distance rn may be approximated by Equation (3):
r_n ≈ √(n·λ·f)    (3)
[0046] Accordingly, the radial distance of the nth order can be determined based upon the focal length (f) as defined by the respective focal zone (e.g., 122 and 162), an integer multiplier (n) associated with the nth order, and the wavelength (λ) of the EM wave to be detected. Because the focal zone (e.g., 122) is located within a proximity of the p-n junction (e.g., 116) of the photodiode (e.g., 110), the diffusion distance of the minority carriers may be reduced by integrating a photodiode with the Fresnel structure (e.g., 120 and/or 160) according to one or more aspects. And because the diffusion distance dictates the frequency response of the photodetector (e.g., 700), the Fresnel structure (e.g., 120 and/or 160) may significantly enhance the performance of a photodetector. In one example implementation, the frequency response of a photodetector can be improved from 100 kHz to 10 MHz by incorporating the Fresnel structure (e.g., 120 and/or 160).
[0047] The integrated Fresnel structures (e.g., 120 and 160) as described hereinabove may adopt various planar patterns. For example, FIGS. 3-6 show several of these planar patterns, each of which can be adopted to form the cross-sectional configurations of the Fresnel structure as shown in FIGS. 1A-1D.
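As an illustration of Equations (1)-(3), the following is a minimal Python sketch that computes the exact and approximated zone radii, and the resulting slit and bar widths per paragraph [0042]. The numeric values (an 850 nm NIR wavelength and a 30 um focal length) are illustrative assumptions, not parameters taken from the figures.

```python
# Minimal sketch of the zone-radius arithmetic in Equations (1)-(3).
# The wavelength and focal length below are illustrative assumptions.
import math

def zone_radius(n: int, wavelength: float, focal_length: float) -> float:
    """Exact nth radial distance from Equations (1), (2.1), and (2.2):
    r_n = sqrt((f + n*lambda/2)**2 - f**2)."""
    d_n = focal_length + n * wavelength / 2          # Equation (1)
    return math.sqrt(d_n**2 - focal_length**2)       # Equation (2.2)

def zone_radius_approx(n: int, wavelength: float, focal_length: float) -> float:
    """Approximate nth radial distance from Equation (3), valid when
    lambda << f: r_n ~= sqrt(n * lambda * f)."""
    return math.sqrt(n * wavelength * focal_length)

wavelength = 0.85e-6    # 850 nm, an assumed NIR target wavelength
focal_length = 30e-6    # 30 um, an assumed focal length (cf. paragraph [0035])

radii = [zone_radius(n, wavelength, focal_length) for n in range(1, 6)]

# Per paragraph [0042]: slit Sn spans r(2n-1)..r(2n); bar Bn spans r(2n)..r(2n+1).
slit_1_width = radii[1] - radii[0]   # W1 = r2 - r1
bar_1_width = radii[2] - radii[1]    # width of first side bar B1 = r3 - r2

for n, r in enumerate(radii, start=1):
    approx = zone_radius_approx(n, wavelength, focal_length)
    print(f"r{n}: exact {r*1e6:.3f} um, approx {approx*1e6:.3f} um")
print(f"first slit width W1: {slit_1_width*1e6:.3f} um")
print(f"first side bar width B1: {bar_1_width*1e6:.3f} um")
```

At these assumed values the approximation tracks the exact radii to within a few tens of nanometers, consistent with the λ ≪ f condition stated in paragraph [0045].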
[0048] FIG. 3 shows a top view of an exemplary Fresnel structure with a circular ring pattern 300 according to one aspect. The circular ring pattern 300 is parallel to a top surface of the integrated circuit substrate (e.g., 102). The circular ring pattern 300 includes: a circular center plate 331 that overlaps with the central region (e.g., 131 and 171) of the Fresnel structure; a first circular ring plate 333 surrounding the circular center plate 331; a second circular ring plate 335 concentric with the first circular ring plate 333; a third circular ring plate 337 concentric with the second circular ring plate 335; and a fourth circular ring plate 339 concentric with the third circular ring plate 337.
[0049] Together, the circular center plate 331 and the circular ring plates 333, 335, 337, and 339 define several circular ring slits, including: a first circular ring slit 332 defined between the circular center plate 331 and the first circular ring plate 333; a second circular ring slit 334 defined between the first circular ring plate 333 and the second circular ring plate 335; a third circular ring slit 336 defined between the second circular ring plate 335 and the third circular ring plate 337; and a fourth circular ring slit 338 defined between the third circular ring plate 337 and the fourth circular ring plate 339.
[0050] According to one aspect, the cross-section A of the circular ring pattern 300 can be viewed as the cross-sectional view of the Fresnel structure 120 as shown and described in FIGS. 1A-1C. Thus, the circular ring slits 332, 334, 336, and 338 may correspond to the trenches 132, 134, 136, and 138 respectively. To that end, each of the trenches 132, 134, 136, and 138 is a circular ring trench adopting the planar configuration of the circular ring pattern 300. Moreover, the circular ring plates 333, 335, 337, and 339 may correspond to the moat regions 133, 135, 137, and 139 respectively. To that end, each of the moat regions 133, 135, 137, and 139 is a circular moat ring adopting the planar configuration of the circular ring pattern 300.
[0051] According to another aspect, the cross-section A of the circular ring pattern 300 can be viewed as the cross-sectional view of the Fresnel structure 160 as shown and described in FIG. 1D. Thus, the circular ring slits 332, 334, 336, and 338 may correspond to the slits 172, 174, 176, and 178 respectively. To that end, each of the slits 172, 174, 176, and 178 is a circular ring slit adopting the planar configuration of the circular ring pattern 300. Moreover, the circular ring plates 333, 335, 337, and 339 may correspond to the zone plates 173, 175, 177, and 179 respectively. To that end, each of the zone plates 173, 175, 177, and 179 is a circular ring plate adopting the planar configuration of the circular ring pattern 300.
[0052] FIG. 4 shows a top view of an exemplary Fresnel structure with an octagonal ring pattern 400 according to one aspect. The octagonal ring pattern 400 is parallel to a top surface of the integrated circuit substrate (e.g., 102).
The octagonal ring pattern 400 includes: an octagonal center plate 431 that overlaps with the central region (e.g., 131 and 171) of the Fresnel structure; a first octagonal ring plate 433 surrounding the octagonal center plate 431; a second octagonal ring plate 435 concentric with the first octagonal ring plate 433; a third octagonal ring plate 437 concentric with the second octagonal ring plate 435; and a fourth octagonal ring plate 439 concentric with the third octagonal ring plate 437.
[0053] Together, the octagonal center plate 431 and the octagonal ring plates 433, 435, 437, and 439 define several octagonal ring slits, including: a first octagonal ring slit 432 defined between the octagonal center plate 431 and the first octagonal ring plate 433; a second octagonal ring slit 434 defined between the first octagonal ring plate 433 and the second octagonal ring plate 435; a third octagonal ring slit 436 defined between the second octagonal ring plate 435 and the third octagonal ring plate 437; and a fourth octagonal ring slit 438 defined between the third octagonal ring plate 437 and the fourth octagonal ring plate 439.
[0054] According to one aspect, the cross-section A of the octagonal ring pattern 400 can be viewed as the cross-sectional view of the Fresnel structure 120 as shown and described in FIGS. 1A-1C. Thus, the octagonal ring slits 432, 434, 436, and 438 may correspond to the trenches 132, 134, 136, and 138 respectively. To that end, each of the trenches 132, 134, 136, and 138 is an octagonal ring trench adopting the planar configuration of the octagonal ring pattern 400. Moreover, the octagonal ring plates 433, 435, 437, and 439 may correspond to the moat regions 133, 135, 137, and 139 respectively. To that end, each of the moat regions 133, 135, 137, and 139 is an octagonal moat ring adopting the planar configuration of the octagonal ring pattern 400.
[0055] According to another aspect, the cross-section A of the octagonal ring pattern 400 can be viewed as the cross-sectional view of the Fresnel structure 160 as shown and described in FIG. 1D. Thus, the octagonal ring slits 432, 434, 436, and 438 may correspond to the slits 172, 174, 176, and 178 respectively. To that end, each of the slits 172, 174, 176, and 178 is an octagonal ring slit adopting the planar configuration of the octagonal ring pattern 400. Moreover, the octagonal ring plates 433, 435, 437, and 439 may correspond to the zone plates 173, 175, 177, and 179 respectively. To that end, each of the zone plates 173, 175, 177, and 179 is an octagonal ring plate adopting the planar configuration of the octagonal ring pattern 400.
[0056] FIG. 5 shows a top view of an exemplary Fresnel structure with a rectangular ring pattern 500 according to one aspect. The rectangular ring pattern 500 is parallel to a top surface of the integrated circuit substrate (e.g., 102).
The rectangular ring pattern 500 includes: a rectangular center plate 531 overlapping the central region (e.g., 131 and 171) of the Fresnel structure; a first rectangular ring plate 533 surrounding the rectangular center plate 531; a second rectangular ring plate 535 concentric with the first rectangular ring plate 533; a third rectangular ring plate 537 concentric with the second rectangular ring plate 535; and a fourth rectangular ring plate 539 concentric with the third rectangular ring plate 537.

[0057] Together, the rectangular center plate 531 and the rectangular ring plates 533, 535, 537, and 539 define several rectangular ring slits, including: a first rectangular ring slit 532 defined between the rectangular center plate 531 and the first rectangular ring plate 533; a second rectangular ring slit 534 defined between the first rectangular ring plate 533 and the second rectangular ring plate 535; a third rectangular ring slit 536 defined between the second rectangular ring plate 535 and the third rectangular ring plate 537; and a fourth rectangular ring slit 538 defined between the third rectangular ring plate 537 and the fourth rectangular ring plate 539.

[0058] According to one aspect, the cross-section A of the rectangular ring pattern 500 can be viewed as the cross-sectional view of the Fresnel structure 120 as shown and described in FIGS. 1A-1C. Thus, the rectangular ring slits 532, 534, 536, and 538 may correspond to the trenches 132, 134, 136, and 138, respectively. To that end, each of the trenches 132, 134, 136, and 138 is a rectangular ring trench adopting the planar configuration of the rectangular ring pattern 500. Moreover, the rectangular ring plates 533, 535, 537, and 539 may correspond to the moat regions 133, 135, 137, and 139, respectively. To that end, each of the moat regions 133, 135, 137, and 139 is a rectangular moat ring adopting the planar configuration of the rectangular ring pattern 500.

[0059] According to another aspect, the cross-section A of the rectangular ring pattern 500 can be viewed as the cross-sectional view of the Fresnel structure 160 as shown and described in FIG. 1D. Thus, the rectangular ring slits 532, 534, 536, and 538 may correspond to the slits 172, 174, 176, and 178, respectively. To that end, each of the slits 172, 174, 176, and 178 is a rectangular ring slit adopting the planar configuration of the rectangular ring pattern 500. Moreover, the rectangular ring plates 533, 535, 537, and 539 may correspond to the zone plates 173, 175, 177, and 179, respectively. To that end, each of the zone plates 173, 175, 177, and 179 is a rectangular ring plate adopting the planar configuration of the rectangular ring pattern 500.

[0060] FIG. 6 shows a top view of an exemplary Fresnel structure with a linear pattern 600 according to one aspect. The linear pattern 600 is parallel to a top surface of the integrated circuit substrate (e.g., 102).
The linear pattern 600 includes: a rectangular center plate 631 overlapping the central region (e.g., 131 and 171) of the Fresnel structure; a first rectangular strip plate 633 surrounding the rectangular center plate 631; a second rectangular strip plate 635 concentric with the first rectangular strip plate 633; a third rectangular strip plate 637 concentric with the second rectangular strip plate 635; and a fourth rectangular strip plate 639 concentric with the third rectangular strip plate 637.

[0061] Together, the rectangular center plate 631 and the rectangular strip plates 633, 635, 637, and 639 define several rectangular strip slits, including: a first rectangular strip slit 632 defined between the rectangular center plate 631 and the first rectangular strip plate 633; a second rectangular strip slit 634 defined between the first rectangular strip plate 633 and the second rectangular strip plate 635; a third rectangular strip slit 636 defined between the second rectangular strip plate 635 and the third rectangular strip plate 637; and a fourth rectangular strip slit 638 defined between the third rectangular strip plate 637 and the fourth rectangular strip plate 639.

[0062] According to one aspect, the cross-section A of the linear pattern 600 can be viewed as the cross-sectional view of the Fresnel structure 120 as shown and described in FIGS. 1A-1C. Thus, the rectangular strip slits 632, 634, 636, and 638 may correspond to the trenches 132, 134, 136, and 138, respectively. To that end, each of the trenches 132, 134, 136, and 138 is a rectangular strip trench adopting the planar configuration of the linear pattern 600. Moreover, the rectangular strip plates 633, 635, 637, and 639 may correspond to the moat regions 133, 135, 137, and 139, respectively. To that end, each of the moat regions 133, 135, 137, and 139 is a rectangular moat strip adopting the planar configuration of the linear pattern 600.

[0063] According to another aspect, the cross-section A of the linear pattern 600 can be viewed as the cross-sectional view of the Fresnel structure 160 as shown and described in FIG. 1D. Thus, the rectangular strip slits 632, 634, 636, and 638 may correspond to the slits 172, 174, 176, and 178, respectively. To that end, each of the slits 172, 174, 176, and 178 is a rectangular strip slit adopting the planar configuration of the linear pattern 600. Moreover, the rectangular strip plates 633, 635, 637, and 639 may correspond to the zone plates 173, 175, 177, and 179, respectively. To that end, each of the zone plates 173, 175, 177, and 179 is a rectangular strip plate adopting the planar configuration of the linear pattern 600.
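For all four pattern variants, the plate and slit boundaries can be generated from the zone relation given above. The following minimal Python sketch (purely illustrative, with hypothetical wavelength and focal-length values; it is not part of this disclosure) pairs successive zone radii into the alternating plate/slit annuli of the circular ring pattern 300:

import math

def zone_radii(wavelength_um: float, focal_um: float, zones: int) -> list[float]:
    """Boundary radius of each Fresnel zone: r_n = sqrt(n*lambda*f + (n*lambda/2)**2)."""
    return [math.sqrt(n * wavelength_um * focal_um + (n * wavelength_um / 2) ** 2)
            for n in range(1, zones + 1)]

def ring_pattern(radii: list[float]) -> list[tuple[str, float, float]]:
    """Pair consecutive boundaries into alternating slit/plate annuli, mirroring
    the center plate 331 / slit 332 / plate 333 / ... ordering of pattern 300."""
    annuli = [("center plate", 0.0, radii[0])]
    for i in range(len(radii) - 1):
        kind = "slit" if i % 2 == 0 else "plate"
        annuli.append((kind, radii[i], radii[i + 1]))
    return annuli

# Hypothetical design: 0.94 um wavelength, 20 um focal length, 9 zone boundaries,
# which yields one center plate, four slits, and four plates, as in pattern 300.
for kind, r_in, r_out in ring_pattern(zone_radii(0.94, 20.0, 9)):
    print(f"{kind:12s} from {r_in:6.2f} um to {r_out:6.2f} um")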
[0064] In this description, the term "configured to" describes structural and functional characteristics of one or more tangible non-transitory components. For example, the term "configured to" can include a particular configuration that is designed or dedicated for performing a certain function. Accordingly, a device is "configured to" perform a certain function if the device includes tangible non-transitory components that can be enabled, activated, or powered to perform that certain function. Also, for example, when used to describe a device, the term "configured to" does not require the device to be configurable at any given point in time.

[0065] Regarding the various functions performed by the components (e.g., elements, resources) described above, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component that performs the specified function of the described component (i.e., is functionally equivalent), even if it is not structurally equivalent to the disclosed structure. A particular feature may have been described with respect to only one of several implementations, but such a feature may be combined with one or more other features of the other implementations.

[0066] Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in suitable subcombinations.

[0067] Modifications are possible in the described embodiments, and other embodiments are possible, within the scope of the claims.
Methods, systems, and apparatuses for wafer-level integrated circuit packages are described. An IC package includes an IC chip, an insulating layer on the IC chip, a plurality of vias, a plurality of routing interconnects, and a plurality of bump interconnects. The IC chip has a plurality of terminals configured in an array on a surface of the IC chip. The plurality of vias through the insulating layer provide access to the plurality of terminals. Each of the plurality of routing interconnects has a first portion and a second portion. The first portion of each routing interconnect is in contact with a respective terminal of the plurality of terminals through a respective via, and the second portion of each routing interconnect extends over the insulating layer. Each bump interconnect of the plurality of bump interconnects is connected to the second portion of a respective routing interconnect of the plurality of routing interconnects, whereby the insulating layer absorbs stress applied to the bump interconnects.
1. A method for forming an integrated circuit package, comprising:
receiving a wafer having a plurality of integrated circuit areas, each integrated circuit area having a plurality of terminals arranged in an array on a surface of the wafer;
forming an insulating layer on the wafer;
forming a plurality of vias through the insulating layer to provide connection to the plurality of terminals of each integrated circuit area;
forming a plurality of routing interconnects on the insulating layer, such that each routing interconnect has a first portion connected to a corresponding terminal through a corresponding via in the insulating layer, and a second portion extending over the insulating layer; and
forming a plurality of bump interconnects on the plurality of routing interconnects, such that each bump interconnect is connected to the second portion of a corresponding one of the plurality of routing interconnects;
wherein forming the plurality of routing interconnects includes placing an under-bump metallization layer on an area of each routing interconnect used for connecting the bump interconnect.

2. The method of forming an integrated circuit package according to claim 1, wherein forming the plurality of routing interconnects includes:
stacking multiple layers of material to form the second portion of each routing interconnect; and
removing, in the area used for connecting the bump interconnects, at least the outermost layer of the stacked material.

3. The method of forming an integrated circuit package according to claim 1, wherein forming the plurality of routing interconnects includes:
placing a solderable material on the area of each routing interconnect used for connecting the bump interconnects.

4. The method of forming an integrated circuit package according to claim 1, wherein forming the plurality of routing interconnects includes forming a first routing interconnect having a first portion connected to the corresponding terminal through a first via and a second portion extending over the insulating layer;
wherein forming the plurality of bump interconnects on the plurality of routing interconnects includes forming a first bump interconnect connected to the second portion of the first routing interconnect, the first via having an edge position closest to the first bump interconnect, and the first bump interconnect having a bottom edge position closest to the first via; and
wherein forming the first bump interconnect includes forming a first bump interconnect that does not cover the first via, such that the distance between the edge position of the first via and the bottom edge position of the first bump interconnect is greater than zero.

5. The method of forming an integrated circuit package according to claim 1, wherein forming the plurality of routing interconnects includes forming a first routing interconnect having a first portion connected to the corresponding terminal through a first via and a second portion extending over the insulating layer; and
wherein forming the plurality of bump interconnects on the plurality of routing interconnects includes forming a first bump interconnect connected to the second portion of the first routing interconnect, the first bump interconnect covering the first via.

6. The method of forming an integrated circuit package according to claim 5, wherein an opening of the first via has a center point and a bottom of the first bump interconnect also has a center point, and wherein forming the first bump interconnect connected to the second portion of the first routing interconnect includes:
forming the first bump interconnect such that the distance along the first routing interconnect between the center point of the first via and the bottom center point of the first bump interconnect is greater than zero.

7. An integrated circuit package, comprising:
an integrated circuit chip having a plurality of terminals arranged in an array on a surface of the integrated circuit chip;
an insulating layer on the surface of the integrated circuit chip;
a plurality of vias connected to the plurality of terminals through the insulating layer;
a plurality of routing interconnects, wherein each routing interconnect has a first portion and a second portion, the first portion of each routing interconnect being connected to a corresponding one of the plurality of terminals through a corresponding via, and the second portion of each routing interconnect extending over the insulating layer; and
a plurality of bump interconnects, wherein each bump interconnect is connected to the second portion of a corresponding one of the plurality of routing interconnects;
wherein an under-bump metallization layer is placed on an area of each routing interconnect used for connecting the bump interconnect.

8. The integrated circuit package of claim 7, wherein the second portion of each routing interconnect includes a plurality of stacked material layers, and wherein the outermost layer of the stacked material is removed in the area used for connecting the bump interconnects.

9. The integrated circuit package according to claim 7, wherein the second portion of each routing interconnect includes a solderable layer as the outermost layer in the area used for connecting the bump interconnects.

10. A wafer-level integrated circuit packaging structure, comprising:
a wafer having a plurality of integrated circuit areas, each integrated circuit area having terminals arranged in an array on a surface of the wafer;
an insulating layer on the surface of the wafer;
a plurality of vias through the insulating layer for connecting to the plurality of terminals in each integrated circuit area;
a plurality of routing interconnects on the insulating layer, wherein each of the plurality of routing interconnects has a first portion connected to the corresponding terminal through a corresponding via, and a second portion extending over the insulating layer; and
a plurality of bump interconnects on the plurality of routing interconnects, wherein each bump interconnect of the plurality of bump interconnects is connected to the second portion of a corresponding one of the plurality of routing interconnects;
wherein an under-bump metallization layer is placed on an area of each routing interconnect used for connecting the bump interconnect.
Integrated Circuit Package, Method of Forming the Same, and Wafer-Level Integrated Circuit Packaging Structure

Technical Field

The present invention relates to integrated circuit packaging technology and, more specifically, to wafer-level ball grid array packaging.

Background

Packaging technology is commonly used to connect integrated circuit (IC) chips to other circuits and mount them on a printed circuit board (PCB). Ball grid array (BGA) packaging is one such IC chip packaging technology. Compared with other current packaging solutions, BGA packages provide smaller footprints. A BGA package provides an array of solder ball pads on the bottom outer surface of the package substrate. Solder balls are attached to the solder ball pads and reflowed to attach the package to the PCB.

Wafer-level BGA packaging is a more advanced form of BGA packaging. The industry refers to wafer-level BGA packaging in many ways, including wafer-level chip scale packaging (WLCSP). In wafer-level BGA packaging, the solder balls are mounted directly on the IC chip while the chip has not yet been separated from its processed wafer. Therefore, compared to other IC package types, including traditional BGA packages, wafer-level BGA packages can be made smaller and with higher pin-out.

The ongoing demand for narrower manufacturing tolerances (such as 65 nm) and strict user reliability and cost pressures have made wafer-level BGA packaging technology difficult to implement. In performance and reliability evaluation tests, external stress is applied to the wafer-level BGA package. These stresses are transferred into the package through the solder interconnects. For wafer-level packaging, two polymer layers are usually designed into the package body as a stress buffer between the solder ball interconnects and the chip. However, adding two polymer layers to a BGA package increases cost.

Therefore, the wafer-level packaging manufacturing process needs to be improved so that it meets the required reliability and cost targets while accommodating narrower manufacturing tolerances and smaller package sizes.

Summary of the Invention

The invention discloses wafer-level integrated circuit (IC) packaging methods, systems, and devices. Routing interconnects are used to connect chip terminals to bump interconnects (or other types of package interconnects). In one aspect, the routing interconnects connect the chip terminals to the bump interconnects directly (e.g., using solder). In another aspect, a metal layer is added on the routing interconnect to mount the bump interconnect, thereby connecting the chip terminals to the bump interconnect. In a further aspect, a single insulating layer can be used to absorb the stress applied to the bump interconnects, and the single-insulating-layer configuration requires fewer manufacturing process steps than a multiple-polymer-layer configuration.

In one example of the present invention, an IC package is provided, including an IC chip, an insulating layer on the surface of the IC chip, multiple vias, multiple routing interconnects, and multiple bump interconnects. The IC chip has a plurality of terminals arranged in an array on its surface. The plurality of vias connect to the plurality of terminals through the insulating layer.
Each of the plurality of routing interconnects has a first portion and a second portion. The first portion of each routing interconnect is connected to a corresponding one of the plurality of terminals through a corresponding via, and the second portion of each routing interconnect extends over the insulating layer. Each of the plurality of bump interconnects is connected to the second portion of a corresponding one of the plurality of routing interconnects.

In another example of the present invention, a method of forming multiple IC packages is provided. A wafer having a plurality of integrated circuit regions is received, each integrated circuit region having a plurality of terminals arranged in an array on the surface of the wafer. An insulating layer is formed on the wafer. A plurality of vias are formed through the insulating layer to provide connection to the plurality of on-chip terminals of each integrated circuit region. A plurality of routing interconnects are formed on the insulating layer, such that each routing interconnect has a first portion connected to a corresponding terminal through a corresponding via in the insulating layer, and a second portion extending over the insulating layer. A plurality of bump interconnects are formed on the plurality of routing interconnects, so that each bump interconnect is connected to the second portion of a corresponding one of the plurality of routing interconnects.

In yet another example of the present invention, a wafer-level packaging structure is provided, including a wafer, an insulating layer on the wafer, multiple vias through the insulating layer, multiple routing interconnects on the insulating layer, and multiple bump interconnects on the routing interconnects. The wafer has multiple integrated circuit regions, each having terminals arranged in an array on the surface of the wafer. The vias provide connection to the terminals in each integrated circuit region. Each of the plurality of routing interconnects has a first portion connected to the corresponding terminal through a corresponding via, and a second portion extending over the insulating layer.
Each of the plurality of bump interconnects is connected to the second portion of a corresponding one of the plurality of routing interconnects.

According to an aspect of the present invention, there is provided a method of forming an integrated circuit (IC) package, including: receiving a wafer having multiple integrated circuit regions, each integrated circuit region having multiple terminals arranged in an array on the surface of the wafer; forming an insulating layer on the wafer; forming multiple vias through the insulating layer to provide connection to the multiple terminals of each integrated circuit region; forming a plurality of routing interconnects on the insulating layer, so that each of the plurality of routing interconnects has a first portion connected to a corresponding terminal through a corresponding via in the insulating layer, and a second portion extending over the insulating layer; and forming a plurality of bump interconnects on the plurality of routing interconnects, so that each bump interconnect is connected to the second portion of a corresponding one of the plurality of routing interconnects.

Preferably, forming the multiple routing interconnects includes: stacking multiple layers of material to form the second portion of each routing interconnect; and removing, in the area used for connecting the bump interconnects, at least the outermost layer of the stacked material.

Preferably, forming the multiple routing interconnects includes: placing solderable material on the area of each routing interconnect used for connecting the bump interconnects.

Preferably, forming the plurality of routing interconnects includes forming a first routing interconnect having a first portion connected to the corresponding terminal through a first via and a second portion extending over the insulating layer; forming the plurality of bump interconnects includes forming a first bump interconnect connected to the second portion of the first routing interconnect, wherein the first via has an edge position closest to the first bump interconnect, and the first bump interconnect has a bottom edge position closest to the first via; and forming the first bump interconnect includes forming a first bump interconnect that does not cover the first via, so that the distance between the edge position of the first via and the bottom edge position of the first bump interconnect is greater than zero.

Preferably, forming the plurality of routing interconnects includes forming a first routing interconnect having a first portion connected to the corresponding terminal through a first via and a second portion extending over the insulating layer; and forming the plurality of bump interconnects includes forming a first bump interconnect connected to the second portion of the first routing interconnect, with the first bump interconnect covering the first via.

Preferably, the opening of the first via has a center point and the bottom of the first bump interconnect also has a center point, and forming the first bump interconnect connected to the second portion of the first routing interconnect includes: forming the first bump interconnect such that the distance along the first route between the center point of the first via and the bottom center point of the first bump interconnect is greater than zero.
Preferably, forming the first bump interconnect connected to the second portion of the first routing interconnect includes: having the first bump interconnect at least partially fill the first via.

Preferably, forming the multiple routing interconnects includes: forming a first routing interconnect such that it has a first portion connected to the corresponding terminal through the first via, a second portion extending over the insulating layer, and a third portion connected between the first and second portions of the first routing interconnect.

Advantageously, forming the plurality of bump interconnects on the plurality of routing interconnects includes forming a first bump interconnect connected to the second portion of the first routing interconnect, leaving the first via uncovered by the first bump interconnect.

Preferably, forming the first bump interconnect further includes: placing solder on the second portion of the first routing interconnect to form the first bump interconnect, with the third portion of the first routing interconnect serving as a conduit so that the solder at least partially fills the first via.

Preferably, the method further includes: in the solder placement step, configuring the width of the third portion of the first routing interconnect to adjust the flow of solder into the first via.

According to another aspect of the present invention, an integrated circuit (IC) package is provided, including: an integrated circuit chip having multiple terminals arranged in an array on the surface of the integrated circuit chip; an insulating layer on the surface of the integrated circuit chip; multiple vias connected to the multiple terminals through the insulating layer; multiple routing interconnects, where each routing interconnect has a first portion and a second portion, the first portion of each routing interconnect being connected to a corresponding one of the multiple terminals through a corresponding via, and the second portion of each routing interconnect extending over the insulating layer; and multiple bump interconnects, where each bump interconnect is connected to the second portion of a corresponding one of the multiple routing interconnects.

Preferably, the second portion of each routing interconnect includes a plurality of stacked material layers, with the outermost layer of the stacked material removed in the area used for connecting the bump interconnects.

Preferably, the second portion of each routing interconnect includes a solderable layer as the outermost layer in the area used for connecting the bump interconnects.

Preferably, the plurality of routing interconnects includes a first routing interconnect, the first portion of the first routing interconnect being connected to the corresponding terminal through a first via and a first bump interconnect being connected to the second portion of the first routing interconnect; the first via has an edge position closest to the first bump interconnect, and the first bump interconnect has a bottom edge position closest to the first via; and the first bump interconnect does not cover the first via, the distance between the edge position of the first via and the bottom edge position of the first bump interconnect being greater than zero.
Preferably, the plurality of routing interconnects includes a first routing interconnect, the first portion of the first routing interconnect being connected to a corresponding terminal through a first via and a first bump interconnect being connected to the second portion of the first routing interconnect, where the first bump interconnect covers the first via.

Preferably, the opening of the first via has a center point and the bottom of the first bump interconnect also has a center point, where the distance between the center point of the first via and the bottom center point of the first bump interconnect is greater than zero.

Advantageously, the first bump interconnect at least partially fills the first via.

Preferably, the plurality of routing interconnects includes a first routing interconnect, the first portion of the first routing interconnect being connected to a corresponding terminal through a first via and a first bump interconnect being connected to the second portion of the first routing interconnect, where the first routing interconnect includes a third portion connected between the first and second portions of the first routing interconnect.

Preferably, the first bump interconnect does not cover the first via, and the solder placed on the second portion of the first routing interconnect to form the first bump interconnect uses the third portion of the first routing interconnect as a conduit, allowing the solder to at least partially fill the first via.

Preferably, the width of the third portion of the first routing interconnect is smaller than the diameter of the first via.

Preferably, the insulating layer is a polymer.

Preferably, the insulating layer is used to absorb the stress between the integrated circuit chip and the plurality of bump interconnects.

According to yet another aspect of the present invention, a wafer-level integrated circuit packaging structure is provided, including: a wafer having multiple integrated circuit regions, each integrated circuit region having terminals arranged in an array on the surface of the wafer; an insulating layer on the surface of the wafer; multiple vias through the insulating layer for connecting to the multiple terminals in each integrated circuit region; multiple routing interconnects on the insulating layer, each of the multiple routing interconnects having a first portion connected to the corresponding terminal through a corresponding via, and a second portion extending over the insulating layer; and multiple bump interconnects on the multiple routing interconnects, each of the multiple bump interconnects being connected to the second portion of a corresponding one of the multiple routing interconnects.

From the following description of the present invention, a deeper understanding of its various aspects, advantages, and innovative features can be obtained. It should be noted that the summary and abstract sections give one or more exemplary embodiments, but these are not all of the embodiments contemplated by the inventor.
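The clearance and offset conditions above reduce to simple one-dimensional geometry along the routing direction. The following short Python sketch (an illustration with hypothetical dimensions, not part of this disclosure) checks whether a bump clears its via and, if not, whether the center-to-center offset along the route is still positive:

def bump_via_geometry(via_center: float, via_diameter: float,
                      bump_center: float, bump_base_diameter: float) -> str:
    """Evaluate the bump/via placement conditions along the routing direction.
    All positions and diameters are in micrometers (hypothetical units)."""
    edge_clearance = ((bump_center - bump_base_diameter / 2.0)
                      - (via_center + via_diameter / 2.0))
    if edge_clearance > 0:
        # Corresponds to the "bump does not cover the via" condition.
        return f"bump does not cover via (edge clearance {edge_clearance:.0f} um > 0)"
    center_offset = bump_center - via_center
    if center_offset > 0:
        # Corresponds to the "bump covers the via, center offset > 0" condition.
        return f"bump covers via, center offset {center_offset:.0f} um > 0 along the route"
    return "bump centered directly on via (no offset)"

# Hypothetical 80 um via and 250 um bump base:
print(bump_via_geometry(0.0, 80.0, 300.0, 250.0))  # clear of the via
print(bump_via_geometry(0.0, 80.0, 100.0, 250.0))  # covers the via, offset > 0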
BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be further described below with reference to the drawings and embodiments. In the drawings:

FIG. 1 is a flowchart of exemplary steps of a wafer-level packaging process;
FIG. 2 is a top view of an exemplary wafer;
FIG. 3 is a cross-sectional view of a wafer, showing a schematic view of an integrated circuit area on the wafer;
FIG. 4 is a flowchart of exemplary steps of a wafer back-end process;
FIG. 5 is a flowchart of exemplary steps of a wafer front-end process with a redistribution layer and an under-bump metallization layer;
FIG. 6 is a schematic diagram of an integrated circuit area of the wafer;
FIG. 7 is a cross-sectional view of part of an integrated circuit area of the wafer;
FIG. 8 is a plan view of part of an integrated circuit area of the wafer;
FIG. 9 is a flowchart of exemplary steps of a wafer front-end process with an under-bump metallization layer;
FIG. 10 is a schematic diagram of an integrated circuit area of the wafer;
FIG. 11 is a cross-sectional view of part of an integrated circuit area of the wafer;
FIG. 12 is a flowchart of exemplary steps of a wafer front-end process with an under-bump metallization layer;
FIG. 13 is a cross-sectional view of part of an integrated circuit area of the wafer;
FIG. 14 is a flowchart of forming an integrated circuit package according to an embodiment of the present invention;
FIG. 15 is a cross-sectional view of part of an integrated circuit region processed according to the flowchart of FIG. 14, according to an embodiment of the present invention;
FIGS. 16-18 are top views of part of an integrated circuit area at various stages of front-end assembly according to an embodiment of the present invention;
FIGS. 19-23 are cross-sectional views of various integrated circuit regions according to embodiments of the present invention; and
FIGS. 24 and 25 are top views of exemplary routing interconnects according to embodiments of the invention.

The present invention will be described below with reference to the drawings. In the drawings, the same reference numerals represent the same or functionally similar parts. In addition, the left-most digit of a reference number indicates the number of the drawing in which that reference number first appears.

DETAILED DESCRIPTION

Introduction

This specification describes one or more embodiments of the invention. The described embodiments are only used to illustrate the present invention. The scope of the invention is not limited to the described embodiments, but is defined by the claims.

The terms "one embodiment", "an embodiment", and "example" appearing in this application mean that the described embodiments may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that feature, structure, or characteristic. Moreover, these terms do not necessarily refer to the same embodiment. When a particular feature, structure, or characteristic is introduced in conjunction with an embodiment, those skilled in the art can incorporate that feature, structure, or characteristic into other embodiments, whether or not it is explicitly described in this application.

In addition, it should be understood that descriptions in this application related to spatial orientation (such as "above", "below", "left", "right", "up", "down", "top", "bottom", etc.) are for illustrative purposes only, and actual implementations of the structures described in this application may have various orientations.
Traditional Wafer-Level Packaging Process

"Wafer-level packaging" is an integrated circuit packaging technology in which all package-related interconnection components are put in place while the integrated circuit chip is still in wafer form. After the package-related interconnection components are in place, the wafer is tested and divided into independent devices, which are sent directly to users for their use. There is therefore no need to package each device separately. The final size of a wafer-level package essentially corresponds to the size of the chip, making it a very small packaging solution. As the demand for small, feature-rich devices increases, wafer-level packaging is becoming increasingly popular. Its applications include mobile devices such as cell phones, PDAs, and MP3 players.

FIG. 1 shows a flowchart 100 of exemplary steps of a wafer-level packaging process. The process 100 starts at step 102. In step 102, multiple integrated circuits are fabricated on the wafer surface, defining multiple integrated circuit regions. For example, FIG. 2 shows a top view of an exemplary wafer. The wafer 200 may be silicon, gallium arsenide, or another type of wafer. As shown in FIG. 2, the surface 202 of the wafer 200 is divided into a plurality of integrated circuit regions (represented by small rectangles in FIG. 2). According to the steps of process 100, each integrated circuit region is packaged independently into a single wafer-level ball grid array package.

In step 104, front-end processing of the wafer is performed, and an array of interconnect solder balls is attached to the wafer surface of each of the plurality of integrated circuit regions. The key part of wafer-level packaging is the front-end process of step 104, in which the appropriate interconnection and packaging materials are placed on the wafer. For example, FIG. 3 is a cross-sectional view of the wafer 200, showing an integrated circuit region 300 on the wafer. As shown in FIG. 3, the integrated circuit region 300 has a plurality of interconnect solder balls 302a-302e attached to the surface 202. The interconnect solder balls 302a-302e may be solder, other metals, metal/alloy combinations, and the like. The interconnect solder balls 302 are used to connect the BGA package formed from the integrated circuit region 300 to an external device, such as a PCB.

At step 106, each of the plurality of integrated circuit regions is tested on the wafer. For example, the interconnect solder balls 302 of each integrated circuit region can be contacted with probes to provide ground, power, and test input signals, and to receive test output signals.

At step 108, back-end processing of the wafer is performed to divide the wafer into a plurality of individual integrated circuit packages. Examples of back-end processes are described later.

At step 110, the separated integrated circuit packages are shipped. For example, the separated integrated circuit packages can be shipped to warehouses, users, equipment assembly sites, or the next process step.

FIG. 4 is a flowchart 400 of exemplary steps for performing a wafer back-end process according to step 108 of the flowchart 100. Not all of the steps of process 400 are required in every back-end process application.
Moreover, the steps of process 400 do not necessarily need to be performed in the order shown. The process 400 starts at step 402. At step 402, the wafer is backgrinded. For example, the wafer 200 may be back-ground to reduce its thickness to a desired value.

At step 404, each of the plurality of integrated circuit regions is marked on the wafer. For example, each integrated circuit region may be marked with information identifying a particular type of ball grid array package, such as manufacturer identification information, part number information, and so on. For example, the mark for the integrated circuit region 300 may be printed on the side of the wafer 200 opposite the surface 202 shown in FIG. 3.

At step 406, the wafer is diced to divide it into multiple individual integrated circuit packages. As known to those skilled in the art, various suitable methods can be used to dice the wafer and physically separate the integrated circuit regions from each other.

At step 408, the multiple separated integrated circuit packages are shipped. For example, the separated integrated circuit packages can be placed on one or more tapes/reels and packaged individually, or transported to users by other methods.

The reliability of wafer-level packaging is very important. In many applications that use these package types, such as handheld mobile devices, the interconnection between the integrated circuit package and the device into which it is integrated, and the integrated circuit package itself, must be able to withstand various stresses. These stresses include, for example, temperature cycles (such as changes in ambient temperature or power on/off cycles) and mechanical shock and vibration (such as the equipment slipping or being dropped). The structure of the wafer-level package plays an important role in the reliability of the integrated circuit package and of the interconnection between the integrated circuit package and the system.

The front-end process of step 104 is the key to whether a reliable IC package can be formed. Depending on factors such as the wafer fabrication method, the front-end process of step 104 can be performed in different ways. In some cases, the front-end process requires the application of a metal layer to form a circuit/path from the chip terminals to the external package terminals. This metal layer is commonly referred to as a redistribution layer (RDL).

There are three common schemes for the front-end process of step 104. In the first scheme, a redistribution layer (RDL), an under-bump metallization layer (UBM), and bump interconnects (along with multiple polymer layers) are used to route electrical signals from the chip terminals to the external (e.g., PCB) terminals. An example of the first scheme is described in conjunction with the flow 500 of FIG. 5. In the second scheme, no RDL is used; instead, a single polymer layer, UBM, and bump interconnects route signals between on-chip terminals and external terminals. In the third scheme, no RDL is used; UBM and bump interconnects alone route signals between on-chip terminals and external terminals. The second and third schemes are also described later.
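As a compact summary of the three schemes (an illustrative reading aid, not part of the patent text), the layer stack between a chip terminal and its solder bump can be sketched in Python as:

# Layer stacks between a chip terminal and its bump interconnect, one entry per
# front-end scheme described above (an illustrative summary, not source material).
FRONT_END_SCHEMES = {
    "scheme 1 (process 500)": [
        "polymer layer 1", "redistribution layer (RDL)",
        "polymer layer 2", "under-bump metallization (UBM)", "bump",
    ],
    "scheme 2 (process 900)": ["polymer layer", "UBM", "bump"],
    "scheme 3 (process 1200)": ["UBM", "bump"],
}

for scheme, stack in FRONT_END_SCHEMES.items():
    print(f"{scheme}: terminal -> " + " -> ".join(stack))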
FIG. 5 shows a flowchart 500 of exemplary steps of a wafer front-end process with a redistribution layer and an under-bump metallization layer. The process 500 starts at step 502. In step 502, a wafer having a plurality of integrated circuit regions is received, each integrated circuit region provided with a plurality of on-chip terminals arranged in a ring. For example, FIG. 6 is a bottom view of an integrated circuit region 600 of a wafer (the wafer 200 shown in FIG. 2). As shown in FIG. 6, the integrated circuit region 600 includes a ring 602 formed by terminals 604 (indicated by individual terminals 604a and 604b in FIG. 6). The terminals 604 are arranged in the ring 602 on the lower surface of the integrated circuit region 600 (for example, the surface 202), adjacent to the outer periphery of the integrated circuit region 600. An integrated circuit region may include one or more such rings 602. The terminals 604 may be input, output, test, power, ground, etc. pads of (or defined by) the integrated circuit region 600.

At step 504, a first polymer layer is formed on the multiple integrated circuit regions of the wafer. FIG. 7 is a cross-sectional view of a part of the integrated circuit region 600 produced according to the process 500. As shown in FIG. 7, the part of the integrated circuit region 600 includes a chip portion 702a, a terminal 604a on the upper surface 704 of the chip portion 702a, and a passivation layer 706 covering the remaining part of the upper surface 704 of the chip portion 702a. The first polymer layer 708 is formed on the integrated circuit region 600 (and the other integrated circuit regions) of the wafer and covers the terminal 604a and the passivation layer 706.

At step 506, a plurality of first vias are formed through the first polymer layer to provide connection to the plurality of on-chip terminals. For example, as shown in FIG. 7, a first via 710a is formed through the first polymer layer 708. Similar to the first via 710a, a plurality of vias 710 are formed through the first polymer layer 708, each via connecting to a corresponding terminal 604 of the integrated circuit region 600.

In step 508, a plurality of redistribution layers are formed on the first polymer layer, each redistribution layer having a first portion connected to the corresponding on-chip terminal through a corresponding first via, and a second portion extending over the first polymer layer. For example, as shown in FIG. 7, a redistribution layer 712a is formed on the first polymer layer 708. As shown, the first portion 714 of the redistribution layer 712a is connected to the terminal 604a through the first via 710a, and the second portion 716 of the redistribution layer 712a extends over the first polymer layer 708. In this way, a plurality of redistribution layers 712 are formed.

For example, FIG. 8 shows a top view of a part of the integrated circuit region 600 at the left side 802 of the integrated circuit region 600. As shown in FIG. 8, four redistribution layers 712a-712d are formed on the first polymer layer 708, each including a first portion 714 and a second portion 716. The first portions 714 of the redistribution layers 712a-712d are connected to four corresponding terminals (not shown in FIG. 8) through four corresponding first vias (not shown in FIG. 8). The second portions 716 of the redistribution layers 712a-712d extend over the first polymer layer 708 (e.g., toward the right in FIG. 8).
As known to those skilled in the art, the redistribution layers (RDLs) 712 can be deposited on the first polymer layer 708 using various techniques (such as electroplating, sputtering, etc.) and patterned using any of a variety of lithographic or other methods. The formation of the first portion 714 of the redistribution layer 712a is similar to standard via plating, and the second portion 716 of the redistribution layer 712a extends from the first portion 714 in a manner similar to the formation of standard metal traces on a substrate.

At step 510, a second polymer layer is formed on the first polymer layer and the plurality of redistribution layers. For example, as shown in FIG. 7, the second polymer layer 718 is formed on the integrated circuit region 600 (and the other integrated circuit regions) of the wafer and covers the first polymer layer 708 and the redistribution layer 712a.

At step 512, a plurality of second vias are formed through the second polymer layer to provide connection to the second portion of each of the plurality of redistribution layers. For example, as shown in FIG. 7, a second via 720a is formed through the second polymer layer 718 to provide connection to the second portion 716 of the redistribution layer 712a. In this way, a plurality of second vias 720 are formed through the second polymer layer 718, each providing connection to the second portion 716 of the corresponding redistribution layer 712. For example, the second vias 720a-720d (not shown in FIG. 8) can be formed through the second polymer layer 718 at the locations 804a-804d shown in FIG. 8 (indicated by dotted lines), corresponding to the redistribution layers 712a-712d.

In step 514, a plurality of under-bump metallization layers are formed on the second polymer layer, each under-bump metallization layer connected to the second portion of the corresponding redistribution layer through a corresponding second via. For example, as shown in FIG. 7, the under-bump metallization layer 722a is connected to the second portion 716 of the redistribution layer 712a through the corresponding second via 720a. In this way, the plurality of under-bump metallization layers 722 may be connected to the corresponding redistribution layers 712 through the corresponding second vias 720. For example, in FIG. 8, the under-bump metallization layers 722a-722d (not shown in FIG. 8) may be formed at the positions 804a-804d over the corresponding second vias 720a-720d (not shown in FIG. 8).

The under-bump metallization (UBM) layer 722 is usually one or more metal layers (formed, for example, by metal deposition such as electroplating or sputtering) that implement the interconnection between the redistribution layer 712 and the package interconnection mechanism (such as the bump interconnects described in step 516). For a solder package interconnection mechanism, the UBM layer serves as a solderable layer. In addition, the UBM can protect the underlying metal or circuits from chemical/thermal/electrical interactions between the different metals/alloys used in the package interconnection mechanism. In one embodiment, the UBM layer 722 is formed in a manner similar to standard via plating.

At step 516, multiple bump interconnects are formed on the multiple under-bump metallization layers. For example, as shown in FIG. 7, bump interconnect 724a is formed on the under-bump metallization layer 722a.
In this manner, multiple bump interconnects 724 may be formed on the corresponding under-bump metallization layers 722. For example, in FIG. 8, bump interconnects 724a-724d (not shown in FIG. 8) may be formed at the locations 804a-804d, each bump interconnect in contact with the corresponding one of the under-bump metallization layers 722a-722d (not shown in FIG. 8). For example, the bump interconnects 724 may be solder balls.

In this way, an electrical connection is formed from each terminal 604 to the corresponding bump interconnect 724 (i.e., through the corresponding redistribution layer 712 and under-bump metallization layer 722). As just described in connection with process 500, multiple polymer layers (e.g., layers 708 and 718) can be used to support the electrical connection. In many cases, one or more layers of polymer material are deposited on the wafer below, above, or between the RDL and UBM metal layers. The polymer layers have multiple functions. For example, they serve as electrical insulators between the different circuits/metal layers, including between the redistribution layers 712 and the under-bump metallization layers 722 and the circuits in the chip (chip portion 702a). The polymer layers are also relatively soft materials, acting as a mechanical buffer between the package-to-system interconnects (such as the bump interconnects 724) and the chip, protecting the chip and absorbing external stress applied to the interconnection points. The polymer layers further protect the interconnection points from stresses caused by mismatches in the material properties of the various materials (chips, PCBs, solders, etc.) in the package and the system.

The first front-end process scheme described in connection with process 500 has some deficiencies. For example, two polymer layers and an RDL layer are required. These entail many process steps and additional materials, which increases cost. Moreover, many new chips no longer require RDL routing between the chip terminals and the external terminals in their design. In other words, by routing inside the chip (e.g., during the circuit fabrication process of step 102 of process 100), the chip terminals are designed to coincide with the external terminals instead of using an off-chip RDL. The second and third front-end process schemes address package types in which the chip terminals coincide with the external terminals.

FIG. 9 is a flowchart 900 of exemplary steps of the wafer front-end process according to the second scheme. The process 900 starts at step 902. At step 902, a wafer having multiple integrated circuit regions is received, each integrated circuit region having multiple on-chip terminals configured in an array. For example, FIG. 10 is a bottom view of an integrated circuit region 1000 of a wafer (the wafer 200 shown in FIG. 2). As shown in FIG. 10, the integrated circuit region 1000 includes a rectangular array 1002 of terminals 604 (individual terminals 604a and 604b are shown in FIG. 10). The terminals 604 are arranged in the array 1002 on the lower surface (e.g., surface 202) of the integrated circuit region 1000.

At step 904, a polymer layer is formed on the multiple integrated circuit regions of the wafer. FIG. 11 is a cross-sectional view of a part of the integrated circuit region 1000 produced according to the process 900.
As shown in FIG. 11, the part of the integrated circuit region 1000 includes a chip portion 702a, a terminal 604a on the upper surface 704 of the chip portion 702a, and a passivation layer 706 covering the upper surface 704 of the chip portion 702a (excluding the terminal 604a). The polymer layer 708 is formed on the integrated circuit region 1000 (and the other integrated circuit regions) of the wafer and covers the terminal 604a and the passivation layer 706.

At step 906, multiple vias are formed through the polymer layer to provide connection to the multiple on-chip terminals. For example, as shown in FIG. 11, a via 710a is formed through the polymer layer 708. Similar to the via 710a, a plurality of vias 710 are formed through the polymer layer 708, each connecting to a corresponding terminal 604 of the integrated circuit region 1000.

In step 908, a plurality of under-bump metallization layers are formed on the polymer layer, each under-bump metallization layer centered on the corresponding via and connected to the corresponding on-chip terminal through that via. For example, as shown in FIG. 11, the under-bump metallization layer 722a is connected to the terminal 604a through the via 710a. In this way, a plurality of under-bump metallization layers 722 may be connected to corresponding terminals 604 through corresponding vias 710.

At step 910, multiple bump interconnects are formed on the multiple under-bump metallization layers. For example, as shown in FIG. 11, bump interconnect 724a is formed on the under-bump metallization layer 722a. Similar to the bump interconnect 724a, a plurality of bump interconnects 724 may be formed on the corresponding under-bump metallization layers 722. In this way, electrical connections are formed from each terminal 604 to the corresponding bump interconnect 724 (i.e., through the corresponding under-bump metallization layer).

The second front-end process scheme of process 900 also has deficiencies. Compared with the first scheme (flow 500), the second scheme requires fewer steps, uses only a single polymer layer (polymer layer 708), and does not require a redistribution layer, so its cost is low. However, the chip terminals coincide with the external terminals. In the working state, or during reliability evaluation tests, external stress is applied to an IC package produced through process 900, and this stress is transferred into the package through the bump interconnect 724a. Although there is some polymer material between the chip (chip portion 702a) and the bump interconnect 724a, most of the interface (terminal to UBM 722a) is still a rigid connection. Because stress is transferred between the bump interconnect 724a and the chip portion 702a through this rigid connection, the second scheme carries a significant risk of chip damage.

FIG. 12 is a flowchart 1200 of exemplary steps of a wafer front-end process according to the third scheme. The process 1200 starts at step 1202. At step 1202, a wafer having multiple integrated circuit regions is received, each integrated circuit region having multiple on-chip terminals configured in an array. For example, the wafer 200 shown in FIG. 2 is received, which has multiple integrated circuit regions similar to the integrated circuit region 1000 shown in FIG. 10.

In step 1204, a plurality of under-bump metallization layers are formed, the under-bump metallization layer for each bump being connected to the corresponding on-chip terminal.
FIG. 13 is a cross-sectional view of a part of the integrated circuit region 1300 produced according to the process 1200. As shown in FIG. 13, the part of the integrated circuit region 1300 includes a chip portion 702a, a terminal 604a on the upper surface 704 of the chip portion 702a, and a passivation layer 706 covering the remaining part of the upper surface 704 of the chip portion 702a. Also shown in FIG. 13 is the under-bump metallization layer 722a formed directly on the terminal 604a. In this way, a plurality of under-bump metallization layers 722 connected to the corresponding terminals 604 of the integrated circuit region can be formed.

At step 1206, multiple bump interconnects are formed on the multiple under-bump metallization layers. For example, as shown in FIG. 13, bump interconnect 724a is formed on the under-bump metallization layer 722a. Likewise, multiple bump interconnects 724 connected to the corresponding under-bump metallization layers 722 may be formed. In this way, electrical connections are formed from each terminal 604 to the corresponding bump interconnect 724 (i.e., through the corresponding under-bump metallization layer).

The third front-end process scheme of process 1200 has deficiencies as well. Compared with the first and second schemes (flows 500 and 900), the third scheme requires fewer steps, uses no polymer, and does not require a redistribution layer, so its cost is low. However, because there is no polymer layer, the only interface between the chip (chip portion 702a) and the bump interconnect 724a is the under-bump metallization layer 722a, which is generally rigid. Therefore, most of the stress experienced by the bump interconnect 724a is transferred directly to the chip, and there is a large risk of damage to the chip. For advanced silicon process technologies, the risk is even greater due to the use of very brittle and fragile low-k dielectric materials.

The following describes exemplary embodiments of the present invention, which overcome the above-mentioned defects of the three front-end processes.

Exemplary Embodiments

The embodiments described herein are used to illustrate the present invention, and the present invention is not limited to these embodiments. The embodiments described herein are applicable to various types of integrated circuit packages. Those skilled in the art, following the teachings of the present invention, can readily derive further structural and operational embodiments, including modifications of and replacements for the embodiments described here.

According to an embodiment, a routing interconnect is used for each chip terminal to connect the chip terminal to a bump interconnect (or other type of package interconnect). In one embodiment, the routing interconnect directly connects the chip terminal to the bump interconnect. In another embodiment, an under-bump metallization layer placed on the routing interconnect mounts the bump interconnect, likewise connecting the chip terminal to the bump interconnect. In these embodiments, the insulating layer between the routing interconnects and the chip is used to absorb stress, while requiring fewer manufacturing process steps than a multiple-polymer-layer configuration.

FIG. 14 is a flowchart of a process 1400 for forming an integrated circuit package according to an embodiment of the present invention. Based on the discussion here, those skilled in the art can readily derive other structural and operational embodiments. The process 1400 starts at step 1402.
At step 1402, a wafer having multiple integrated circuit regions is received, each integrated circuit region having multiple on-chip terminals arranged in an array. For example, a wafer similar to the wafer 200 shown in FIG. 2 is received, having a plurality of integrated circuit regions similar to the integrated circuit region 1000 shown in FIG. 10. As shown in FIG. 10, the integrated circuit region 1000 includes a rectangular array 1002 of terminals 604 (individual terminals 604a and 604b are labeled in FIG. 10). The terminals 604 are arranged in the array 1002 on the lower surface (e.g., surface 202) of the integrated circuit region 1000. The array 1002 may be a regular rectangular array of terminals as shown in FIG. 10, or may have another array pattern or arrangement, including a staggered terminal array. The array 1002 need not be a full array of terminals 604.

At step 1404, an insulating layer is formed on the multiple integrated circuit regions of the wafer. FIG. 15 is a cross-sectional view of a part of an integrated circuit region 1500 manufactured according to the process 1400 according to an embodiment of the present invention. As shown in FIG. 15, the part of the integrated circuit region 1500 includes a chip portion 702a, a terminal 604a on the upper surface 704 of the chip portion 702a, and a passivation layer 706 covering the upper surface 704 of the chip portion 702a (excluding the terminal 604a). An insulating layer 1502 is formed on the integrated circuit region 1500 (and on the other integrated circuit regions of the wafer) and covers the terminal 604a and the passivation layer 706. The insulating layer 1502 can absorb shock and is an electrically insulating material, such as a polymer, a dielectric material, and/or another shock-absorbing and electrically insulating material. The insulating layer 1502 may include one or more layers of material and may be applied by any method known to those skilled in the art (traditional or otherwise).

At step 1406, multiple paths are formed through the insulating layer to provide connectivity to the multiple on-chip terminals. For example, as shown in FIG. 15, a path 1504a is formed through the insulating layer 1502. A plurality of paths 1504 are formed through the insulating layer 1502, each path providing connectivity to the corresponding terminal 604 of the integrated circuit region 1500. For example, FIG. 16 shows a top view of a part of the integrated circuit region 1500 adjacent to the left side 1602 of the region 1500 according to an embodiment of the present invention. The figure shows four paths 1504a-1504d, which are part of a larger array of paths 1504. As shown in FIG. 16, the paths 1504a-1504d are formed through the insulating layer 1502, providing connectivity to the corresponding on-chip terminals 604a-604d. Note that the paths 1504 may have inclined walls, as shown in FIG. 15, or vertical walls (for example, the paths 1504 may be cylindrical), or other shapes. The paths 1504 may be formed in any manner known to those skilled in the art, including etching, drilling, and the like.

At step 1408, a plurality of routing interconnects is formed on the insulating layer, such that each routing interconnect has a first portion connected to the corresponding terminal via the corresponding path through the insulating layer, and a second portion that extends over the insulating layer. For example, as shown in FIG.
15, the routing interconnect 1506a is formed on the insulating layer 1502. Similar to the redistribution layer 712a shown in FIG. 7, the routing interconnect 1506a has a first portion 1508 and a second portion 1510. The first portion 1508 of the routing interconnect 1506a is connected to the terminal 604a through the path 1504a, and the second portion 1510 of the routing interconnect 1506a extends (e.g., laterally) over the insulating layer 1502. In this way, multiple routing interconnects 1506 are formed on the insulating layer 1502 of the integrated circuit region 1500.

For example, FIG. 17 is a top view of the part of the integrated circuit region 1500 shown in FIG. 16. In FIG. 17, four routing interconnects 1506a-1506d are formed on the insulating layer 1502, each routing interconnect having a first portion 1508 and a second portion 1510. The first portions 1508a-1508d of the routing interconnects 1506a-1506d are connected to the corresponding terminals 604a-604d (shown in FIG. 16) through the corresponding paths 1504a-1504d (shown in FIG. 16). The second portions 1510a-1510d of the routing interconnects 1506a-1506d extend over the insulating layer 1502 (for example, toward the right in FIG. 17).

Note that the second portion 1510 of a routing interconnect 1506 can have various shapes. For example, as shown in FIG. 17, the second portion 1510 may be rectangular. Alternatively, the second portion 1510 may be circular, as described in detail below in conjunction with some examples, or may have other shapes. For example, the first portion 1508 of the routing interconnect 1506a may be similar to standard path plating, and the second portion 1510 of the routing interconnect 1506a may extend outward from the first portion 1508 in a manner similar to forming a standard metal trace on a substrate. The routing interconnects 1506 may be formed of any suitable conductive material, including metals such as solder or solder alloys, copper, aluminum, gold, silver, nickel, tin, titanium, metal/alloy combinations, and the like. The routing interconnects 1506 can be formed in any manner known to those skilled in the art, including sputtering, electroplating, lithography processes, etc.

At step 1410, a plurality of bump interconnects is formed on the plurality of routing interconnects, each bump interconnect connected to the second portion of the corresponding routing interconnect. For example, as shown in FIG. 15, a bump interconnect 1512a is formed on the routing interconnect 1506a. In this way, a plurality of bump interconnects 1512 in communication with the corresponding routing interconnects 1506 can be formed. For example, in FIG. 18, a plurality of bump interconnects 1512a-1512d (part of an array of bump interconnects 1512) is formed, each bump interconnect 1512a-1512d connected to the corresponding routing interconnect 1506a-1506d. The bump interconnects 1512 may be formed of any suitable conductive material, including metals such as solder or solder alloys, copper, aluminum, gold, silver, nickel, tin, titanium, metal/alloy combinations, and the like. The bump interconnects 1512 can be designed to any size and pitch according to the needs of a particular application, and can be formed in any manner known to those skilled in the art, including sputtering, electroplating, lithography processes, etc.

In this way, the electrical connection from each terminal 604 to the corresponding bump interconnect 1512 is achieved using the corresponding routing interconnect 1506.
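To make the terminal-to-bump connectivity of process 1400 concrete, the following Python sketch models each connection as a small record and checks two properties implied by the description above: every bump lands on the second portion of its routing interconnect (over the insulating layer, laterally offset from the path), and every terminal reaches exactly one bump. The record names and the offset values are hypothetical illustrations, not values from this disclosure.

```python
from dataclasses import dataclass

@dataclass
class Connection:
    """One terminal-to-bump electrical connection formed by process 1400."""
    terminal: str              # on-chip terminal, e.g. "604a"
    path: str                  # opening through insulating layer 1502, e.g. "1504a"
    routing_interconnect: str  # e.g. "1506a"
    bump: str                  # bump interconnect, e.g. "1512a"
    lateral_offset_um: float   # distance from path center to bump center

def bump_is_on_insulating_layer(conn: Connection) -> bool:
    # In the FIG. 15/18 embodiment the bump sits entirely on the insulating
    # layer via the routing interconnect's second portion, so the bump is
    # laterally offset from the path over the terminal.
    return conn.lateral_offset_um > 0.0

# Hypothetical values for two of the connections of FIGS. 16-18.
connections = [
    Connection("604a", "1504a", "1506a", "1512a", lateral_offset_um=150.0),
    Connection("604b", "1504b", "1506b", "1512b", lateral_offset_um=150.0),
]

assert all(bump_is_on_insulating_layer(c) for c in connections)
assert len({c.terminal for c in connections}) == len(connections)  # one bump per terminal
```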
Depending on the needs of a particular application, any number of such electrical connections can be formed, including for tens, thousands, or even larger arrays of terminals 604. After the wafer is processed by the process 1400, it is further processed according to the steps of the process 100 shown in FIG. 1, and the integrated circuit regions are separated into individual integrated circuit packages. For example, each integrated circuit region can be tested (step 106), the integrated circuit regions can be back-end processed and separated into individual integrated circuit packages (step 108), and the separated integrated circuit packages can be shipped (step 110).

As shown in FIGS. 15 and 18, in one embodiment, the bump interconnect 1512 is positioned entirely on the insulating layer 1502 (via the routing interconnect 1506). The insulating layer 1502 absorbs stress applied to the bump interconnects 1512 of the manufactured integrated circuit package. The second and third schemes described above in conjunction with FIGS. 9-13 do not have a sufficient stress-absorption mechanism, which may cause undesirable chip damage. In addition, as shown in FIG. 15, the bump interconnect 1512a is located entirely on the insulating layer 1502 without an additional material layer. The first scheme described above in connection with FIGS. 5-8 requires two polymer layers, which is a more complicated and expensive technique. Therefore, the embodiments described in conjunction with FIGS. 14-18 are more advantageous than the three schemes described previously.

In one embodiment, the under-bump metallization layers used to mount bump interconnects in the three traditional schemes described above are no longer needed. As shown in FIG. 15, the bump interconnect 1512a is attached directly to the routing interconnect 1506a, for example by soldering (e.g., reflow).

FIG. 19 is a cross-sectional view of a part of an integrated circuit region 1900 according to another embodiment of the present invention. In the embodiment shown in FIG. 19, the routing interconnect 1506a includes multiple layers 1902a-1902c, composed of one or more layers of different materials (such as the different metals/metal alloys mentioned above). In FIG. 19, the outermost layer 1902a of the multiple layers 1902a-1902c is removed (using chemical etching or lithography) at the region 1904 where the bump interconnect 1512a and the routing interconnect 1506a are joined. In FIG. 19, the outermost layer 1902a is a material that cannot be soldered, and the bump interconnect 1512a is solder, which does not adhere to the material of the outermost layer 1902a. However, the second layer 1902b is a solderable material to which the bump interconnect 1512a can adhere. Therefore, the material of the outermost layer 1902a is removed from the routing interconnect 1506a at the region 1904 so that the bump interconnect 1512a can adhere to the second layer 1902b. In addition, because the outermost layer 1902a is not solderable and remains on the routing interconnect 1506a outside the region 1904, it prevents the solder of the bump interconnect 1512a from infiltrating the path 1504a and reaching the terminal 604a, which could otherwise damage the chip.
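The FIG. 19 arrangement amounts to a simple rule: solder adheres only where a solderable layer is exposed, so the non-solderable outermost layer acts as a solder mask everywhere outside region 1904. A minimal sketch of that selection logic follows; the layer names are from FIG. 19, but the solderability flags are illustrative assumptions.

```python
# Layer stack of routing interconnect 1506a, outermost first (FIG. 19).
# Solderability flags are illustrative assumptions, not patent data.
layers = [
    ("1902a", False),  # outermost: non-solderable, acts as a solder mask
    ("1902b", True),   # solderable layer exposed at region 1904
    ("1902c", True),
]

def solder_attach_layer(stack, outermost_removed: bool):
    """Return the layer the solder bump adheres to, or None.

    Solder wets only the first *exposed* solderable layer: outside region
    1904 the non-solderable layer 1902a is exposed, so nothing adheres.
    """
    exposed = stack[1:] if outermost_removed else stack
    top_name, top_solderable = exposed[0]
    return top_name if top_solderable else None

assert solder_attach_layer(layers, outermost_removed=True) == "1902b"   # at region 1904
assert solder_attach_layer(layers, outermost_removed=False) is None     # elsewhere
```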
FIG. 20 is a cross-sectional view of a part of an integrated circuit region 2000 according to still another embodiment of the present invention. In the embodiment shown in FIG. 20, an additional metal layer 2002 is formed at the region 2004 on the routing interconnect 1506a. In FIG. 20, the bump interconnect 1512a is solder, which does not adhere to the material of the routing interconnect 1506a (which is not solderable). However, the material of the additional metal layer 2002 is solderable, so the bump interconnect 1512a can adhere to it. In this way, the additional metal layer 2002 serves as the outermost solderable layer of the routing interconnect 1506a at the region 2004, and the bump interconnect 1512a adheres to the routing interconnect 1506a through the additional metal layer 2002. In addition, the non-solderable routing interconnect 1506a prevents the solder of the bump interconnect 1512a from infiltrating the path 1504a and potentially damaging the chip.

In embodiments, the location and/or size of the bump interconnects 1512 may take various forms. For example, FIG. 21 is a cross-sectional view of the integrated circuit region 2000 shown in FIG. 20. In FIG. 21, the opening of the path 1504a has an edge location 2102 closest to the bump interconnect 1512a, and the bump interconnect 1512a has a bottom edge location 2104 closest to the path 1504a (for example, when the additional metal layer 2002 is present, this coincides with the edge of the additional metal layer 2002). In the embodiment of FIG. 21, the bump interconnect 1512a does not cover the path 1504a (i.e., does not overlap the path 1504a in FIG. 21), and the distance between the path edge location 2102 and the bottom edge location 2104 of the bump interconnect 1512a is greater than zero. Therefore, the path 1504a and the bump interconnect 1512a are separated.

In another embodiment, the distance between the path and the bump interconnect is zero, or they may even overlap. For example, FIG. 22 shows a cross-sectional view of an integrated circuit region 2200 in which a bump interconnect 2202a is adhered to the routing interconnect 1506a and covers the path 1504a. In fact, the bump interconnect 2202a completely covers the path 1504a in FIG. 22. As shown in FIG. 22, the opening of the path 1504a has a center point 2204, and the bottom of the bump interconnect 2202a has a center point 2206. The distance 2208 between the center point 2204 of the path 1504a and the center point 2206 at the bottom of the bump interconnect 2202a is greater than zero. Thus, in the embodiment of the integrated circuit region 2200, the path 1504a and the bump interconnect 2202a overlap but are not concentric; that is, their center points are offset from each other.

In addition, when they overlap, the bump interconnect may partially or completely fill the corresponding path 1504a. For example, in the embodiment shown in FIG. 22, the bump interconnect 2202a fills the path 1504a.
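The placements of FIGS. 21 and 22 reduce to simple plan-view geometry: treating the path opening and the bump footprint as circles, the two cases differ in whether the edge-to-edge gap is positive (separated) or negative (overlapping), and overlap with a nonzero center-to-center distance 2208 means the shapes are non-concentric. A small sketch of those checks follows, with hypothetical dimensions chosen only for illustration.

```python
def edge_gap(center_dist: float, path_diam: float, bump_diam: float) -> float:
    """Edge-to-edge gap between path opening and bump footprint (plan view).

    Positive: separated, as in FIG. 21 (distance between 2102 and 2104 > 0).
    Negative or zero: the bump covers at least part of the path, as in FIG. 22.
    """
    return center_dist - (path_diam + bump_diam) / 2.0

# Hypothetical dimensions in micrometers, for illustration only.
path_diam, bump_diam = 50.0, 250.0

gap_fig21 = edge_gap(center_dist=200.0, path_diam=path_diam, bump_diam=bump_diam)
assert gap_fig21 > 0  # separated: the FIG. 21 case

center_dist_fig22 = 60.0  # distance 2208 > 0, so overlapping but non-concentric
gap_fig22 = edge_gap(center_dist_fig22, path_diam, bump_diam)
assert gap_fig22 < 0 and center_dist_fig22 > 0  # overlapping, centers offset

# Complete coverage: bump rim extends past the far edge of the path opening.
covers_completely = center_dist_fig22 + path_diam / 2.0 <= bump_diam / 2.0
assert covers_completely
```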
In another embodiment, the path and the corresponding bump interconnect may be separated by a distance, but the routing interconnect may be designed to allow solder to flow from the bump interconnect to the path (such as during reflow soldering of the bump interconnect). For example, FIG. 23 shows a cross-sectional view of an integrated circuit region 2300 in which a bump interconnect 2302a is adhered to a routing interconnect 2304a. In FIG. 23, the bump interconnect 2302a and the path 1504a do not overlap. As shown in FIG. 23, the routing interconnect 2304a includes a first portion 2306, a second portion 2308, and a third portion 2310. The first portion 2306 is connected to the terminal 604a through the path 1504a. The bump interconnect 2302a is connected to the second portion 2308. The third portion 2310 is similar to a trace formed on the insulating layer 1502 and connects the first and second portions 2306 and 2308 together. The third portion 2310 is designed so that solder applied to the second portion 2308 of the routing interconnect 2304a can flow into the path 1504a (e.g., during reflow soldering of the bump interconnect 2302a). In this way, the third portion 2310 acts as a solder conduit from the second portion 2308 to the first portion 2306.

The third portion 2310 may be designed in various forms to control the rate at which solder flows from the second portion 2308 to the first portion 2306. FIGS. 24 and 25 show exemplary embodiments of the third portion 2310. FIG. 24 shows a top view of the routing interconnect 2304a in which the width 2402 of the third portion 2310 is larger than the diameter 2404 of the path 1504a. FIG. 25 shows a top view of another routing interconnect 2304a in which the width 2502 of the third portion 2310 is smaller than the diameter 2404 of the path 1504a. The embodiment of FIG. 24 allows a larger solder flow rate because the third portion 2310 in FIG. 24 is wider than the third portion 2310 in FIG. 25; correspondingly, the embodiment of FIG. 25 allows a smaller solder flow rate. The width of the third portion 2310 may be reduced progressively until substantially no solder flows to the path 1504a.

Conclusion

Various embodiments of the present invention have been described above; it should be understood that they are presented for illustration only and not for limitation. Those skilled in the art will recognize that various changes can be made in form and detail without departing from the spirit and scope of the present invention. Therefore, the scope of protection of the present invention is not limited to any of the embodiments described above, but should be defined in accordance with the claims and their equivalents.
Integrated circuit structures having concentrated source or drain structures with high germanium content are described. In an example, an integrated circuit structure includes a vertical arrangement of horizontal nanowires. The gate stack is around the vertical arrangement of the horizontal nanowires. A first epitaxial source or drain structure is at a first end of the vertical arrangement of the horizontal nanowires. A second epitaxial source or drain structure is at a second end of the vertical arrangement of the horizontal nanowires. Each of the first epitaxial source or drain structure and the second epitaxial source or drain structure includes silicon and germanium, where a germanium atomic concentration is higher at a core of the epitaxial source or drain structure than at a periphery of the epitaxial source or drain structure.
1. An integrated circuit structure comprising:

a vertical arrangement of horizontal nanowires;

a gate stack around the vertical arrangement of horizontal nanowires;

a first epitaxial source or drain structure at a first end of the vertical arrangement of horizontal nanowires; and

a second epitaxial source or drain structure at a second end of the vertical arrangement of horizontal nanowires, each of the first epitaxial source or drain structure and the second epitaxial source or drain structure including silicon and germanium, wherein the atomic concentration of germanium is higher at a core of the epitaxial source or drain structure than at a periphery of the epitaxial source or drain structure.

2. The integrated circuit structure of claim 1, further comprising:

a first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer along first and second sides of the gate stack, respectively.

3. The integrated circuit structure of claim 2, wherein the first epitaxial source or drain structure and the second epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer, respectively.

4. The integrated circuit structure of claim 3, wherein the germanium atomic concentration of each of the first epitaxial source or drain structure and the second epitaxial source or drain structure is lowest at the locations where they abut the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer.

5. The integrated circuit structure of claim 1, 2, 3 or 4, wherein each of the first epitaxial source or drain structure and the second epitaxial source or drain structure further comprises boron.

6. The integrated circuit structure of claim 1, 2, 3 or 4, further comprising:

a first conductive contact on the first epitaxial source or drain structure; and

a second conductive contact on the second epitaxial source or drain structure.

7. An integrated circuit structure comprising:

a vertical arrangement of horizontal nanowires;

a gate stack around the vertical arrangement of horizontal nanowires;

a first condensed epitaxial source or drain structure at a first end of the vertical arrangement of horizontal nanowires; and

a second condensed epitaxial source or drain structure at a second end of the vertical arrangement of horizontal nanowires, each of the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure including silicon and germanium, wherein the lowest germanium atomic concentration is at a free surface of the epitaxial source or drain structure.

8. The integrated circuit structure of claim 7, further comprising:

a first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer along first and second sides of the gate stack, respectively.

9. The integrated circuit structure of claim 8, wherein the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer, respectively.

10.
The integrated circuit structure of claim 9, wherein the free surface is where the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer.

11. The integrated circuit structure of claim 7, 8, 9 or 10, wherein each of the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure further comprises boron.

12. A computing device comprising:

a board; and

a component coupled to the board, the component including an integrated circuit structure comprising:

a vertical arrangement of horizontal nanowires;

a gate stack around the vertical arrangement of horizontal nanowires;

a first epitaxial source or drain structure at a first end of the vertical arrangement of horizontal nanowires; and

a second epitaxial source or drain structure at a second end of the vertical arrangement of horizontal nanowires, each of the first epitaxial source or drain structure and the second epitaxial source or drain structure including silicon and germanium, wherein the atomic concentration of germanium is higher at a core of the epitaxial source or drain structure than at a periphery of the epitaxial source or drain structure.

13. The computing device of claim 12, further comprising:

a memory coupled to the board.

14. The computing device of claim 12 or 13, further comprising:

a communication chip coupled to the board.

15. The computing device of claim 12 or 13, further comprising:

a camera coupled to the board.

16. The computing device of claim 12 or 13, further comprising:

a battery coupled to the board.

17. The computing device of claim 12 or 13, further comprising:

an antenna coupled to the board.

18. The computing device of claim 12 or 13, wherein the component is a packaged integrated circuit die.

19. The computing device of claim 12 or 13, wherein the component is selected from the group consisting of a processor, a communication chip, and a digital signal processor.

20. The computing device of claim 12 or 13, wherein the computing device is selected from the group consisting of a mobile phone, a laptop computer, a desktop computer, a server, and a set-top box.
Concentrated source or drain structures with high germanium content

Technical Field

Embodiments of the present disclosure pertain to the field of advanced integrated circuit structure fabrication and, in particular, to integrated circuit structures having condensed source or drain structures with high germanium content.

Background

The shrinking of features in integrated circuits has been a driving force behind the growing semiconductor industry over the past few decades. Shrinking to smaller and smaller features enables an increase in the density of functional units on the limited real estate of a semiconductor chip. For example, shrinking transistor size allows a greater number of memory or logic devices to be incorporated on a chip, helping to manufacture products with increased capacity. The drive for ever-greater capacity, however, is not without problems; the need to optimize the performance of each device becomes increasingly important.

In the manufacture of integrated circuit devices, multi-gate transistors (e.g., tri-gate transistors) have become more common as device dimensions continue to scale down. In conventional processes, tri-gate transistors are typically fabricated on bulk silicon substrates or silicon-on-insulator substrates. In some cases, a bulk silicon substrate is preferred because of its lower cost and because it enables a less complex tri-gate fabrication process. On the other hand, as microelectronic device dimensions shrink below the 10 nanometer (nm) node, maintaining improved mobility and short-channel control presents challenges in device fabrication. Nanowires used to fabricate devices offer improved short-channel control.

However, shrinking multi-gate and nanowire transistors is not without consequence. As the size of these basic building blocks of microelectronic circuits has decreased, and as the absolute number of basic building blocks fabricated in a given area has increased, the constraints on the lithography processes used to pattern these building blocks have become overwhelming. In particular, there may be a trade-off between the minimum dimension (critical dimension) of the features patterned in a semiconductor stack and the spacing between those features.

Variability in conventional and currently known fabrication processes may limit the possibility of extending them further into the 10 nm node or sub-10 nm node range. Therefore, the fabrication of the functional components required for future technology nodes may demand the introduction of new methods, the integration of new technologies into current fabrication processes, or their replacement of current fabrication processes.

Description of the Drawings

FIGS. 1A-1D illustrate cross-sectional views representing operations in a method of fabricating a gate-all-around integrated circuit structure having a condensed source or drain structure with high germanium content, in accordance with an embodiment of the present disclosure.

FIG. 2 illustrates a cross-sectional view representing a gate-all-around integrated circuit structure having a condensed source or drain structure with high germanium content, in accordance with an embodiment of the present disclosure.

FIG. 3A illustrates a plan view of a plurality of gate lines over a pair of semiconductor fins, according to another embodiment of the present disclosure.

FIG. 3B illustrates a cross-sectional view taken along the a-a' axis of FIG.
3A, according to an embodiment of the present disclosure.

FIG. 4 illustrates a cross-sectional view of an integrated circuit structure with trench contacts for a PMOS device, in accordance with another embodiment of the present disclosure.

FIG. 5 illustrates a cross-sectional view of an integrated circuit structure having conductive contacts on raised source or drain regions, in accordance with an embodiment of the present disclosure.

FIGS. 6A and 6B illustrate cross-sectional views of various integrated circuit structures, each having trench contacts including an overlying insulating cap layer and a gate stack including an overlying insulating cap layer, in accordance with embodiments of the present disclosure.

FIG. 7A illustrates a three-dimensional cross-sectional view of a nanowire-based integrated circuit structure, according to an embodiment of the present disclosure.

FIG. 7B illustrates a cross-sectional source or drain view of the nanowire-based integrated circuit structure of FIG. 7A, taken along the a-a' axis, in accordance with an embodiment of the present disclosure.

FIG. 7C illustrates a cross-sectional channel view of the nanowire-based integrated circuit structure of FIG. 7A, taken along the b-b' axis, according to an embodiment of the present disclosure.

FIG. 8A illustrates a computing device, according to one embodiment of the present disclosure.

FIG. 8B illustrates an interposer including one or more embodiments of the present disclosure.

FIG. 9 is an isometric view of a mobile computing platform employing an IC fabricated according to one or more processes described herein or including one or more features described herein, according to an embodiment of the present disclosure.

FIG. 10 shows a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

Detailed Description

Integrated circuit structures with condensed source or drain structures with high germanium content are described. In the following description, numerous specific details are set forth, such as specific integration and material systems, in order to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that embodiments of the present disclosure may be practiced without these specific details. In other instances, well-known features, such as integrated circuit design layouts, are not described in detail to avoid unnecessarily obscuring embodiments of the present disclosure. Furthermore, it should be appreciated that the various embodiments shown in the figures are illustrative representations and are not necessarily drawn to scale.

The following detailed description is merely illustrative in nature and is not intended to limit embodiments of the subject matter or the application and uses of such embodiments. As used herein, the word "exemplary" means "serving as an example, instance, or illustration." Any implementation described herein as exemplary is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary, or the following detailed description.

This specification includes references to "one embodiment" or "an embodiment." The appearances of the phrases "in one embodiment" or "in an embodiment" are not necessarily referring to the same embodiment. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.

Terminology.
The following paragraphs provide definitions or context for terms found in this disclosure, including the appended claims:

"Include." This term is open-ended. As used in the appended claims, this term does not exclude additional structure or operations.

"Configured to." Various units or components may be described or claimed as "configured to" perform one or more tasks. In this context, "configured to" is used to connote structure by indicating that the unit or component includes structure that performs one or more of those tasks during operation. As such, a specified unit or component may be said to be configured to perform a task even when the unit or component is not currently operational (e.g., is not turned on or active). Reciting that a unit or circuit or component is "configured to" perform one or more tasks is expressly intended not to invoke paragraph six of 35 U.S.C. §112 for that unit or component.

"First," "second," etc. As used herein, these terms are used as labels for the nouns that they precede, and do not imply any type of ordering (e.g., spatial, temporal, logical, etc.).

"Coupled." The following description refers to elements or nodes or features being "coupled" together. As used herein, unless expressly stated otherwise, "coupled" means that one element or node or feature is directly or indirectly joined to (or directly or indirectly communicates with) another element or node or feature, and not necessarily mechanically.

In addition, certain terminology is used in the following description for the purpose of reference only, and is therefore not intended to be limiting. For example, terms such as "upper," "lower," "above," and "below" refer to directions in the drawings to which reference is made. Terms such as "front," "back," "rear," "side," "outside," and "inside" describe the orientation or location, or both, of portions of a component within a consistent but arbitrary frame of reference, which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import.

"Inhibit." As used herein, inhibit is used to describe reducing or minimizing an effect. When a component or feature is described as inhibiting an action, motion, or condition, it may completely prevent the result, outcome, or future state. Additionally, "inhibit" can also refer to a reduction or lessening of an outcome, property, or effect that might otherwise occur. Accordingly, when a component, element, or feature is referred to as inhibiting a result or state, it need not completely prevent or eliminate that result or state.

Embodiments described herein may relate to front-end-of-line (FEOL) semiconductor processing and structures. FEOL is the first portion of integrated circuit (IC) fabrication in which the individual devices (e.g., transistors, capacitors, resistors, etc.) are patterned in a semiconductor substrate or layer. FEOL generally covers everything up to, but not including, the deposition of metal interconnect layers. Following the final FEOL operation, the result is typically a wafer with isolated transistors (e.g., without any wiring).

Embodiments described herein may relate to back-end-of-line (BEOL) semiconductor processing and structures. BEOL is the second portion of IC fabrication in which the individual devices (e.g., transistors, capacitors, resistors, etc.)
are interconnected using wiring on the wafer, e.g., one or more metallization layers. BEOL includes contacts, insulating layers (dielectrics), metal levels, and bonding sites for chip-to-package connections. In the BEOL portion of the fabrication stage, contacts (pads), interconnects, vias, and dielectric structures are formed. For modern IC processes, more than 10 metal layers may be added in the BEOL.

The embodiments described below may be applicable to FEOL processing and structures, BEOL processing and structures, or both FEOL and BEOL processing and structures. In particular, although an exemplary processing scheme may be illustrated using a FEOL processing scenario, such approaches may also be applicable to BEOL processing. Likewise, although an exemplary processing scheme may be illustrated using a BEOL processing scenario, such approaches may also be applicable to FEOL processing.

In accordance with one or more embodiments of the present disclosure, source or drain epitaxy (epi) concentration for achieving strain in a nanoribbon MOSFET is described.

To provide context, today's nanoribbon p-FET architectures may suffer from drive-current degradation due to strain loss in the channel. Unlike in FinFETs, it can be challenging in nanoribbon or nanosheet FETs to achieve the high-quality embedded epitaxial S/Ds that are necessary to apply strain to the channel. Indeed, after epitaxial S/D growth, the nanoribbon channel may be in a tensile state, which can further degrade device performance.

According to embodiments of the present disclosure, concentration techniques are implemented to repair and/or eliminate defects in the epitaxial S/D and to increase the effective Ge concentration in the epitaxial S/D, making the structure a better stressor for the channel. Advantages of implementing the embodiments described herein may include achieving high-quality embedded epitaxial S/Ds with effective Ge concentrations greater than nominal (e.g., greater than 30%). The concentration process can produce a gradient of Ge concentration in the epitaxial S/D, with a peak concentration near the edge of the epitaxial bubble. Compressive strain in the nanoribbon channel can be achieved in the presence of bottom isolation and internal spacers.

As an exemplary process flow, FIGS. 1A-1D illustrate cross-sectional views representing operations in a method of fabricating a gate-all-around integrated circuit structure having a condensed source or drain structure with high germanium content, in accordance with an embodiment of the present disclosure.

Referring to FIG. 1A, the process flow may begin with a dummy gate (e.g., a polysilicon gate) formed on a stack of alternating Si nanoribbon layers and sacrificial SiGe layers on a bonded or buried oxide (BOX) surface (e.g., a silicon oxide layer). As depicted, the starting structure 100 includes an oxide layer 104 (e.g., a silicon oxide layer) on a substrate 102 (e.g., a silicon substrate). A plurality of alternating SiGe sacrificial layers 110 and Si channel layers 108 is formed on the oxide layer 104. The dummy gate structure 106 includes a spacer layer 116 and/or a dummy gate dielectric, and a dummy gate electrode 114.

Referring again to FIG. 1A, an undercut etch is performed between adjacent dummy gate structures to create source or drain (S/D) cavities. As depicted, cavities 118 are formed at the source or drain locations. The etch used to form the cavities 118 leaves alternating patterned SiGe sacrificial layers 110 and patterned Si channel layers 108. Internal spacers are then formed along the edges of the patterned SiGe sacrificial layers 110 to ensure, in the final device structure, separation between the source or drain epitaxy (S/D epi) material and the subsequently formed gate. As depicted, the patterned SiGe sacrificial layers 110 are recessed laterally relative to the patterned Si channel layers 108, and a dielectric spacer material is then deposited and patterned to form internal spacers 112.
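As a way to keep the FIG. 1A bookkeeping straight, the short sketch below models the starting stack and the two FIG. 1A operations (undercut S/D etch, then lateral SiGe recess plus internal spacer fill) as simple list manipulations. The layer counts, lateral extents, and recess depth are hypothetical; this is a schematic model of the described sequence, not process code.

```python
# Schematic model of FIG. 1A: substrate 102 / oxide 104, then alternating
# sacrificial SiGe 110 and Si channel 108 layers under a dummy gate 106.
stack = ["SiGe-110", "Si-108", "SiGe-110", "Si-108", "SiGe-110", "Si-108"]

# Undercut etch between dummy gates: every layer is cut at the S/D cavity 118,
# so each layer ends at the cavity edge (lateral extent in nm, hypothetical).
gate_half_width = 10.0
extent = {i: gate_half_width for i, _ in enumerate(stack)}

# Lateral recess of the sacrificial SiGe relative to the Si channels,
# then internal spacers 112 fill the recessed regions.
recess = 4.0
spacers = []
for i, layer in enumerate(stack):
    if layer.startswith("SiGe"):
        extent[i] -= recess          # SiGe 110 recessed relative to Si 108
        spacers.append((i, recess))  # spacer 112 occupies the recess

# Every SiGe layer now sits behind a spacer; the Si channel ends stay exposed
# as seed surfaces for the S/D epitaxial growth of FIG. 1B.
assert all(extent[i] == gate_half_width - recess for i, _ in spacers)
assert all(extent[i] == gate_half_width
           for i, layer in enumerate(stack) if layer.startswith("Si-"))
```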
Referring to FIG. 1B, SiGe source or drain material growth is initiated using the Si channel layers 108 as seed layers. As depicted, initial-stage SiGe nubs 120 are formed on the Si channel layers 108 in the cavities 118.

Referring to FIG. 1C, the silicon germanium source or drain material growth continues to fill the source or drain regions of the cavities 118. As depicted, a merged source or drain structure 122 is formed. Adjacent merged source or drain structures 122 may meet at an interface 126, as depicted. Additionally or alternatively, a cavity 124 may be formed between the oxide layer 104 and the bottom of the merged source or drain structure 122.

Without being bound by theory, as best understood, the reason strained Si channels are difficult to achieve in nanoribbon architectures is that the source/drain SiGe epitaxy (e.g., the merged source or drain structures 122) is full of flaws. In the nanoribbon architecture, the SiGe epitaxy grows laterally from the channel on multiple fronts due to the presence of the internal dielectric spacers. An epitaxial film grown in this way may have a large number of dislocations and stacking faults, and is thus no longer effective at applying compressive strain to the channel; it may even apply tensile stress.

Referring to FIG. 1D, structure 150 is depicted following a concentration process that changes the S/D epitaxial composition and stress-inducing capability. In certain embodiments, a "concentration" process is implemented in which the structure is subjected to a high-temperature anneal in an oxidizing environment. This process allows preferential oxidation of Si at the surface and in-diffusion of Ge into the S/D epitaxial core. As a result, the Ge concentration in the core increases, yielding S/D SiGe epitaxy with an effectively higher Ge concentration than that of the initial growth. It has also been observed that the thermal treatment can repair some lattice defects in the S/D epitaxy, making it more effective at applying strain to the channel. Both effects can place the channel under compressive strain. As depicted, the merged source or drain structures 122 are condensed to form condensed source or drain structures 152. Adjacent condensed source or drain structures 152 may meet at the interface 126, as depicted, or alternatively may be seamless. Additionally or alternatively, the cavity 124 may remain between the oxide layer 104 and the bottom of the condensed source or drain structure 152.
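The composition shift described for the concentration anneal follows from simple conservation: oxidation consumes Si (as SiO2) and rejects Ge back into the remaining SiGe, so the Ge fraction of the un-oxidized volume rises. The sketch below works through that arithmetic for a nominal Si0.7Ge0.3 film; the consumed-Si fraction is an illustrative made-up number, not a value from this disclosure.

```python
def ge_fraction_after_condensation(x_ge: float, si_consumed_frac: float) -> float:
    """Ge atomic fraction after a fraction of the Si is oxidized away.

    Assumes Ge is conserved (rejected from the growing oxide back into the
    SiGe) and only Si is consumed, per the preferential-oxidation picture.
    """
    ge = x_ge                                      # Ge atoms, normalized
    si = (1.0 - x_ge) * (1.0 - si_consumed_frac)   # Si atoms remaining
    return ge / (ge + si)

# Nominal 30% Ge growth; hypothetically oxidize away 25% of the Si.
x_eff = ge_fraction_after_condensation(0.30, 0.25)
print(f"effective Ge fraction: {x_eff:.2f}")  # ~0.36, i.e., above the nominal 30%
```

Under a Ge-loss-free anneal, any nonzero Si consumption pushes the effective concentration above the as-grown value, consistent with the "greater than 30%" figure quoted above.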
In an embodiment, the concentration process may form a silicon oxide layer at the free or exposed surfaces of the source or drain structure and drive the remaining Ge away from the free surface, e.g., into region 154, as depicted. Free surfaces may also be at the top and/or bottom surfaces of the source or drain structures. In an embodiment, a free surface of the source or drain structure is a surface that is exposed or that is in contact with material other than the epitaxial seed layer (e.g., other than the silicon nanowires or nanoribbons). For example, a free surface may be where the source or drain structure has an interface with the internal spacers 112. Such embedded epitaxial S/Ds may have an effective Ge concentration greater than the initial nominal deposition (e.g., greater than 30%). The concentration process can produce a gradient of Ge concentration in the epitaxial source or drain structure, with a peak concentration near the edge of the epitaxial bubble, e.g., at the central portion of the interface 126.

It should be understood that the structure 150 of FIG. 1D may be subjected to further processing. As an exemplary structure fabricated with further processing, FIG. 2 shows a cross-sectional view representing a gate-all-around integrated circuit structure 200 having a condensed source or drain structure with high germanium content, according to an embodiment of the present disclosure. It should be understood that the features of structure 200 are based on modifications to structure 150, to provide additional depictions of the embodiments described herein. For example, the condensed source or drain regions are shown completely filling the cavities between the gates, and further filling recesses in the underlying dielectric layer 204.

Referring to FIG. 2, where a dummy structure is used, the dummy gate structure is removed, and the SiGe sacrificial layers are then selectively etched away to release the Si channels 208. In the locations previously occupied by the dummy gate structure and the SiGe sacrificial layers, a gate dielectric 252 (e.g., a high-k gate dielectric) and a gate electrode 254 are formed. With the gate electrode 254 recessed between the gate spacers 258, an etch stop layer 256 may be formed on the gate electrode 254. A source or drain contact 260 is formed on the condensed source or drain structure 230. The condensed source or drain structure 230 may be partially recessed in this process to form a recessed condensed source or drain structure 230, as depicted. With the source or drain contacts 260 recessed between the gate spacers 258, an etch stop layer 262 may be formed on the source or drain contacts 260, as depicted. An interlayer dielectric layer 264 may be included between the source or drain contact 260 and the gate spacer 258, also as depicted.

Referring again to FIGS. 1D and 2, in accordance with an embodiment of the present disclosure, an integrated circuit structure includes a vertical arrangement of horizontal nanowires. A gate stack surrounds the vertical arrangement of horizontal nanowires. A first epitaxial source or drain structure is at a first end of the vertical arrangement of horizontal nanowires. A second epitaxial source or drain structure is at a second end of the vertical arrangement of horizontal nanowires. Each of the first epitaxial source or drain structure and the second epitaxial source or drain structure includes silicon and germanium, where the germanium atomic concentration is higher at the core of the epitaxial source or drain structure than at the periphery of the epitaxial source or drain structure.

In an embodiment, the integrated circuit structure further includes a first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer along the first side and the second side, respectively, of the gate stack.
In one embodiment, the first epitaxial source or drain structure and the second epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer, respectively. In certain such embodiments, the germanium atomic concentration of each of the first epitaxial source or drain structure and the second epitaxial source or drain structure is lowest at the locations where they adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer. In an embodiment, each of the first epitaxial source or drain structure and the second epitaxial source or drain structure further includes boron.

As used throughout this document, a silicon layer or structure may be used to describe a silicon material composed of a substantial amount, if not all, silicon. It should be understood, however, that in practice 100% pure Si may be difficult to form, and such layers may therefore include small percentages of carbon, germanium, or tin. These impurities may be included as unavoidable impurities or constituents during Si deposition, or may "contaminate" the Si upon diffusion during post-deposition processing. Accordingly, embodiments described herein involving a silicon layer may include silicon layers that contain relatively small amounts (e.g., "impurity" levels) of non-Si atoms or species, such as Ge, C, or Sn. It should be understood that the silicon layers described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus, or arsenic.

As used throughout this document, a germanium layer or structure may be used to describe a germanium material composed of a substantial amount, if not all, germanium. It should be understood, however, that in practice 100% pure Ge may be difficult to form, and such layers may therefore include small percentages of silicon, carbon, or tin. These impurities may be included as unavoidable impurities or constituents during Ge deposition, or may "contaminate" the Ge upon diffusion during post-deposition processing. Accordingly, embodiments described herein involving a germanium layer may include germanium layers that contain relatively small amounts (e.g., "impurity" levels) of non-Ge atoms or species, such as carbon, silicon, or tin. It should be understood that the germanium layers described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus, or arsenic.

As used throughout this document, a silicon germanium layer or structure may be used to describe a silicon germanium material composed of substantial portions of both silicon and germanium (e.g., at least 5% of each). In some embodiments, the amount of germanium is greater than the amount of silicon. In a particular embodiment, a silicon germanium layer includes approximately 60% germanium and approximately 40% silicon (Si40Ge60). In other embodiments, the amount of silicon is greater than the amount of germanium. In a particular embodiment, a silicon germanium layer includes approximately 30% germanium and approximately 70% silicon (Si70Ge30). It should be appreciated that, in practice, 100% pure silicon germanium (commonly referred to as SiGe) may be difficult to form, and such layers may therefore include small percentages of carbon or tin. These impurities may be included as unavoidable impurities or constituents during SiGe deposition, or may "contaminate" the SiGe upon diffusion during post-deposition processing. Accordingly, embodiments described herein involving a silicon germanium layer may include silicon germanium layers that contain relatively small amounts (e.g., "impurity" levels) of non-Ge and non-Si atoms or species, such as carbon or tin. It should be understood that the silicon germanium layers described herein may be undoped or may be doped with dopant atoms such as boron, phosphorus, or arsenic.
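The reason higher Ge content makes a better channel stressor can be made quantitative with Vegard's law: the relaxed SiGe lattice constant interpolates roughly linearly between Si (about 5.431 Å) and Ge (about 5.658 Å), so the mismatch to a Si channel grows with Ge fraction. The sketch below compares the Si70Ge30 and Si40Ge60 compositions named above; the linear interpolation is a standard first-order approximation, not a value taken from this disclosure.

```python
A_SI, A_GE = 5.431, 5.658  # relaxed lattice constants in angstroms

def sige_lattice_constant(x_ge: float) -> float:
    """Relaxed SiGe lattice constant via Vegard's law (linear interpolation)."""
    return A_SI + x_ge * (A_GE - A_SI)

def mismatch_to_si(x_ge: float) -> float:
    """Fractional lattice mismatch of relaxed Si(1-x)Ge(x) to a Si channel."""
    return (sige_lattice_constant(x_ge) - A_SI) / A_SI

for x in (0.30, 0.60):  # Si70Ge30 and Si40Ge60 from the text above
    print(f"x_Ge = {x:.2f}: mismatch ~ {100 * mismatch_to_si(x):.2f}%")
# x_Ge = 0.30: mismatch ~ 1.25%
# x_Ge = 0.60: mismatch ~ 2.51%
```

A defect-free S/D at a higher effective Ge fraction therefore has more mismatch available to compress the channel, which is the motivation for the concentration anneal described above.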
In embodiments, semiconductor structures or devices that benefit from the source or drain structures described herein are non-planar devices such as, but not limited to, fin-FET devices, tri-gate devices, nanoribbon devices, or nanowire devices. Additionally, although structures may be distinguished as nanowires or nanoribbons, the term nanowire may be used to describe both unless otherwise stated.

In another aspect, the source or drain structures described herein can be implemented for integrated circuit structures based on architectures other than nanowires and nanoribbons (e.g., for fin-based devices). As an example, FIG. 3A shows a plan view of a plurality of gate lines over a pair of semiconductor fins, according to another embodiment of the present disclosure.

Referring to FIG. 3A, a plurality of active gate lines 304 is formed over a plurality of semiconductor fins 300. Dummy gate lines 306 are at the ends of the plurality of semiconductor fins 300. The spaces 308 between the gate lines 304/306 are locations where trench contacts may be formed to provide conductive contact to source or drain regions (e.g., source or drain regions 351, 352, 353, and 354). In an embodiment, the pattern of the plurality of gate lines 304/306 or the pattern of the plurality of semiconductor fins 300 is described as a grating structure. In one embodiment, the grating-like pattern includes a plurality of gate lines 304/306 and/or a plurality of semiconductor fins 300 spaced at a constant pitch, having a constant width, or both.

FIG. 3B illustrates a cross-sectional view taken along the a-a' axis of FIG. 3A, according to an embodiment of the present disclosure.

Referring to FIG. 3B, a plurality of active gate lines 364 is formed over a semiconductor fin 362 formed over a substrate 360. Dummy gate lines 366 are at the ends of the semiconductor fin 362. A dielectric layer 370 is outside the dummy gate lines 366. Trench contact material 397 is between the active gate lines 364, and between the dummy gate lines 366 and the active gate lines 364. Embedded source or drain structures 368 and corresponding silicide layers 369 are in the semiconductor fin 362 between the active gate lines 364, and between the dummy gate lines 366 and the active gate lines 364. In an embodiment, the embedded source or drain structures 368 have structures and/or compositions such as those described above in connection with the condensed source or drain structures 152 of FIG. 1D and/or other embodiments described herein.

An active gate line 364 includes a gate dielectric structure 398/399, a work-function gate electrode portion 374 and a fill gate electrode portion 376, and a dielectric capping layer 378. Dielectric spacers 380 line the active gate lines 364 and the dummy gate lines 366.

In another aspect, trench contact structures (e.g., for source or drain regions) are described. As an example, FIG. 4 shows a cross-sectional view of an integrated circuit structure with trench contacts for a PMOS device, according to another embodiment of the present disclosure.

Referring to FIG. 4, an integrated circuit structure 450 includes a fin 452, e.g., a silicon fin.
A gate dielectric layer 454 is over the fin 452. A gate electrode 456 is over the gate dielectric layer 454. In an embodiment, the gate electrode 456 includes a conformal conductive layer 458 and a conductive fill 460. In an embodiment, a dielectric cap 462 is over the gate electrode 456 and over the gate dielectric layer 454. The gate electrode has a first side 456A and a second side 456B opposite the first side 456A. Dielectric spacers 463 are along the sidewalls of the gate electrode 456. In one embodiment, the gate dielectric layer 454 is additionally between a first one of the dielectric spacers 463 and the first side 456A of the gate electrode 456, and between a second one of the dielectric spacers 463 and the second side 456B of the gate electrode 456, as depicted. In an embodiment, although not depicted, a thin oxide layer (e.g., a thermal or chemical silicon oxide or silicon dioxide layer) is between the fin 452 and the gate dielectric layer 454.

A first semiconductor source or drain region 464 and a second semiconductor source or drain region 466 are adjacent the first side 456A and the second side 456B, respectively, of the gate electrode 456. In one embodiment, the first semiconductor source or drain region 464 and the second semiconductor source or drain region 466 include embedded epitaxial regions and corresponding silicide layers 495 and 497, and are formed in recesses 465 and 467, respectively, of the fin 452, as depicted. In embodiments, the embedded source or drain structures 464 and 466 have structures and/or compositions such as those described above in connection with the source or drain structure 152 of FIG. 1D and/or other embodiments described herein.

A first trench contact structure 468 and a second trench contact structure 470 are over the first semiconductor source or drain region 464 and the second semiconductor source or drain region 466, adjacent the first side 456A and the second side 456B of the gate electrode 456, respectively. Both the first trench contact structure 468 and the second trench contact structure 470 include a U-shaped metal layer 472 and a T-shaped metal layer 474 on and covering the entire U-shaped metal layer 472. In one embodiment, the U-shaped metal layer 472 and the T-shaped metal layer 474 have different compositions. In one such embodiment, the U-shaped metal layer 472 includes titanium and the T-shaped metal layer 474 includes cobalt. In one embodiment, both the first trench contact structure 468 and the second trench contact structure 470 further include a third metal layer 476 on the T-shaped metal layer 474. In one such embodiment, the third metal layer 476 and the U-shaped metal layer 472 have the same composition. In a particular embodiment, the third metal layer 476 and the U-shaped metal layer 472 include titanium, and the T-shaped metal layer 474 includes cobalt.

A first trench contact via 478 is electrically connected to the first trench contact 468. In a particular embodiment, the first trench contact via 478 is on and coupled to the third metal layer 476 of the first trench contact 468. The first trench contact via 478 is also over and in contact with a portion of one of the dielectric spacers 463, and over and in contact with a portion of the dielectric cap 462. A second trench contact via 480 is electrically connected to the second trench contact 470. In a particular embodiment, the second trench contact via 480 is on and coupled to the third metal layer 476 of the second trench contact 470.
The second trench contact via 480 is also over and in contact with a portion of another of the dielectric spacers 463, and over and in contact with another portion of the dielectric cap 462.

In an embodiment, the metal silicide layers 495 and 497 include nickel, platinum, and silicon. In certain such embodiments, the first semiconductor source or drain region 464 and the second semiconductor source or drain region 466 are a first P-type semiconductor source or drain region and a second P-type semiconductor source or drain region. In one embodiment, the metal silicide layers 495 and 497 also include boron.

FIG. 5 illustrates a cross-sectional view of an integrated circuit structure having conductive contacts on raised source or drain regions, in accordance with another embodiment of the present disclosure.

Referring to FIG. 5, a semiconductor structure 550 includes a gate structure 552 over a substrate 554. The gate structure 552 includes a gate dielectric layer 552A, a work-function layer 552B, and a gate fill 552C. A source region 558 and a drain region 560 are on opposite sides of the gate structure 552. A source or drain contact 562 is electrically connected to the source region 558 and the drain region 560, and is spaced apart from the gate structure 552 by one or both of an interlayer dielectric layer 564 or gate dielectric spacers 566. The source region 558 and the drain region 560 include epitaxial or embedded material regions and corresponding silicide layers 502 formed in etched regions of the substrate 554. The embedded source or drain regions 558 and 560 have structures and/or compositions such as those described above in connection with the source or drain structure 152 of FIG. 1D and/or other embodiments described herein.

In an embodiment, the source or drain contact 562 includes a barrier layer 562A and a conductive trench fill material 562B. In one embodiment, the barrier layer 562A is a high-purity metal layer having a total atomic composition that includes 98% or more titanium. In one such embodiment, the total atomic composition of the high-purity metal layer further includes 0.5-2% chlorine. In an embodiment, the high-purity metal layer has a thickness variation of 30% or less. In an embodiment, the conductive trench fill material 562B is composed of a conductive material such as, but not limited to, Cu, Al, W, Co, or alloys thereof.
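The numeric limits just given for the barrier layer 562A (at least 98% Ti, 0.5-2% Cl, thickness variation of 30% or less) can be expressed as a simple compliance check. The sketch below does exactly that; the sample measurement values are hypothetical, and the thickness-variation metric is an assumption since the text does not define it.

```python
def barrier_562a_in_spec(ti_pct: float, cl_pct: float,
                         thicknesses_nm: list[float]) -> bool:
    """Check the barrier-layer limits stated above for layer 562A.

    Thickness variation is taken here as (max - min) / max; the disclosure
    does not pin down the metric, so this definition is an assumption.
    """
    t_max, t_min = max(thicknesses_nm), min(thicknesses_nm)
    variation = (t_max - t_min) / t_max
    return ti_pct >= 98.0 and 0.5 <= cl_pct <= 2.0 and variation <= 0.30

# Hypothetical measurements across a few points of the layer.
print(barrier_562a_in_spec(98.6, 1.1, [9.0, 10.5, 11.2]))  # True
print(barrier_562a_in_spec(96.0, 1.1, [9.0, 10.5, 11.2]))  # False: Ti too low
```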
In one such embodiment, the gate electrode stack of the gate line surrounds at least a top surface and a pair of sidewalls of the three-dimensional body. In another embodiment, at least the channel region is made as a discrete three-dimensional body, such as in a gate-all-around device. In one such embodiment, each gate electrode stack of the plurality of gate lines completely surrounds the channel region.

More generally, one or more embodiments relate to methods, and structures formed therefrom, for directly landing gate contact vias on the gates of active transistors. This approach can eliminate the need to extend gate lines over isolation for contact purposes. This approach can also eliminate the need for a separate gate contact (GCN) layer for conducting signals from the gate lines or structures. In an embodiment, elimination of the aforementioned features is accomplished by recessing the contact metal in the trench contact (TCN) and introducing an additional dielectric material (e.g., TILA) into the process flow. The additional dielectric material is included as a trench contact dielectric capping layer having etching properties different from those of the gate dielectric capping layer (e.g., GILA) already used for trench contact alignment in gate aligned contact process (GAP) processing schemes.

In an embodiment, providing an integrated circuit structure involves the formation of a contact pattern that aligns substantially perfectly with an existing gate pattern, while eliminating the use of photolithographic operations with very tight registration budgets. In one such embodiment, the method enables the use of intrinsically highly selective wet etching (e.g., as compared to dry or plasma etching) to create the contact openings. In an embodiment, the contact pattern is formed by utilizing the existing gate pattern in combination with a contact plug lithography operation. In one such embodiment, the method eliminates the need for the otherwise critical lithography operations used in other methods to generate a contact pattern. In an embodiment, the trench contact grid is not patterned separately, but is instead formed between polysilicon (gate) lines. For example, in one such embodiment, the trench contact grid is formed after gate grating patterning but before gate grating cutting.

Furthermore, the gate stack structure can be fabricated by a replacement gate process. In such a scheme, the dummy gate material (e.g., polysilicon or silicon nitride pillar material) can be removed and replaced with a permanent gate electrode material. In one such embodiment, the permanent gate dielectric layer is also formed in this process, rather than being carried through from earlier processing. In an embodiment, the dummy gate is removed by a dry etch or wet etch process. In one embodiment, the dummy gate is composed of polysilicon or amorphous silicon and is removed using a dry etching process including SF6. In another embodiment, the dummy gate is composed of polysilicon or amorphous silicon and is removed using a wet etch process including aqueous NH4OH or tetramethylammonium hydroxide. In one embodiment, the dummy gate is composed of silicon nitride and is removed using a wet etch including aqueous phosphoric acid.

In embodiments, one or more of the methods described herein essentially combine a dummy and replacement gate process with a dummy and replacement contact process to obtain the integrated circuit structures.
In one such embodiment, the replacement contact process is performed after the replacement gate process to allow high temperature annealing of at least a portion of the permanent gate stack. For example, in certain such embodiments, the annealing of at least a portion of the permanent gate structure is performed at a temperature greater than about 600 degrees Celsius, e.g., after forming the gate dielectric layer and prior to forming the permanent contacts.

It will be appreciated that different structural relationships between the insulating gate capping layer and the insulating trench contact capping layer may be fabricated. As examples, FIGS. 6A and 6B show cross-sectional views of various integrated circuit structures, each having trench contacts that include an overlying insulating cap layer and a gate stack that includes an overlying insulating cap layer, in accordance with embodiments of the present disclosure.

Referring to FIGS. 6A and 6B, integrated circuit structures 600A and 600B, respectively, include a fin 602, e.g., a silicon fin. Although depicted as a cross-sectional view, it should be understood that the fin 602 has a top 602A and sidewalls (into and out of the page from the perspective shown). A first gate dielectric layer 604 and a second gate dielectric layer 606 are over the top 602A of the fin 602 and laterally adjacent to the sidewalls of the fin 602. A first gate electrode 608 and a second gate electrode 610 are over the first gate dielectric layer 604 and the second gate dielectric layer 606, respectively, which are over the top 602A of the fin 602 and laterally adjacent to the sidewalls of the fin 602. The first gate electrode 608 and the second gate electrode 610 each include a conformal conductive layer 609A (e.g., a work function setting layer) and a conductive fill material 609B over the conformal conductive layer 609A. Both the first gate electrode 608 and the second gate electrode 610 have a first side 612 and a second side 614 opposite the first side 612. Both the first gate electrode 608 and the second gate electrode 610 also have an insulating cap 616 having a top surface 618.

A first dielectric spacer 620 is adjacent to the first side 612 of the first gate electrode 608. A second dielectric spacer 622 is adjacent to the second side 614 of the second gate electrode 610. A semiconductor source or drain region 624 is adjacent to the first dielectric spacer 620 and the second dielectric spacer 622. A trench contact structure 626 is over the semiconductor source or drain region 624, adjacent to the first dielectric spacer 620 and the second dielectric spacer 622. In an embodiment, the semiconductor source or drain region 624 has a structure and/or composition such as those described above in connection with the source or drain structure 152 of FIG. 1D and/or other embodiments described herein.

The trench contact structure 626 includes an insulating cap 628 over a conductive structure 630. The insulating cap 628 of the trench contact structure 626 has a top surface 629 that is substantially coplanar with the top surfaces 618 of the insulating caps 616 of the first gate electrode 608 and the second gate electrode 610. In an embodiment, the insulating cap 628 of the trench contact structure 626 extends laterally into recesses 632 in the first dielectric spacer 620 and the second dielectric spacer 622.
In such an embodiment, the insulating cap 628 of the trench contact structure 626 overhangs the conductive structure 630 of the trench contact structure 626. However, in other embodiments, the insulating cap 628 of the trench contact structure 626 does not extend laterally into the recesses 632 in the first dielectric spacer 620 and the second dielectric spacer 622, and thus does not overhang the conductive structure 630 of the trench contact structure 626.

It should be appreciated that the conductive structure 630 of the trench contact structure 626 need not be rectangular, as depicted in FIGS. 6A and 6B. For example, the conductive structure 630 of the trench contact structure 626 may have a cross-sectional geometry similar to or the same as the geometry shown for the conductive structure 630A in the projection of FIG. 6A.

In an embodiment, the insulating cap 628 of the trench contact structure 626 has a composition different from that of the insulating caps 616 of the first gate electrode 608 and the second gate electrode 610. In one such embodiment, the insulating cap 628 of the trench contact structure 626 includes a carbide material, e.g., a silicon carbide material, and the insulating caps 616 of the first gate electrode 608 and the second gate electrode 610 include a nitride material, e.g., a silicon nitride material.

In an embodiment, the insulating caps 616 of the first gate electrode 608 and the second gate electrode 610 both have a bottom surface 617A below a bottom surface 628A of the insulating cap 628 of the trench contact structure 626, as depicted in FIG. 6A. In another embodiment, the insulating caps 616 of the first gate electrode 608 and the second gate electrode 610 both have a bottom surface 617B that is substantially coplanar with a bottom surface 628B of the insulating cap 628 of the trench contact structure 626, as depicted in FIG. 6B. In another embodiment, although not depicted, the insulating caps 616 of the first gate electrode 608 and the second gate electrode 610 both have bottom surfaces above the bottom surface of the insulating cap 628 of the trench contact structure 626.

In an embodiment, the conductive structure 630 of the trench contact structure 626 includes a U-shaped metal layer 634, a T-shaped metal layer 636 on and covering the entire U-shaped metal layer 634, and a third metal layer 638 on the T-shaped metal layer 636. The insulating cap 628 of the trench contact structure 626 is on the third metal layer 638. In one such embodiment, the third metal layer 638 and the U-shaped metal layer 634 include titanium, and the T-shaped metal layer 636 includes cobalt. In certain such embodiments, the T-shaped metal layer 636 also includes carbon.

In an embodiment, a metal silicide layer 640 is directly between the conductive structure 630 of the trench contact structure 626 and the semiconductor source or drain region 624. In one such embodiment, the metal silicide layer 640 includes nickel, platinum, and silicon. In certain such embodiments, the semiconductor source or drain region 624 is a P-type semiconductor source or drain region.

To highlight an exemplary integrated circuit structure having three vertically arranged nanowires, FIG. 7A shows a three-dimensional cross-sectional view of a nanowire-based integrated circuit structure, in accordance with an embodiment of the present disclosure. FIG. 7B shows a cross-sectional source or drain view of the nanowire-based integrated circuit structure of FIG.
7A, taken along the a-a' axis. FIG. 7C shows a cross-sectional channel view of the nanowire-based integrated circuit structure of FIG. 7A, taken along the b-b' axis.

Referring to FIG. 7A, an integrated circuit structure 700 includes one or more vertically stacked nanowires (set 704) over a substrate 702. In an embodiment, as depicted, a relaxation buffer layer 702C, a defect modification layer 702B, and a lower substrate portion 702A are included in the substrate 702. For illustration purposes, an optional fin below the bottommost nanowire and formed from the substrate 702 is not depicted, in order to emphasize the nanowire portion. Embodiments herein are directed to both single-wire devices and multi-wire devices. As an example, a device based on three nanowires, with nanowires 704A, 704B, and 704C, is shown for illustrative purposes. For ease of description, nanowire 704A is used as an example where the description focuses on only one of the nanowires. It should be understood that, where the properties of one nanowire are described, embodiments based on multiple nanowires may have the same or substantially the same properties for each of the nanowires.

Each of the nanowires 704 includes a channel region 706 in the nanowire. The channel region 706 has a length (L). Referring to FIG. 7C, the channel region also has a perimeter (Pc) orthogonal to the length (L). Referring to FIGS. 7A and 7C, a gate electrode stack 708 surrounds the entire perimeter (Pc) of each of the channel regions 706. The gate electrode stack 708 includes a gate electrode and a gate dielectric layer between the channel region 706 and the gate electrode (not shown). In an embodiment, the channel region is discrete in that it is completely surrounded by the gate electrode stack 708 without any intervening material (e.g., underlying substrate material or overlying channel fabrication material). Accordingly, in embodiments with a plurality of nanowires 704, the channel regions 706 of the nanowires are also discrete with respect to one another.

Referring to FIGS. 7A and 7B, the integrated circuit structure 700 includes a pair of non-discrete source or drain regions 710/712. The pair of non-discrete source or drain regions 710/712 are on either side of the channel regions 706 of the plurality of vertically stacked nanowires 704. Furthermore, the pair of non-discrete source or drain regions 710/712 adjoin the channel regions 706 of the plurality of vertically stacked nanowires 704. In one such embodiment, not depicted, the pair of non-discrete source or drain regions 710/712 directly vertically adjoin the channel regions 706, in that epitaxial growth is on and between portions of the nanowires extending beyond the channel regions 706, where the nanowire ends are within the source or drain structures. In another embodiment, as depicted in FIG. 7A, the pair of non-discrete source or drain regions 710/712 indirectly vertically adjoin the channel regions 706, in that they are formed at the ends of the nanowires rather than between the nanowires. In an embodiment, the non-discrete source or drain regions 710/712 have structures and/or compositions such as those described above in connection with the source or drain structure 152 of FIG. 1D and/or other embodiments described herein.

In an embodiment, as depicted, the source or drain regions 710/712 are non-discrete in that there is not a separate and discrete source or drain region for each channel region 706 of a nanowire 704.
Accordingly, in embodiments with a plurality of nanowires 704, the source or drain regions 710/712 of the nanowires are global or unified source or drain regions, rather than discrete for each nanowire. That is, the non-discrete source or drain regions 710/712 are global in the sense that a single unified feature serves as a source or drain region for a plurality (in this case, three) of nanowires 704 and, more particularly, for more than one discrete channel region 706. In one embodiment, from a cross-sectional perspective orthogonal to the length of the discrete channel regions 706, each of the pair of non-discrete source or drain regions 710/712 is approximately rectangular in shape, with a bottom tapered portion and a top vertex portion, as depicted in FIG. 7B. In other embodiments, however, the source or drain regions 710/712 of the nanowires are relatively larger yet discrete non-vertically-merged epitaxial structures, e.g., nubs.

In accordance with an embodiment of the present disclosure, and as depicted in FIGS. 7A and 7B, the integrated circuit structure 700 further includes a pair of contacts 714, each contact 714 on one of the pair of non-discrete source or drain regions 710/712. In one such embodiment, in a vertical sense, each contact 714 completely surrounds the corresponding non-discrete source or drain region 710/712. In another aspect, the entire perimeter of the non-discrete source or drain regions 710/712 may not be accessible for contact with the contacts 714, and the contacts 714 thus only partially surround the non-discrete source or drain regions 710/712, as depicted in FIG. 7B. In a comparative example, not depicted, the entire perimeter of the non-discrete source or drain regions 710/712, as taken along the a-a' axis, is surrounded by the contacts 714.

Referring again to FIG. 7A, in an embodiment, the integrated circuit structure 700 further includes a pair of spacers 716. As depicted, outer portions of the pair of spacers 716 may overlap portions of the non-discrete source or drain regions 710/712, providing "embedded" portions of the non-discrete source or drain regions 710/712 beneath the pair of spacers 716. Also as depicted, the embedded portions of the non-discrete source or drain regions 710/712 may not extend beneath the entirety of the pair of spacers 716.

The substrate 702 may be composed of a material suitable for the fabrication of integrated circuit structures. In one embodiment, the substrate 702 includes a lower bulk substrate composed of a single crystal material, which may include, but is not limited to, silicon, germanium, silicon germanium, germanium tin, silicon germanium tin, or a group III-V compound semiconductor material. An upper insulator layer, composed of a material that may include, but is not limited to, silicon dioxide, silicon nitride, or silicon oxynitride, is on the lower bulk substrate. Thus, the structure 700 may be fabricated from a starting semiconductor-on-insulator substrate. Alternatively, the structure 700 is formed directly from a bulk substrate, and local oxidation is used to form electrically insulating portions in place of the upper insulator layer described above. In another alternative embodiment, the structure 700 is formed directly from a bulk substrate, and doping is used to form electrically isolated active regions, such as nanowires, thereupon.
In one such embodiment, the first nanowire (i.e., proximate the substrate) is in the form of an omega-FET (Ω-FET) type structure.

In embodiments, the nanowires 704 may be sized as wires or ribbons, as described below, and may have squared-off or rounded corners. In an embodiment, the nanowires 704 are composed of a material such as, but not limited to, silicon, germanium, or a combination thereof. In one such embodiment, the nanowires are single crystalline. For example, for a silicon nanowire 704, a single crystal nanowire may be based on a (100) global orientation, i.e., having a <100> plane in the z-direction. Other orientations are also contemplated, as described below. In an embodiment, the dimensions of the nanowires 704, from a cross-sectional perspective, are on the nanoscale. For example, in a particular embodiment, the smallest dimension of the nanowires 704 is less than about 20 nanometers. In an embodiment, the nanowires 704 are composed of a strained material, particularly in the channel regions 706.

Referring to FIG. 7C, in an embodiment, each of the channel regions 706 has a width (Wc) and a height (Hc), the width (Wc) being approximately the same as the height (Hc). That is, in both cases, the cross-sectional profile of the channel region 706 is square-like or, if corner-rounded, circle-like. In another aspect, the width and height of the channel region need not be the same, as is the case, e.g., for nanoribbons as described throughout.

As described throughout the present application, the substrate may be composed of a semiconductor material that can withstand the manufacturing process and in which charge can migrate. In embodiments, the substrate is a bulk substrate composed of a crystalline silicon, silicon/germanium, or germanium layer doped with a charge carrier such as, but not limited to, phosphorus, arsenic, boron, or a combination thereof, to form an active region. In one embodiment, the concentration of silicon atoms in such a bulk substrate is greater than 97%. In another embodiment, the bulk substrate consists of an epitaxial layer grown on top of a distinct crystalline substrate, e.g., a silicon epitaxial layer grown on top of a boron-doped bulk silicon single crystal substrate. The bulk substrate may alternatively be composed of a group III-V material. In an embodiment, the bulk substrate is composed of a III-V material such as, but not limited to, gallium nitride, gallium phosphide, gallium arsenide, indium phosphide, indium antimonide, indium gallium arsenide, aluminum gallium arsenide, indium gallium phosphide, or a combination thereof. In one embodiment, the bulk substrate is composed of a III-V material, and the charge carrier dopant impurity atoms are atoms such as, but not limited to, carbon, silicon, germanium, oxygen, sulfur, selenium, or tellurium.

As described throughout the present application, isolation regions, such as shallow trench isolation regions or sub-fin isolation regions, may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, portions of a permanent gate structure from the underlying bulk substrate, or to isolate active regions formed within the underlying bulk substrate (e.g., isolating fin active regions).
For example, in one embodiment, the isolation regions are composed of one or more layers of a dielectric material such as, but not limited to, silicon dioxide, silicon oxynitride, silicon nitride, carbon-doped silicon nitride, or a combination thereof.

As described throughout the present application, a gate line or gate structure may be composed of a gate electrode stack that includes a gate dielectric layer and a gate electrode layer. In an embodiment, the gate electrode of the gate electrode stack is composed of a metal gate, and the gate dielectric layer is composed of a high-k material. For example, in one embodiment, the gate dielectric layer is composed of a material such as, but not limited to, hafnium oxide, hafnium oxynitride, hafnium silicate, lanthanum oxide, zirconium oxide, zirconium silicate, tantalum oxide, barium strontium titanate, barium titanate, strontium titanate, yttrium oxide, aluminum oxide, lead scandium titanium oxide, lead zinc niobate, or a combination thereof. Additionally, a portion of the gate dielectric layer may include a layer of native oxide formed from the top layers of the semiconductor substrate. In an embodiment, the gate dielectric layer is composed of a top high-k portion and a lower portion composed of an oxide of a semiconductor material. In one embodiment, the gate dielectric layer consists of a top portion of hafnium oxide and a bottom portion of silicon dioxide or silicon oxynitride. In some embodiments, a portion of the gate dielectric is a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate.

In one embodiment, the gate electrode is composed of a metal layer such as, but not limited to, metal nitrides, metal carbides, metal silicides, metal aluminides, hafnium, zirconium, titanium, tantalum, aluminum, ruthenium, palladium, platinum, cobalt, nickel, or conductive metal oxides. In a specific embodiment, the gate electrode is composed of a non-work-function-setting fill material formed above a metal work-function-setting layer. The gate electrode layer may be composed of a P-type work function metal or an N-type work function metal, depending on whether the transistor is to be a PMOS or an NMOS transistor. In some embodiments, the gate electrode layer may be composed of a stack of two or more metal layers, where one or more of the metal layers is a work function metal layer and at least one of the metal layers is a conductive fill layer. For a PMOS transistor, metals that may be used for the gate electrode include, but are not limited to, ruthenium, palladium, platinum, cobalt, nickel, and conductive metal oxides, e.g., ruthenium oxide. A P-type metal layer will enable the formation of a PMOS gate electrode with a work function between about 4.9 eV and about 5.2 eV. For an NMOS transistor, metals that may be used for the gate electrode include, but are not limited to, hafnium, zirconium, titanium, tantalum, aluminum, alloys of these metals, and carbides of these metals, such as hafnium carbide, zirconium carbide, titanium carbide, tantalum carbide, and aluminum carbide. An N-type metal layer will enable the formation of an NMOS gate electrode with a work function between about 3.9 eV and about 4.2 eV.
In some embodiments, the gate electrode may be formed as a "U"-shaped structure that includes a bottom portion substantially parallel to the surface of the substrate and two sidewall portions that are substantially perpendicular to the top surface of the substrate. In another embodiment, at least one of the metal layers that form the gate electrode may simply be a planar layer that is substantially parallel to the top surface of the substrate and does not include sidewall portions substantially perpendicular to the top surface of the substrate. In other embodiments of the present disclosure, the gate electrode may be composed of a combination of U-shaped structures and planar, non-U-shaped structures. For example, the gate electrode may be composed of one or more U-shaped metal layers formed atop one or more planar, non-U-shaped layers.

As described throughout the present application, spacers associated with gate lines or electrode stacks may be composed of a material suitable to ultimately electrically isolate, or contribute to the isolation of, a permanent gate structure from adjacent conductive contacts, such as self-aligned contacts. For example, in one embodiment, the spacers are composed of a dielectric material such as, but not limited to, silicon dioxide, silicon oxynitride, silicon nitride, or carbon-doped silicon nitride.

In embodiments, the methods described herein may involve the formation of a contact pattern that is very well aligned to an existing gate pattern, while eliminating the use of lithographic operations with exceedingly tight registration budgets. In one such embodiment, the method enables the use of intrinsically highly selective wet etching (e.g., as compared to dry or plasma etching) to create the contact openings. In an embodiment, the contact pattern is formed by utilizing the existing gate pattern in combination with a contact plug lithography operation. In one such embodiment, the method eliminates the need for the otherwise critical lithography operations used in other methods to generate a contact pattern. In an embodiment, the trench contact grid is not patterned separately, but is instead formed between polysilicon (gate) lines. For example, in one such embodiment, the trench contact grid is formed after gate grating patterning but before gate grating cutting.

A pitch division process and patterning scheme may be implemented to enable the embodiments described herein, or may be included as part of the embodiments described herein. Pitch division patterning generally refers to pitch halving, pitch quartering, and so on. Pitch division schemes may be applicable to FEOL processing, BEOL processing, or both FEOL (device) and BEOL (metallization) processing. In accordance with one or more embodiments described herein, photolithography is first performed to print unidirectional lines (e.g., either strictly unidirectional or predominantly unidirectional) with a predefined pitch. A pitch division process is then implemented as a technique to increase line density.

In embodiments, the term "grating structure" for fins, gate lines, metal lines, ILD lines, or hardmask lines is used herein to refer to a tight-pitch grating structure. In one such embodiment, the tight pitch is not achievable directly through the selected lithography. For example, a pattern based on the selected lithography may first be formed, but the pitch may be halved by patterning using a spacer mask, as is known in the art. Even further, the original pitch may be quartered by a second round of spacer mask patterning.
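As an illustrative arithmetic example (the specific values here are assumptions for illustration and are not taken from this disclosure): a grating first printed at a 128 nm lithographic pitch would be reduced to a 64 nm pitch by one round of spacer mask patterning (pitch halving), and to a 32 nm pitch by a second round of spacer mask patterning (pitch quartering).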
Accordingly, the grating-like patterns described herein may have metal lines, ILD lines, or hardmask lines spaced at a substantially consistent pitch and having a substantially consistent width. For example, in some embodiments the pitch variation will be within ten percent and the width variation will be within ten percent, and in some embodiments the pitch variation will be within five percent and the width variation will be within five percent. The pattern may be fabricated by pitch halving or pitch quartering, or another pitch division approach. In an embodiment, the grating is not necessarily single pitch.

In an embodiment, as used throughout the present description, an interlayer dielectric (ILD) material is composed of, or includes, a layer of a dielectric or insulating material. Examples of suitable dielectric materials include, but are not limited to, oxides of silicon (e.g., silicon dioxide (SiO2)), doped oxides of silicon, fluorinated oxides of silicon, carbon-doped oxides of silicon, various low-k dielectric materials known in the art, and combinations thereof. The interlayer dielectric material may be formed by techniques such as chemical vapor deposition (CVD), physical vapor deposition (PVD), or other deposition methods.

In an embodiment, as is also used throughout the present description, metal line or interconnect line material (and via material) is composed of one or more metals or other conductive structures. A common example is the use of copper lines and structures that may or may not include barrier layers between the copper and the surrounding ILD material. As used herein, the term metal includes alloys, stacks, and other combinations of multiple metals. For example, the metal interconnect lines may include barrier layers (e.g., layers including one or more of Ta, TaN, Ti, or TiN), stacks of different metals or alloys, and so on. Thus, an interconnect line may be a single material layer, or may be formed from several layers, including conductive liner layers and fill layers. Any suitable deposition process, such as electroplating, chemical vapor deposition, or physical vapor deposition, may be used to form the interconnect lines. In an embodiment, the interconnect lines are composed of a conductive material such as, but not limited to, Cu, Al, Ti, Zr, Hf, V, Ru, Co, Ni, Pd, Pt, W, Ag, Au, or alloys thereof. The interconnect lines are also sometimes referred to in the art as traces, wires, lines, metal, or simply interconnects.

In an embodiment, as is also used throughout the present description, hardmask materials are composed of dielectric materials different from the interlayer dielectric material. In one embodiment, different hardmask materials may be used in different regions so as to provide different growth or etch selectivity to each other and to the underlying dielectric and metal layers. In some embodiments, a hardmask layer includes a layer of a nitride of silicon (e.g., silicon nitride) or a layer of an oxide of silicon, or both, or a combination thereof. Other suitable materials may include carbon-based materials. In another embodiment, a hardmask material includes a metal species. For example, a hardmask or other overlying material may include a layer of a nitride of titanium or another metal (e.g., titanium nitride). Potentially lesser amounts of other materials, such as oxygen, may be included in one or more of these layers. Alternatively, other hardmask layers known in the art may be used, depending upon the particular implementation.
The hardmask layers may be formed by CVD, PVD, or other deposition methods.

In an embodiment, as is also used throughout the present description, lithographic operations are performed using 193 nm immersion lithography (i193), extreme ultraviolet (EUV) lithography, or electron beam direct write (EBDW) lithography, or the like. A positive tone or a negative tone resist may be used. In one embodiment, a lithographic mask is a trilayer mask composed of a topographic masking portion, an anti-reflective coating (ARC) layer, and a photoresist layer. In a particular such embodiment, the topographic masking portion is a carbon hardmask (CHM) layer and the anti-reflective coating layer is a silicon ARC layer.

It should be understood that not all aspects of the processes described above need be practiced to fall within the spirit and scope of embodiments of the present disclosure. For example, in one embodiment, dummy gates need not ever be formed prior to fabricating gate contacts over active portions of the gate stacks. The gate stacks described above may actually be permanent gate stacks as initially formed. Also, the processes described herein may be used to fabricate one or a plurality of semiconductor devices. The semiconductor devices may be transistors or like devices. For example, in an embodiment, the semiconductor devices are metal oxide semiconductor (MOS) transistors for logic or memory, or are bipolar transistors. Also, in an embodiment, the semiconductor devices have a three-dimensional architecture, such as a tri-gate device, an independently accessed double gate device, a FIN-FET, a nanowire device, or a nanoribbon device. One or more embodiments may be particularly useful for fabricating semiconductor devices at a sub-10 nanometer (10 nm) technology node.

Additional or intermediate operations for FEOL layer or structure fabrication may include standard microelectronic fabrication processes such as lithography, etch, thin film deposition, planarization (e.g., chemical mechanical polishing (CMP)), diffusion, metrology, the use of sacrificial layers, the use of etch stop layers, the use of planarization stop layers, or any other action associated with microelectronic component fabrication. Also, it should be understood that the process operations described for the preceding process flows may be practiced in alternative sequences, that not every operation need be performed, or that additional process operations may be performed, or both.

It should be appreciated that, in the above exemplary FEOL embodiments, 10 nanometer or sub-10 nanometer node processing is implemented directly into the fabrication schemes and resulting structures as technology drivers. In other embodiments, FEOL considerations may be driven by BEOL 10 nanometer or sub-10 nanometer processing requirements. For example, material selection and layouts for FEOL layers and devices may need to accommodate BEOL processing. In one such embodiment, material selection and gate stack architectures are selected to accommodate high density metallization of the BEOL layers, e.g., to reduce fringe capacitance in transistor structures formed in the FEOL layers but coupled together by the high density metallization of the BEOL layers.

The embodiments described herein may be used to fabricate a wide variety of different types of integrated circuits or microelectronic devices.
Examples of such integrated circuits include, but are not limited to, processors, chipset components, graphics processors, digital signal processors, microcontrollers, and the like. In other embodiments, semiconductor memory may be fabricated. Moreover, the integrated circuits or other microelectronic devices may be used in a wide variety of electronic devices known in the art, for example, in computer systems (e.g., desktop, laptop, server), cellular phones, personal electronics, and so on. The integrated circuits may be coupled with a bus and other components in a system. For example, a processor may be coupled by one or more buses to a memory, a chipset, and so on. Each of the processor, the memory, and the chipset may potentially be fabricated using the approaches disclosed herein.

FIG. 8A shows a computing device 800A in accordance with one embodiment of the present disclosure. The computing device 800A houses a board 802A. The board 802A may include a number of components, including but not limited to a processor 804A and at least one communication chip 806A. The processor 804A is physically and electrically coupled to the board 802A. In some implementations, the at least one communication chip 806A is also physically and electrically coupled to the board 802A. In further implementations, the communication chip 806A is part of the processor 804A.

Depending on its applications, the computing device 800A may include other components that may or may not be physically and electrically coupled to the board 802A. These other components include, but are not limited to, volatile memory (e.g., DRAM), non-volatile memory (e.g., ROM), flash memory, a graphics processor, a digital signal processor, a crypto processor, a chipset, an antenna, a display, a touchscreen display, a touchscreen controller, a battery, an audio codec, a video codec, a power amplifier, a global positioning system (GPS) device, a compass, an accelerometer, a gyroscope, a speaker, a camera, and a mass storage device (e.g., hard disk drive, compact disc (CD), digital versatile disc (DVD), and so forth).

The communication chip 806A enables wireless communications for the transfer of data to and from the computing device 800A. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip 806A may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device 800A may include a plurality of communication chips 806A. For instance, a first communication chip 806A may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip 806A may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.

The processor 804A of the computing device 800A includes an integrated circuit die packaged within the processor 804A.
In some implementations of embodiments of the present disclosure, the integrated circuit die of the processor 804A includes one or more structures, such as integrated circuit structures built in accordance with embodiments of the present disclosure. The term "processor" may refer to any device or portion of a device that processes electronic data from registers or memory, or both, to transform that electronic data into other electronic data that may be stored in registers or memory, or both.

The communication chip 806A also includes an integrated circuit die packaged within the communication chip 806A. In accordance with another embodiment of the present disclosure, the integrated circuit die of the communication chip 806A is built in accordance with embodiments of the present disclosure.

In further implementations, another component housed within the computing device 800A may contain an integrated circuit die built in accordance with implementations of embodiments of the present disclosure.

In various implementations, the computing device 800A may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder. In further implementations, the computing device 800A may be any other electronic device that processes data.

FIG. 8B illustrates an interposer 800B that includes one or more embodiments of the present disclosure. The interposer 800B is an intervening substrate used to bridge a first substrate 802B to a second substrate 804B. The first substrate 802B may be, for instance, an integrated circuit die. The second substrate 804B may be, for instance, a memory module, a computer motherboard, or another integrated circuit die. Generally, the purpose of the interposer 800B is to spread a connection to a wider pitch or to reroute a connection to a different connection. For example, the interposer 800B may couple an integrated circuit die to a ball grid array (BGA) 806B that can subsequently be coupled to the second substrate 804B. In some embodiments, the first and second substrates 802B/804B are attached to opposing sides of the interposer 800B. In other embodiments, the first and second substrates 802B/804B are attached to the same side of the interposer 800B. In further embodiments, three or more substrates are interconnected by way of the interposer 800B.

The interposer 800B may be formed of an epoxy resin, a fiberglass-reinforced epoxy resin, a ceramic material, or a polymer material such as polyimide. In further implementations, the interposer 800B may be formed of alternate rigid or flexible materials that may include the same materials described above for use in a semiconductor substrate, such as silicon, germanium, and other group III-V and group IV materials.

The interposer 800B may include metal interconnects 808B and vias 810B, including but not limited to through-silicon vias (TSVs) 812B. The interposer 800B may further include embedded devices 814B, including both passive and active devices. Such devices include, but are not limited to, capacitors, decoupling capacitors, resistors, inductors, fuses, diodes, transformers, sensors, and electrostatic discharge (ESD) devices. More complex devices, such as radio frequency (RF) devices, power amplifiers, power management devices, antennas, arrays, sensors, and MEMS devices, may also be formed on the interposer 800B.
In accordance with embodiments of the present disclosure, apparatuses or processes disclosed herein may be used in the fabrication of the interposer 800B, or in the fabrication of components included in the interposer 800B.

FIG. 9 is an isometric view of a mobile computing platform 900 employing an integrated circuit (IC) fabricated according to one or more processes described herein, or including one or more features described herein, in accordance with an embodiment of the present disclosure.

The mobile computing platform 900 may be any portable device configured for each of electronic data display, electronic data processing, and wireless electronic data transmission. For example, the mobile computing platform 900 may be any of a tablet, a smartphone, a laptop computer, etc., and includes a display screen 905, an integrated system 910 at the chip level (SoC) or package level, and a battery 913. In an exemplary embodiment, the display screen 905 is a touchscreen (capacitive, inductive, resistive, etc.). As illustrated, the greater the level of integration in the integrated system 910 enabled by higher transistor packing density, the greater the portion of the mobile computing platform 900 that may be occupied by the battery 913 or by non-volatile storage such as a solid state drive, or the greater the transistor gate count for improved platform functionality. Similarly, the greater the carrier mobility of each transistor in the integrated system 910, the greater the functionality. As such, the techniques described herein may enable performance and form factor improvements in the mobile computing platform 900.

The integrated system 910 is further illustrated in the expanded view 920. In the exemplary embodiment, the packaged device 977 includes at least one memory chip (e.g., RAM) or at least one processor chip (e.g., a multi-core microprocessor and/or graphics processor). The packaged device 977 is further coupled to the board 960, along with one or more of a power management integrated circuit (PMIC) 915, an RF (wireless) integrated circuit (RFIC) 925 including a wideband RF (wireless) transmitter and/or receiver (e.g., including a digital baseband, and an analog front end module that further includes a power amplifier on a transmit path and a low noise amplifier on a receive path), and a controller 911 thereof. Functionally, the PMIC 915 performs battery power regulation, DC-to-DC conversion, etc., and so has an input coupled to the battery 913 and an output providing a current supply to all the other functional modules. As further illustrated, in the exemplary embodiment, the RFIC 925 has an output coupled to an antenna to provide implementation of any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. In alternative implementations, each of these board-level modules may be integrated onto separate ICs coupled to the package substrate of the packaged device 977, or integrated within a single IC (SoC) coupled to the package substrate of the packaged device 977.

In another aspect, semiconductor packages are used for protecting an integrated circuit (IC) chip or die, and also for providing the die with an electrical interface to external circuitry. With the increasing demand for smaller electronic devices, semiconductor packages are designed to be even more compact and must support larger circuit density.
Additionally, the need for higher performance devices has resulted in a need for improved semiconductor packages that enable a thin packaging profile and low overall warpage compatible with subsequent assembly processing.

In an embodiment, wire bonding to a ceramic or organic package substrate is used. In another embodiment, a C4 process is used to mount a die to a ceramic or organic package substrate. In particular, C4 solder ball connections may be implemented to provide flip chip interconnections between semiconductor devices and substrates. A flip chip or controlled collapse chip connection (C4) is a type of mounting used for semiconductor devices, such as integrated circuit (IC) chips, MEMS, or components, which utilizes solder bumps instead of wire bonds. The solder bumps are deposited on the C4 pads, located on the top side of the substrate package. In order to mount the semiconductor device to the substrate, it is flipped over with the active side facing down on the mounting area. The solder bumps are used to connect the semiconductor device directly to the substrate.

FIG. 10 shows a cross-sectional view of a flip-chip mounted die, in accordance with an embodiment of the present disclosure.

Referring to FIG. 10, an apparatus 1000 includes a die 1002, such as an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein. Metallization pads 1004 are included on the die 1002. A package substrate 1006, such as a ceramic or organic substrate, includes connections 1008 thereon. The die 1002 and the package substrate 1006 are electrically connected by solder balls 1010 coupled to the metallization pads 1004 and the connections 1008. An underfill material 1012 surrounds the solder balls 1010.

Processing a flip chip may be similar to conventional IC fabrication, with a few additional operations. Near the end of the manufacturing process, the attachment pads are metalized to make them more receptive to solder. This typically consists of several treatments. A small dot of solder is then deposited on each metalized pad. The chips are then cut out of the wafer as normal. To attach the flip chip into a circuit, the chip is inverted to bring the solder dots down onto connectors on the underlying electronics or circuit board. The solder is then re-melted to produce an electrical connection, typically using an ultrasonic or, alternatively, a reflow solder process. This also leaves a small space between the chip's circuitry and the underlying mounting. In most cases, an electrically-insulating adhesive is then "underfilled" to provide a stronger mechanical connection, provide a heat bridge, and to ensure that the solder joints are not stressed due to differential heating of the chip and the rest of the system.

In other embodiments, in accordance with embodiments of the present disclosure, newer packaging and die-to-die interconnect approaches, such as through-silicon vias (TSVs) and silicon interposers, are implemented to fabricate
high performance multi-chip modules (MCMs) and systems in packages (SiPs) incorporating an integrated circuit (IC) fabricated according to one or more processes described herein or including one or more features described herein.

Thus, embodiments of the present disclosure include integrated circuit structures having condensed source or drain structures with a high germanium content, and methods of fabricating integrated circuit structures having condensed source or drain structures with a high germanium content.

Although specific embodiments have been described above, these embodiments are not intended to limit the scope of the present disclosure, even where only a single embodiment is described with respect to a particular feature. Examples of features provided in the disclosure are intended to be illustrative rather than restrictive unless stated otherwise. The above description is intended to cover such alternatives, modifications, and equivalents as would be apparent to a person skilled in the art having the benefit of the present disclosure.

The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of the present application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims, and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.

The following examples pertain to further embodiments. The various features of the different embodiments may be variously combined with some features included and others excluded to suit a variety of different applications.

Exemplary Embodiment 1: An integrated circuit structure includes a vertical arrangement of horizontal nanowires. A gate stack surrounds the vertical arrangement of horizontal nanowires. A first epitaxial source or drain structure is at a first end of the vertical arrangement of horizontal nanowires. A second epitaxial source or drain structure is at a second end of the vertical arrangement of horizontal nanowires. Each of the first epitaxial source or drain structure and the second epitaxial source or drain structure includes silicon and germanium, wherein the atomic concentration of germanium is higher at a core of the epitaxial source or drain structure than
at a periphery of the epitaxial source or drain structure.

Exemplary Embodiment 2: The integrated circuit structure of Exemplary Embodiment 1, further including a first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer along first and second sides of the gate stack, respectively.

Exemplary Embodiment 3: The integrated circuit structure of Exemplary Embodiment 2, wherein the first epitaxial source or drain structure and the second epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer, respectively.

Exemplary Embodiment 4: The integrated circuit structure of Exemplary Embodiment 3, wherein each of the first epitaxial source or drain structure and the second epitaxial source or drain structure has a lowest atomic concentration of germanium at locations where the first epitaxial source or drain structure and the second epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer.

Exemplary Embodiment 5: The integrated circuit structure of Exemplary Embodiment 1, 2, 3, or 4, wherein each of the first epitaxial source or drain structure and the second epitaxial source or drain structure further includes boron.

Exemplary Embodiment 6: The integrated circuit structure of Exemplary Embodiment 1, 2, 3, 4, or 5, further including a first conductive contact on the first epitaxial source or drain structure, and a second conductive contact on the second epitaxial source or drain structure.

Exemplary Embodiment 7: An integrated circuit structure includes a vertical arrangement of horizontal nanowires. A gate stack surrounds the vertical arrangement of horizontal nanowires. A first condensed epitaxial source or drain structure is at a first end of the vertical arrangement of horizontal nanowires. A second condensed epitaxial source or drain structure is at a second end of the vertical arrangement of horizontal nanowires.
Each of the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure includes silicon and germanium, wherein a lowest atomic concentration of germanium is at free surfaces of the epitaxial source or drain structure.

Exemplary Embodiment 8: The integrated circuit structure of Exemplary Embodiment 7, further including a first dielectric gate sidewall spacer and a second dielectric gate sidewall spacer along first and second sides of the gate stack, respectively.

Exemplary Embodiment 9: The integrated circuit structure of Exemplary Embodiment 8, wherein the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer, respectively.

Exemplary Embodiment 10: The integrated circuit structure of Exemplary Embodiment 9, wherein the free surfaces are at locations where the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure adjoin the first dielectric gate sidewall spacer and the second dielectric gate sidewall spacer.

Exemplary Embodiment 11: The integrated circuit structure of Exemplary Embodiment 7, 8, 9, or 10, wherein each of the first condensed epitaxial source or drain structure and the second condensed epitaxial source or drain structure further includes boron.

Exemplary Embodiment 12: A computing device includes a board and a component coupled to the board. The component includes an integrated circuit structure. The integrated circuit structure includes a vertical arrangement of horizontal nanowires. A gate stack surrounds the vertical arrangement of horizontal nanowires. A first epitaxial source or drain structure is at a first end of the vertical arrangement of horizontal nanowires. A second epitaxial source or drain structure is at a second end of the vertical arrangement of horizontal nanowires. Each of the first epitaxial source or drain structure and the second epitaxial source or drain structure includes silicon and germanium, wherein the atomic concentration of germanium is higher at a core of the epitaxial source or drain structure than
at a periphery of the epitaxial source or drain structure.

Exemplary Embodiment 13: The computing device of Exemplary Embodiment 12, further including a memory coupled to the board.

Exemplary Embodiment 14: The computing device of Exemplary Embodiment 12 or 13, further including a communication chip coupled to the board.

Exemplary Embodiment 15: The computing device of Exemplary Embodiment 12, 13, or 14, further including a camera coupled to the board.

Exemplary Embodiment 16: The computing device of Exemplary Embodiment 12, 13, 14, or 15, further including a battery coupled to the board.

Exemplary Embodiment 17: The computing device of Exemplary Embodiment 12, 13, 14, 15, or 16, further including an antenna coupled to the board.

Exemplary Embodiment 18: The computing device of Exemplary Embodiment 12, 13, 14, 15, 16, or 17, wherein the component is a packaged integrated circuit die.

Exemplary Embodiment 19: The computing device of Exemplary Embodiment 12, 13, 14, 15, 16, 17, or 18, wherein the component is selected from the group consisting of a processor, a communication chip, and a digital signal processor.

Exemplary Embodiment 20: The computing device of Exemplary Embodiment 12, 13, 14, 15, 16, 17, 18, or 19, wherein the computing device is selected from the group consisting of a mobile phone, a laptop computer, a desktop computer, a server, and a set-top box.
Binary hysteresis equal comparator circuits and methods. An equal comparator does not indicate an equal condition until the two binary input values are exactly the same. However, after the two binary input values first become equal, a window of variation comes into effect, within which the first of the two values is allowed to vary while the circuit continues to report an equal condition. This window can extend only above the equal condition, only below the equal condition, or both above and below the equal condition. The width of the window is determined by providing one or two predetermined constant values: a first constant defining the amount of hysteresis provided above the second value, and a second constant defining the amount of hysteresis provided below the second value. Related methods of performing equal comparisons while providing binary hysteresis are also described.
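To make the described window behavior concrete, the following is a minimal behavioral sketch in Python of the circuit outlined in the claims that follow. It is an illustration under stated assumptions, not the patented implementation: it assumes an 8-bit unsigned data path, hysteresis constants of binary one (consistent with claims 4, 6, and 15), a NOR as the logic gate combining the greater-than and less-than comparator outputs, and a particular multiplexer select polarity; the class and method names are hypothetical.

class HysteresisEqualComparator:
    """Behavioral model: an A-equals-B comparator, A-greater-than-B and
    A-less-than-B comparators fed through hysteresis circuits, a logic gate,
    a multiplexer, and a one-bit memory element holding the reported state."""

    def __init__(self, width=8, k_above=1, k_below=1):
        self.max_val = (1 << width) - 1  # largest representable value
        self.k_above = k_above           # hysteresis extent above the equal point
        self.k_below = k_below           # hysteresis extent below the equal point
        self.equal = False               # memory element output (circuit output)

    def clock(self, a, b):
        """One clocked evaluation; returns the registered equal indication."""
        # Hysteresis circuits with overflow prevention: pass B unchanged if
        # adding (subtracting) the constant would leave the representable range.
        upper = b + self.k_above if b + self.k_above <= self.max_val else b
        lower = b - self.k_below if b - self.k_below >= 0 else b

        exact = (a == b)                  # A-equals-B comparator output
        above = (a > upper)               # A-greater-than-B comparator output
        below = (a < lower)               # A-less-than-B comparator output
        in_window = not (above or below)  # logic gate (modeled here as a NOR)

        # Multiplexer: while an equal condition is stored, report the window
        # comparison; otherwise require exact equality. The memory element
        # registers the selected value.
        self.equal = in_window if self.equal else exact
        return self.equal

A short usage trace of this sketch: starting from unequal inputs, only an exact match latches the equal state, after which the first value may drift within the window.

comparator = HysteresisEqualComparator(width=8, k_above=1, k_below=1)
assert comparator.clock(10, 12) is False  # not yet exactly equal
assert comparator.clock(12, 12) is True   # exact match latches the equal state
assert comparator.clock(13, 12) is True   # still equal: within the +1 window
assert comparator.clock(14, 12) is False  # outside the window: state clears

Setting k_above or k_below to zero restricts the window to one side of the equal condition only, corresponding to a circuit with a single hysteresis circuit rather than the two of claim 11.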
1. A binary hysteresis comparator circuit, comprising:first and second multi-bit circuit input terminals;a circuit output terminal;an A-equals-B comparator having a multi-bit A input terminal coupled to the first circuit input terminal, a multi-bit B input terminal coupled to the second circuit input terminal, and an output terminal;an A-greater-than-B comparator having a multi-bit A input terminal coupled to the first circuit input terminal, a multi-bit B input terminal, and an output terminal;an A-less-than-B comparator having a multi-bit A input terminal coupled to the first circuit input terminal, a multi-bit B input terminal, and an output terminal;a logic gate having a first input terminal coupled to the output terminal of the A-greater-than-B comparator, a second input terminal coupled to the output terminal of the A-less-than-B comparator, and an output terminal;a first multiplexer circuit having a first data input terminal coupled to the output terminal of the A-equals-B comparator, a second data input terminal coupled to the output terminal of the logic gate, a select terminal, and an output terminal;a memory element having a data input terminal coupled to the output terminal of the first multiplexer circuit, and further having a data output terminal coupled to the circuit output terminal and to the select terminal of the first multiplexer circuit; anda first hysteresis circuit coupled between the second circuit input terminal and the B input terminal of one of the A-greater-than-B comparator and the A-less-than-B comparator.2. The binary hysteresis comparator circuit of claim 1, wherein the first hysteresis circuit comprises:an adder circuit having a multi-bit input terminal coupled to the second circuit input terminal and further having a multi-bit output terminal;a second multiplexer circuit having a first multi-bit data input terminal coupled to the output terminal of the adder circuit, a second multi-bit data input terminal coupled to the second circuit input terminal, a select terminal, and a multi-bit output terminal coupled to the B input terminal of the one of the A-greater-than-B comparator and the A-less-than-B comparator; andan overflow prevention circuit coupled between the second circuit input terminal and the select terminal of the second multiplexer circuit.3. The binary hysteresis comparator circuit of claim 2, wherein the one of the A-greater-than-B comparator and the A-less-than-B comparator comprises the A-greater-than-B comparator, and the adder circuit comprises an adder, wherein the adder adds a first constant to a value on the second circuit input terminal.4. The binary hysteresis comparator circuit of claim 3, wherein the first constant is a binary one.5. The binary hysteresis comparator circuit of claim 2, wherein the one of the A-greater-than-B comparator and the A-less-than-B comparator comprises the A-less-than-B comparator, and the adder circuit comprises a subtractor, wherein the subtractor subtracts a second constant from a value on the second circuit input terminal.6. The binary hysteresis comparator circuit of claim 5, wherein the second constant is a binary one.7. 
The binary hysteresis comparator circuit of claim 1, wherein the first hysteresis circuit comprises:a second multiplexer circuit having a first multi-bit data input terminal coupled to receive an all-zero value, a second multi-bit data input terminal coupled to receive a constant, a select terminal, and a multi-bit output terminal;an adder circuit having a first multi-bit input terminal coupled to the second circuit input terminal, a second multi-bit terminal coupled to the output terminal of the second multiplexer circuit, and a multi-bit output terminal coupled to the B input terminal of the one of the A-greater-than-B comparator and the A-less-than-B comparator; anda first overflow prevention circuit coupled between the second circuit input terminal and the select terminal of the second multiplexer circuit.8. The binary hysteresis comparator circuit of claim 7, wherein the constant is a binary one.9. The binary hysteresis comparator circuit of claim 7, wherein the one of the A-greater-than-B comparator and the A-less-than-B comparator comprises the A-greater-than-B comparator, and the adder circuit comprises an adder, wherein the adder adds a value provided by the second multiplexer circuit to a value on the second circuit input terminal.10. The binary hysteresis comparator circuit of claim 7, wherein the one of the A-greater-than-B comparator and the A-less-than-B comparator comprises the A-less-than-B comparator, and the adder circuit comprises a subtractor, wherein the subtractor subtracts a value provided by the second multiplexer circuit from a value on the second circuit input terminal.11. The binary hysteresis comparator circuit of claim 1, further comprising:a second hysteresis circuit coupled between the second circuit input terminal and the B input terminal of another of the A-greater-than-B comparator and the A-less-than-B comparator.12. The binary hysteresis comparator circuit of claim 11, wherein each of the first and second hysteresis circuits comprises:an adder circuit having a multi-bit input terminal coupled to the second circuit input terminal and further having a multi-bit output terminal;a second multiplexer circuit having a first multi-bit data input terminal coupled to the output terminal of the adder circuit, a second multi-bit data input terminal coupled to the second circuit input terminal, a select terminal, and a multi-bit output terminal coupled to the B input terminal of the one of the A-greater-than-B comparator and the A-less-than-B comparator; andan overflow prevention circuit coupled between the second circuit input terminal and the select terminal of the second multiplexer circuit.13. The binary hysteresis comparator circuit of claim 12, wherein:in the first hysteresis circuit, the adder circuit is an adder and the output terminal of the second multiplexer circuit is coupled to the B input terminal of the A-greater-than-B comparator, wherein the adder adds a first constant to a value on the second circuit input terminal; andin the second hysteresis circuit, the adder circuit is a subtractor and the output terminal of the second multiplexer circuit is coupled to the B input terminal of the A-less-than-B comparator, wherein the subtractor subtracts a second constant from the value on the second circuit input terminal.14. The binary hysteresis comparator circuit of claim 13, wherein the first and second constants are the same.15. The binary hysteresis comparator circuit of claim 14, wherein the first and second constants are both binary ones.16. 
The binary hysteresis comparator circuit of claim 11, wherein each of the first and second hysteresis circuits comprises:a second multiplexer circuit having a first multi-bit data input terminal coupled to receive an all-zero value, a second multi-bit data input terminal coupled to receive a constant, a select terminal, and a multi-bit output terminal;an adder circuit having a first multi-bit input terminal coupled to the second circuit input terminal, a second multi-bit terminal coupled to the output terminal of the second multiplexer circuit, and a multi-bit output terminal coupled to the B input terminal of the one of the A-greater-than-B comparator and the A-less-than-B comparator; andan overflow prevention circuit coupled between the second circuit input terminal and the select terminal of the second multiplexer circuit.17. The binary hysteresis comparator circuit of claim 16, wherein:in the first hysteresis circuit, the adder circuit is an adder and the output terminal of the adder circuit is coupled to the B input terminal of the A-greater-than-B comparator, wherein the adder adds a value provided by the second multiplexer circuit to a value on the second circuit input terminal; andin the second hysteresis circuit, the adder circuit is a subtractor and the output terminal of the adder circuit is coupled to the B input terminal of the A-less-than-B comparator, wherein the subtractor subtracts the value provided by the second multiplexer circuit from the value on the second circuit input terminal.18. The binary hysteresis comparator circuit of claim 16, wherein in each of the first and second hysteresis circuits the constants are equal.19. The binary hysteresis comparator circuit of claim 18, wherein in each of the first and second hysteresis circuits the constant is a binary one.20. The binary hysteresis comparator circuit of claim 1, wherein the logic gate is a logical NOR gate.21. The binary hysteresis comparator circuit of claim 1, wherein the memory element is a D-type flip-flop.22. The binary hysteresis comparator circuit of claim 1, further comprising:a first multi-bit register coupled between the first circuit input terminal and the A input terminals of the A-equals-B comparator, the A-greater-than-B comparator, and the A-less-than-B comparator, the first register having a clock input terminal; anda second multi-bit register coupled between the second circuit input terminal and the B input terminals of the A-equals-B comparator, the A-greater-than-B comparator, and the A-less-than-B comparator, the second register having a clock input terminal,wherein the memory element comprises a clock input terminal coupled to the clock input terminals of the first and second registers.23. The binary hysteresis comparator circuit of claim 22, further comprising an inverting logic gate coupled between the clock input terminals of the first and second registers and the clock input terminal of the memory element.24. The binary hysteresis comparator circuit of claim 22, wherein the first and second registers and the memory element each comprise a reset input terminal, and the reset input terminals of the first and second registers and the memory element are all coupled one to another.25. 
A method of performing an equal comparison between first and second binary values while providing binary hysteresis, the method comprising:reporting, when the first binary value has an initial value not equal to the second binary value, that the first and second binary values are not equal;reporting, when the first binary value assumes a value equal to the second binary value, that the first and second binary values are equal; andcontinuing to report, when the first binary input value assumes a first new value, that the first and second binary values are equal,wherein the first new value differs from the second binary value by a number not exceeding a predetermined constant.26. The method of claim 25, wherein the first new value is greater than the second binary value by the number.27. The method of claim 25, wherein the first new value is less than the second binary value by the number.28. The method of claim 25, further comprising:reporting, when the first binary value assumes a second new value differing from the second binary value by a number exceeding the predetermined constant, that the first and second binary values are not equal.29. The method of claim 25, wherein the predetermined constant is a binary one.30. A binary hysteresis comparator circuit performing an equal comparison between first and second binary values while providing binary hysteresis, the circuit comprising:means for reporting, when the first binary value has an initial value not equal to the second binary value, that the first and second binary values are not equal;means for reporting, when the first binary value assumes a value equal to the second binary value, that the first and second binary values are equal; andmeans for continuing to report, when the first binary input value assumes a new value, that the first and second binary values are equal,wherein the new value differs from the second binary value by a number not exceeding a predetermined constant.31. The circuit of claim 30, wherein the new value is greater than the second binary value by the number.32. The circuit of claim 30, wherein the new value is less than the second binary value by the number.33. The circuit of claim 30, wherein the predetermined constant is a binary one.34. A method of performing an equal comparison between first and second binary values while providing binary hysteresis, the method comprising:reporting, when the first binary value has an initial value not equal to the second binary value, that the first and second binary values are not equal;reporting, when the first binary value assumes a value equal to the second binary value, that the first and second binary values are equal;continuing to report, when the first binary input value increases to a first new value, that the first and second binary values are equal, wherein the first new value is greater than the second binary value by a first number not exceeding a first predetermined constant; andcontinuing to report, when the first binary input value decreases to a second new value, that the first and second binary values are equal, wherein the second new value is less than the second binary value by a second number not exceeding a second predetermined constant.35. The method of claim 34, wherein the first and second predetermined constants are the same.36. The method of claim 35, wherein the first and second predetermined constants are binary ones.37. 
The method of claim 34, further comprising:reporting, when the first binary input value increases to a third new value greater than the second binary value by a third number exceeding the first predetermined constant, that the first and second binary values are not equal.38. The method of claim 34, further comprising:reporting, when the first binary input value decreases to a fourth new value less than the second binary value by a fourth number exceeding the second predetermined constant, that the first and second binary values are not equal.39. A binary hysteresis comparator circuit performing an equal comparison between first and second binary values while providing binary hysteresis, the circuit comprising:means for reporting, when the first binary value has an initial value not equal to the second binary value, that the first and second binary values are not equal;means for reporting, when the first binary value assumes a value equal to the second binary value, that the first and second binary values are equal;means for continuing to report, when the first binary input value increases to a first new value, that the first and second binary values are equal, wherein the first new value is greater than the second binary value by a first number not exceeding a first predetermined constant; andmeans for continuing to report, when the first binary input value decreases to a second new value, that the first and second binary values are equal, wherein the second new value is less than the second binary value by a second number not exceeding a second predetermined constant.40. The circuit of claim 39, wherein the first and second predetermined constants are the same.41. The circuit of claim 40, wherein the first and second predetermined constants are binary ones.
FIELD OF THE INVENTIONThe invention relates to digital circuits that provide hysteresis. More particularly, the invention relates to equal comparator circuits providing binary hysteresis.BACKGROUND OF THE INVENTIONThe term "hysteresis" generally refers to the process of compensating for variations (e.g., "noise") in an input signal by adjusting the point at which a system reacts to the input signal. For example, in electrical circuits a rising signal can be detected at a first and higher voltage level (the "rising edge trip point"), while a falling signal can be detected at a second and lower voltage level (the "falling edge trip point"). FIGS. 1-3 are waveform diagrams that can be used to describe this type of hysteresis, which is referred to herein as "level hysteresis".FIG. 1 illustrates the process of an ideally clean input signal IN rising and falling, and its effect on an output signal OUT of inverter 101. Input signal IN rises linearly from a low value (e.g., a ground value) to a high value (e.g., power high VDD). Half-way through the rising edge, at time Tr, the voltage level on signal IN reaches the trip point tp and inverter 101 is triggered. Thus, the output signal OUT from inverter 101 begins to fall. Signal OUT also falls linearly in this ideal circuit, from the high value to the low value. After a time, input signal IN changes state again, falling linearly from the high value to the low value. Half-way through the falling edge, at time Tf, the voltage level on signal IN reaches the trip point tp and inverter 101 is triggered. Thus, the output signal OUT from inverter 101 begins to rise. Signal OUT also rises linearly in this ideal circuit, from the low value to the high value. Thus, signal OUT is a noise-free output signal ideally suited to drive other circuitry.FIG. 2 illustrates what happens to the idealized signals of FIG. 1 in a noisy signal environment. Both the rising and falling edges of signal IN are subject to sudden alterations that can momentarily cause the signal to rise above, then fall below, the trip point tp. Each time input signal IN rises above the trip point (e.g., at times T1, T3, and T5), output signal OUT changes from the high value to the low value. Each time input signal IN falls below the trip point (e.g., at times T2, T4, and T6), output signal OUT changes from the low value to the high value. The result is a noisy output signal OUT, as shown in FIG. 2.FIG. 3 illustrates the resulting waveforms when inverter 101 is replaced by a Schmitt trigger 301. Schmitt triggers are well known. For example, one Schmitt trigger is described by Hsieh in U.S. Reissue Patent No. Re. 34,808, "TTL/CMOS Compatible Input Buffer with Schmitt Trigger", which is incorporated herein by reference. A Schmitt trigger provides level hysteresis in the manner previously described, by providing different trip points for the rising and falling edges of the input signal. The rising edge trip point tpr is higher than the falling edge trip point tpf. Thus, the brief and limited negative movements in voltage level during the rising edge of input signal IN do not cause the output signal OUT to rise to the high value. Similarly, the brief and limited positive movements in voltage level during the falling edge of input signal IN do not cause the output signal OUT to fall to the low value. Hence, the circuit of FIG. 
3 is noise-immune, provided the extent of the noise does not exceed the protection provided by the difference in trip-points.Schmitt triggers can be very useful, when they are available. However, they do have their drawbacks in some applications. For example, Schmitt triggers are analog circuits that cannot readily be implemented in the digital programmable logic generally available in programmable logic devices (PLDs). PLDs typically provide arrays of digital logic elements that can be programmed to assume various configurations performing desired digital functions. However, analog functions typically cannot be implemented in a PLD unless they are deliberately included in the fabric of the PLD by the PLD designer and manufacturer.Therefore, it is desirable to provide digital circuits and methods of providing hysteresis, e.g., hysteresis circuits and methods that can be implemented in digital PLDs.SUMMARY OF THE INVENTIONThe invention provides binary hysteresis equal comparator circuits and methods. An equal comparator according to the present invention does not indicate an equal condition until the two binary input values are exactly the same. However, after the two binary input values first become exactly equal, a window of variation comes into effect, within which the first of the two values is allowed to vary while the circuit continues to report an equal condition. The window of allowable variation provides hysteresis to the first binary input value. This window can extend only above the equal condition, only below the equal condition, or both above and below the equal condition. The width of the window is determined by providing one or two predetermined constant values, a first predetermined constant defining the amount of hysteresis provided above the second value, and a second predetermined constant defining the amount of hysteresis provided below the second value. In some embodiments, the two constants are the same, e.g., a single constant is used to perform both functions. Further, because the equal comparator is evaluating the relationship between the first and second binary values, hysteresis is also provided to the second binary value.Applications for these circuits include, for example, control circuits in situations subject to signal noise. Exemplary digital circuits are easily implemented using the digital programmable elements provided in programmable logic devices (PLDs), for example. These circuits can be used, for example, in clocking circuits to compensate for variations in temperature and power supply.The invention also encompasses related methods of performing equal comparisons between first and second binary values while providing binary hysteresis.BRIEF DESCRIPTION OF THE DRAWINGSThe present invention is illustrated by way of example, and not by way of limitation, in the following figures.FIG. 1 is a waveform diagram of idealized input and output signals for an inverter.FIG. 2 is a waveform diagram illustrating input and output signals for an inverter in a noisy signal environment.FIG. 3 is a waveform diagram illustrating input and output signals for a Schmitt trigger in a noisy signal environment.FIG. 4 illustrates a first binary hysteresis equal comparator circuit according to one embodiment of the present invention.FIG. 5 illustrates an exemplary maximum overflow prevention circuit that can be used in the binary hysteresis comparator circuit of FIG. 4.FIG. 
6 illustrates an exemplary minimum overflow prevention circuit that can be used in the binary hysteresis comparator circuit of FIG. 4.FIG. 7 illustrates an exemplary A-equals-B comparator circuit that can be used in the binary hysteresis comparator circuit of FIG. 4.FIG. 8 illustrates an exemplary A-greater-than-B comparator circuit that can be used in the binary hysteresis comparator circuit of FIG. 4.FIG. 9 illustrates an exemplary A-less-than-B comparator circuit that can be used in the binary hysteresis comparator circuit of FIG. 4.FIG. 10 illustrates a second binary hysteresis equal comparator circuit according to one embodiment of the present invention.FIG. 11 illustrates a third binary hysteresis equal comparator circuit according to one embodiment of the present invention.FIG. 12 illustrates a fourth binary hysteresis equal comparator circuit according to one embodiment of the present invention.FIG. 13 illustrates the steps of a first exemplary method of performing an equal comparison between first and second binary values while providing binary hysteresis.FIG. 14 illustrates the steps of a second exemplary method of performing an equal comparison between first and second binary values while providing binary hysteresis.DETAILED DESCRIPTION OF THE DRAWINGSIn the following description, numerous specific details are set forth to provide a more thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention can be practiced without these specific details.FIG. 4 illustrates a first equal comparator circuit having binary hysteresis, according to one embodiment of the present invention. "Binary hysteresis" differs from "level hysteresis" in that instead of providing two different voltage trip points for the rising and falling edges of an input signal, two different values are used when comparing increasing and decreasing binary values.The circuit of FIG. 4 includes two registers 401-402, two adder circuits 403 and 405, two overflow prevention circuits 404 and 406, three multiplexer circuits 407-408 and 414, a NOR gate 413, a D-type flip-flop 415, and three comparator circuits 410-412. Because the circuit compares two binary values, the two input signals AIN and BIN are each multi-bit signals. (In the present specification, the same reference characters are used to refer to terminals, signal lines, and their corresponding signals.) In the pictured embodiment, input signals AIN and BIN are each M bits wide, where M is an integer. Similarly, many of the elements in FIG. 4 and the other figures herein represent M-bit-wide circuits, as will be clear to those of skill in the art on perusal of the figures.M-bit register 401 registers data provided by M-bit input terminal AIN, and provides M-bit signal RAIN. Similarly, M-bit register 402 registers data provided by M-bit input terminal BIN, and provides M-bit signal RBIN. Registers 401-402 are both clocked by signal CLK and reset by signal RST."+w" (plus w) adder circuit 403 takes the value of signal RBIN and adds a constant w, providing the resulting value to the "0" data terminal of multiplexer circuit 407. Signal RBIN is also provided to the "1" data terminal of multiplexer circuit 407. Maximum overflow prevention circuit 404 is driven by signal RBIN and provides a single-bit output signal MAX to the select terminal of multiplexer circuit 407. Multiplexer circuit 407 drives the B input terminal of A>B comparator 411, while the A input terminal is driven by signal RAIN. 
The output terminal of A>B comparator 411 provides signal A-GT-B."-x" (minus x) adder circuit 405 (a subtractor) takes the value of signal RBIN and subtracts a constant x, providing the resulting value to the "0" data terminal of multiplexer circuit 408. Signal RBIN is also provided to the "1" data terminal of multiplexer circuit 408. Minimum overflow prevention circuit 406 is driven by signal RBIN and provides a single-bit output signal MIN to the select terminal of multiplexer circuit 408. Multiplexer circuit 408 drives the B input terminal of A<B comparator 412, while the A input terminal is driven by signal RAIN. The output terminal of A<B comparator 412 provides signal A-LT-B.Signals A-GT-B and A-LT-B are combined in NOR gate 413, which drives the "1" data terminal of multiplexer circuit 414. Signal A-EQ-B is provided to the "0" data terminal of multiplexer circuit 414. Multiplexer circuit 414 drives the data input terminal D of flip-flop 415. Clock signal CLK is inverted by inverter 409 and provided to the clock input terminal CK of flip-flop 415. Reset signal RST is provided to the reset terminal R of flip-flop 415. The output terminal OUT-EQ of flip-flop 415 provides circuit output signal OUT-EQ and is coupled to the select terminal of multiplexer circuit 414.Broadly speaking, the circuit of FIG. 4 includes three comparator circuits. A first comparator circuit 410 checks for equal values between the registered versions (RAIN and RBIN) of signals AIN and BIN. When the two values of signals RAIN and RBIN are equal, signal A-EQ-B is high; otherwise, signal A-EQ-B is low. A second comparator circuit 411 utilizes a first hysteresis circuit (comprising elements 403-404 and 407) to check for a condition in which signal RAIN is greater than signal RBIN by more than a predetermined constant w. When this condition is satisfied, signal A-GT-B is high; otherwise, signal A-GT-B is low. A third comparator circuit 412 utilizes a second hysteresis circuit (comprising elements 405-406 and 408) to check for a condition in which signal RAIN is less than signal RBIN by more than a predetermined constant x. When this condition is satisfied, signal A-LT-B is high; otherwise, signal A-LT-B is low.The circuit of FIG. 4 operates as follows. Initially, flip-flop 415 is reset. Thus, signal OUT-EQ is low, and the value of A-EQ-B is passed to the data input terminal D of flip-flop 415. Thus, signal OUT-EQ does not go high until signals RAIN and RBIN are exactly equal (and, in the pictured embodiment, an active edge is received on clock signal CK).Assume first that signal RAIN increases from the value of RBIN to a value greater than RBIN. Adder circuit 403 adds a value of w to signal RBIN. Therefore, the output of adder circuit 403 is RBIN+w. Signal MAX is high only when adding the constant w to signal RBIN causes adder circuit 403 to exceed the maximum value that can be represented by M bits. When signal MAX is low, multiplexer circuit 407 selects the signal RBIN+w and provides this value to the B input terminal of comparator 411. Thus, signal A-GT-B will go high only when signal RAIN is greater than signal RBIN by a value greater than w.When signal MAX is high, multiplexer circuit 407 selects the signal RBIN and provides this value to the B input terminal of comparator 411. 
Thus, as the value of signal RBIN approaches within a value w of the maximum value that can be represented by M bits, signal A-GT-B represents the simple "greater-than" value, without hysteresis.When signal A-GT-B goes high, NOR gate 413 provides a low value to the "1" data input terminal of multiplexer circuit 414. Because signal OUT-EQ is high, this low value is passed to the data input terminal D of flip-flop 415. At the next active edge of clock signal CK, signal OUT-EQ goes low again.Secondly, assume that signal RAIN decreases from the value of RBIN to a value less than RBIN. Adder circuit 405 subtracts a value of x from signal RBIN. Therefore, the output of adder circuit 405 is RBIN-x. Signal MIN is high only when subtracting the constant x from signal RBIN causes adder circuit 405 to produce a negative result. When signal MIN is low, multiplexer circuit 408 selects the signal RBIN-x and provides this value to the B input terminal of comparator 412. Thus, signal A-LT-B will go high only when signal RAIN is less than signal RBIN by a value greater than x.When signal MIN is high, multiplexer circuit 408 selects the signal RBIN and provides this value to the B input terminal of comparator 412. Thus, as the value of signal RBIN approaches within a value x of zero, signal A-LT-B represents the simple "less-than" value, without hysteresis.When signal A-LT-B goes high, NOR gate 413 provides a low value to the "1" data input terminal of multiplexer circuit 414. Because signal OUT-EQ is high, this low value is passed to the data input terminal D of flip-flop 415. At the next active edge of clock signal CK, signal OUT-EQ goes low again.FIG. 5 illustrates an exemplary embodiment of overflow prevention circuit 404 that can be used, for example, in the binary hysteresis comparator circuit of FIG. 4. In the embodiment of FIG. 5, M is four and the constant w is a binary one (0 . . . 01). (In other embodiments, M and/or w have other values.) Thus, signal MAX needs to be high only when adding one to signal RBIN causes adder circuit 403 to roll over from all ones to all zeros. Thus, the circuit of FIG. 5 checks for a value of all ones on signal RBIN. In some embodiments, overflow prevention circuit 404 checks for a value of RBIN equal to a value of ("all ones"-w+1). In other embodiments, overflow prevention circuit 404 checks for a value of RBIN greater than or equal to a value of ("all ones"-w+1).The circuit of FIG. 5 comprises a NAND gate 501 driving an inverter 502, which provides signal MAX. Signals RBIN[3]-RBIN[0] are provided to the input terminals of NAND-gate 501.Clearly, the circuit of FIG. 5 can be implemented in many different ways. For example, in another embodiment in which M is four and w is one, the overflow prevention circuit is similar to the circuit of FIG. 5, but with inverter 502 omitted. Signal MAX is replaced by signal MAX-B, which is low when signal RBIN is all ones. Thus, the "0" and "1" data input terminals of multiplexer circuit 407 are reversed.FIG. 6 illustrates an exemplary embodiment of overflow prevention circuit 406 that can be used, for example, in the binary hysteresis comparator circuit of FIG. 4. In the embodiment of FIG. 6, M is four and the constant x is one. (In other embodiments, M and/or x have other values.) Thus, signal MIN needs to be high only when subtracting one from signal RBIN causes adder circuit 405 to roll over from all zeros to all ones. Thus, the circuit of FIG. 6 checks for a value of all zeros on signal RBIN. 
In some embodiments, overflow prevention circuit 406 checks for a value of RBIN equal to a value of ("all zeros"+x-1), or (x-1). In other embodiments, overflow prevention circuit 406 checks for a value of RBIN less than or equal to a value of ("all zeros"+x-1), or (x-1).The circuit of FIG. 6 comprises a NOR gate 601 that provides signal MIN. Signals RBIN[3]-RBIN[0] are provided to the input terminals of NOR-gate 601.FIG. 7 illustrates an exemplary embodiment of A-equals-B comparator 410 that can be used, for example, in the binary hysteresis comparator circuit of FIG. 4. Note that any appropriate equal comparator can be used to implement circuit 410 in FIG. 4; the implementation shown in FIG. 7 is merely exemplary. In the embodiment of FIG. 7, M is four. The circuit includes XNOR (exclusive NOR) gates 701-704, NAND gate 705, and inverter 706.Signal A-NEQ-B is provided by NAND gate 705, which is driven by XNOR gates 701-704. Each of XNOR gates 701-704 compares two corresponding bits from the two 4-bit input signals A and B. XNOR gate 701 is driven by signals B[3] and A[3], XNOR gate 702 is driven by signals B[2] and A[2], and so forth. If any pair of corresponding bits includes two different values, the associated XNOR gate provides a low signal to NAND gate 705, and signal A-NEQ-B goes high. Signal A-NEQ-B is inverted by inverter 706 to provide output signal O3 (A-EQ-B).In another embodiment (not shown), the circuit of FIG. 7 is implemented using four XOR gates instead of XNOR gates 701-704. NAND gate 705 is replaced by a NOR gate, while node A-NEQ-B and inverter 706 are omitted.FIG. 8 illustrates an exemplary embodiment of A-greater-than-B comparator 411 that can be used, for example, in the binary hysteresis equal comparator circuit of FIG. 4. Note that any appropriate greater-than comparator can be used to implement circuit 411 in FIG. 4; the implementation shown in FIG. 8 is merely exemplary. In the embodiment of FIG. 8, M is four. The circuit of FIG. 8 includes inverters 801-808, NAND gates 811-819, and NOR gate 821.NAND gate 814 is driven by signal A[3], the most significant bit of signal A, and by signal B[3], the most significant bit of signal B, inverted by inverter 801. NAND gate 814 drives NAND gate 819, which provides the comparator output signal O2. Signal O2 (A-GT-B) is high whenever the binary value of signal A is greater than the binary value of signal B.NAND gate 811 is driven by signal B[3] and by signal A[3] inverted by inverter 805. NAND gate 815 is driven by NAND gate 811, signal A[2], and signal B[2] inverted by inverter 802. NAND gate 815 also drives NAND gate 819.NAND gate 812 is driven by signal B[2] and by signal A[2] inverted by inverter 806. NAND gate 816 is driven by NAND gate 811, NAND gate 812, signal A[1], and signal B[1] inverted by inverter 803. NAND gate 816 also drives NAND gate 819.NAND gate 813 is driven by signal B[1] and by signal A[1] inverted by inverter 807. NAND gate 817 is driven by NAND gate 811 and NAND gate 812. NAND gate 818 is driven by NAND gate 813, signal A[0], and signal B[0] inverted by inverter 804. NOR gate 821 is driven by NAND gates 817 and 818, and drives inverter 808, which also drives NAND gate 819. Note that inverter 808, NOR gate 821, and NAND gates 817 and 818 together implement a 5-input NAND gate NAND5.FIG. 9 illustrates an exemplary embodiment of A-less-than-B comparator 412 that can be used, for example, in the binary hysteresis equal comparator circuit of FIG. 4. 
Note that any appropriate less-than comparator can be used to implement circuit 412 in FIG. 4; the implementation shown in FIG. 9 is merely exemplary. In the embodiment of FIG. 9, M is four. The circuit of FIG. 9 is the same as the circuit of FIG. 8, but with the A and B input signals reversed. Thus, the circuit is not described here. Signal O1 (A-LT-B) is high whenever the binary value of signal A is less than the binary value of signal B.The embodiment of FIG. 4 provides binary hysteresis to signal RAIN whenever RAIN is either increasing or decreasing in value. However, binary hysteresis can also be provided only for increasing values, or only for decreasing values. FIG. 10 illustrates an equal comparator circuit providing binary hysteresis only for values of RAIN that are increasing in value. (Note that the circuit also provides hysteresis for values of RBIN that are decreasing in value.)The circuit of FIG. 10 is the same as the circuit of FIG. 4, except that adder circuit 405, overflow prevention circuit 406, and multiplexer circuit 408 are removed. Signal RBIN is provided directly to the B input terminal of A-less-than-B comparator 412. Thus, circuit output signal OUT-EQ goes low as soon as signal RAIN decreases to a value less than RBIN. However, signal OUT-EQ does not go low in response to an increasing value for signal RAIN until signal RAIN is greater than signal RBIN by more than constant w.FIG. 11 illustrates an equal comparator circuit providing binary hysteresis only for values of RAIN that are decreasing in value. (Note that the circuit also provides hysteresis for values of RBIN that are increasing in value.) The circuit of FIG. 11 is the same as the circuit of FIG. 4, except that adder circuit 403, overflow prevention circuit 404, and multiplexer circuit 407 are removed. Signal RBIN is provided directly to the B input terminal of A-greater-than-B comparator 411. Thus, circuit output signal OUT-EQ goes low as soon as signal RAIN increases to a value greater than RBIN. However, signal OUT-EQ does not go low in response to a decreasing value for signal RAIN until signal RAIN is less than signal RBIN by more than constant x.FIG. 12 illustrates another binary hysteresis circuit according to another embodiment of the invention. The circuit of FIG. 12 is similar to the circuit of FIG. 4, except that the hysteresis circuits are implemented in a different way. The first hysteresis circuit of FIG. 4 (comprising elements 403-404 and 407) is replaced by a new hysteresis circuit comprising elements 1203-1204 and 1207. The second hysteresis circuit of FIG. 4 (comprising elements 405-406 and 408) is replaced by a new hysteresis circuit comprising elements 1205-1206 and 1208.The circuit of FIG. 12 operates as follows.Assume first that signal RAIN increases from the value of RBIN to a value greater than RBIN. Signal MAX-B from overflow prevention circuit 1204 is low only when adding the constant w to signal RBIN causes the result to exceed the maximum value that can be represented by M bits. When signal MAX-B is high, multiplexer circuit 1207 passes the binary constant "w" to the s terminal of adder circuit 1203. Thus, adder circuit 1203 adds "w" to the value of signal RBIN, and passes the resulting signal "RBIN+w" to the B input terminal of comparator 411, as in the embodiment of FIG. 4. When signal MAX-B is low, multiplexer circuit 1207 passes an all-zero value to the s terminal of adder circuit 1203. 
Thus, adder circuit 1203 adds a "zero" to the value of signal RBIN, and passes the signal "RBIN" to the B input terminal of comparator 411, as in the embodiment of FIG. 4.Secondly, assume that signal RAIN decreases from the value of RBIN to a value less than RBIN. Signal MIN from overflow prevention circuit 1206 is high only when subtracting the constant x from signal RBIN produces a negative result. When signal MIN is low, multiplexer circuit 1208 passes the binary constant "x" to the t terminal of adder circuit 1205. Thus, adder circuit 1205 subtracts "x" from the value of signal RBIN, and passes the resulting signal "RBIN-x" to the B input terminal of comparator 412, as in the embodiment of FIG. 4. When signal MIN is high, multiplexer circuit 1208 passes an all-zero value to the t terminal of adder circuit 1205. Thus, adder circuit 1205 subtracts a "zero" from the value of signal RBIN, and passes the signal "RBIN" to the B input terminal of comparator 412, as in the embodiment of FIG. 4.In other embodiments (not shown), one or the other of the hysteresis circuits is omitted from the circuit of FIG. 12.The figures shown and described herein illustrate a variety of different equal comparator circuits providing binary hysteresis. It will be apparent to one skilled in the art after perusing the present specification and drawings that the present invention can be practiced within these and other architectural variations.FIG. 13 illustrates the steps of an exemplary method of performing an equal comparison between first and second binary values while providing binary hysteresis. These steps can be performed, for example, using the exemplary circuits illustrated in FIGS. 4 and 10-12. However, other circuits can also be used.In step 1301, signal AIN (a first binary value) has an initial value of "init". Signal BIN (a second binary value) has a different initial value. The circuit implementing the method reports that the first and second binary values are not equal (e.g., signal OUT-EQ is low). The initial value can be either greater than or less than the second binary value.Signal AIN then assumes a new value, such that signals AIN and BIN are equal. In step 1302, the circuit reports that the first and second binary values are equal (e.g., signal OUT-EQ goes high).Signal AIN then assumes a first new value ("new1"), where the first new value differs from the second binary value by a value less than or equal to (i.e., not exceeding) a predetermined constant K (e.g., w or x in the exemplary circuits illustrated herein). The first new value can be either greater than or less than the second binary value. Instead of reporting that the two signals are not equal, in step 1303 the circuit continues to report that the first and second binary values are equal (e.g., signal OUT-EQ remains high).Eventually, the first binary value assumes a second new value ("new2") differing from the second binary value by a number exceeding the predetermined constant K. In step 1304, the circuit reports that the first and second binary values are not equal, e.g., signal OUT-EQ goes low again.FIG. 14 illustrates the steps of another exemplary method of performing an equal comparison between first and second binary values while providing binary hysteresis. These steps can be performed, for example, using the exemplary circuits illustrated in FIGS. 4 and 10-12. However, other circuits can also be used.In step 1401, signal AIN (a first binary value) has an initial value of "init". 
Signal BIN (a second binary value) has a different initial value. The circuit implementing the method reports that the first and second binary values are not equal (e.g., signal OUT-EQ is low). The initial value can be either greater than or less than the second binary value.Signal AIN then assumes a new value, such that signals AIN and BIN are equal. In step 1402, the circuit reports that the first and second binary values are equal (e.g., signal OUT-EQ goes high).In a first scenario, signal AIN then assumes a first new value ("new1"). The first new value is greater than the second binary value by a value less than or equal to (i.e., not exceeding) a predetermined constant (e.g., w in the exemplary circuits illustrated herein). Instead of reporting that the two signals are not equal, in step 1403 the circuit continues to report that the first and second binary values are equal. Eventually, the first binary value assumes another new value ("new3") greater than the second binary value by a number exceeding the constant w. In step 1405, the circuit reports that the first and second binary values are not equal.In a second scenario, after step 1402 signal AIN assumes a second new value ("new2"). The second new value is less than the second binary value by a value less than or equal to (i.e., not exceeding) a predetermined constant (e.g., x in the exemplary circuits illustrated herein). Instead of reporting that the two signals are not equal, in step 1404 the circuit continues to report that the first and second binary values are equal. Eventually, the first binary value assumes another new value ("new4") less than the second binary value by a number exceeding the constant x. In step 1406, the circuit reports that the first and second binary values are not equal.Those having skill in the relevant arts of the invention will now perceive various modifications and additions that can be made as a result of the disclosure herein. For example, comparator circuits, comparators, A-equals-B comparators, A-greater-than-B comparators, A-less-than-B comparators, multiplexer circuits, adder circuits, adders, subtractors, overflow prevention circuits, registers, memory elements, flip-flops, inverters, NAND- and NOR-gates, and other components other than those described herein can be used to implement the invention. Active-high signals can be replaced with active-low signals by making straightforward alterations to the circuitry, such as are well known in the art of circuit design. Logical circuits can be replaced by their logical equivalents by appropriately inverting input and output signals, as is also well known.Moreover, some components are shown directly connected to one another while others are shown connected via intermediate components. In each instance the method of interconnection establishes some desired electrical communication between two or more circuit nodes. Such communication can often be accomplished using a number of circuit configurations, as will be understood by those of skill in the art.Accordingly, all such modifications and additions are deemed to be within the scope of the invention, which is to be limited only by the appended claims and their equivalents.
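To summarize the operation of the FIG. 4 circuit described above, the following Python sketch models one clocked evaluation: the overflow-protected RBIN+w and RBIN-x offsets, the three comparisons, NOR gate 413, and multiplexer 414 selected by the state of flip-flop 415. The function name and the 4-bit width are illustrative assumptions; this is a behavioral model, not the hardware implementation.

```python
M = 4                      # data width, as in the 4-bit examples above
MAX_VAL = (1 << M) - 1     # all ones

def hysteresis_comparator_cycle(rain, rbin, out_eq, w=1, x=1):
    """One clocked evaluation of a FIG. 4-style circuit.

    rain, rbin -- registered M-bit input values (RAIN, RBIN)
    out_eq     -- current state of flip-flop 415 (signal OUT-EQ)
    Returns the next value of OUT-EQ.
    """
    # Overflow prevention: if RBIN + w would exceed M bits, compare
    # against RBIN itself (signal MAX high selects RBIN).
    b_upper = rbin if rbin + w > MAX_VAL else rbin + w
    # Underflow prevention: if RBIN - x would go negative, compare
    # against RBIN itself (signal MIN high selects RBIN).
    b_lower = rbin if rbin - x < 0 else rbin - x

    a_eq_b = (rain == rbin)              # comparator 410
    a_gt_b = (rain > b_upper)            # comparator 411
    a_lt_b = (rain < b_lower)            # comparator 412
    in_window = not (a_gt_b or a_lt_b)   # NOR gate 413

    # Multiplexer 414: OUT-EQ low selects A-EQ-B; high selects the NOR output.
    return in_window if out_eq else a_eq_b

# Example: OUT-EQ goes high only on exact equality, then holds while
# RAIN stays within one count of RBIN.
out = False
for rain in (3, 5, 6, 4, 7):
    out = hysteresis_comparator_cycle(rain, rbin=5, out_eq=out)
    print(rain, out)   # 3 False, 5 True, 6 True, 4 True, 7 False
```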
A method for transmitting a data packet is disclosed. First, the size of the data packet is determined. If the size of the data packet is below a predetermined threshold size, then a transmit complete interrupt delay is set to a predetermined time. The packet is transmitted over a network. Finally, upon completing transmission of the data packet, a transmit complete interrupt is sent after waiting the predetermined time.
1. A method for transmitting a data packet in a network comprising:setting a transmit complete interrupt delay (TxCID) value in the data packet to zero; determining that the transmit complete interrupt delay (TxCID) feature is enabled; determining a size of said data packet; if the size of said data packet is below a predetermined threshold size, setting the transmit complete interrupt delay (TxCID) value to a predetermined time; transmitting said data packet onto said network; and upon completing transmission of said data packet, sending a transmit complete interrupt (TxCI) after waiting said predetermined time. 2. The method of claim 1 wherein said network is an Ethernet network.3. The method of claim 1 further comprising determining if a transmit complete interrupt delay feature has been enabled.4. The method of claim 1 wherein said predetermined threshold is 1024 bytes.5. The method of claim 1 further comprising if the size of said data packet is above said predetermined threshold, then upon completing transmission of said data packet, sending a transmit complete interrupt immediately.6. The method of claim 1 wherein said transmit complete interrupt is sent to a host device.7. The method of claim 1 wherein said predetermined time is 184 microseconds.8. A machine readable medium having stored thereon a set of instructions which when executed by a processor causes the processor to effect the following:set a transmit complete interrupt delay (TxCID) value in the data packet to zero; determine that the transmit complete interrupt delay (TxCID) feature is enabled; determine a size of said data packet; if the size of said data packet is below a predetermined threshold size, set a transmit complete interrupt delay (TxCID) value to a predetermined time; transmit said data packet onto said network; and upon completing transmission of said data packet, send a transmit complete interrupt (TxCI) after waiting said predetermined time. 9. The machine readable medium of claim 8 wherein said set of instructions causes the processor to further determine if a transmit complete interrupt delay feature has been enabled.10. The machine readable medium of claim 8 wherein said set of instructions causes the processor to further determine if the size of said data packet is above said predetermined threshold, and if so, then upon completing transmission of said data packet, send a transmit complete interrupt immediately.11. An apparatus adapted for transmitting a data packet to a network comprising:a central processor unit (CPU); a memory; and a network controller operating in the following manner: determining that a transmit complete interrupt delay (TxCID) feature is enabled; determining a size of said data packet; if the size of said data packet is below a predetermined threshold size, setting a transmit complete interrupt delay (TxCID) value to a predetermined time. 12. The apparatus of claim 11 wherein said network is an Ethernet network.13. The apparatus of claim 11 wherein said predetermined threshold is 1024 bytes.14. The apparatus of claim 11 wherein said predetermined time is 184 microseconds.15. The apparatus of claim 11 wherein said network controller further determines if a transmit complete interrupt delay feature has been enabled.16. 
The apparatus of claim 11 wherein said network controller determines if the size of said data packet is above said predetermined threshold, and if so, then upon completing transmission of said data packet, sends a transmit complete interrupt immediately to said CPU.
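The delay recited in claims 7 and 14 and the clock-cycle count used in the description that follows are two expressions of the same quantity, related by the PCI clock frequency. The following Python sketch shows the conversion; the function names are hypothetical, and the specification quotes 5888 cycles as about 184 microseconds on a nominally 33 MHz PCI bus.

```python
def txcid_cycles_to_us(cycles, pci_mhz):
    """Convert a TxCID delay in PCI clock cycles to microseconds."""
    return cycles / pci_mhz

def txcid_us_to_cycles(delay_us, pci_mhz):
    """Convert a desired delay in microseconds to whole PCI clock cycles."""
    return round(delay_us * pci_mhz)

# At exactly 33 MHz, 5888 cycles is ~178.4 us; note that 184 * 32 = 5888,
# so the quoted "about 184 microseconds" reflects an approximate clock.
print(txcid_cycles_to_us(5888, 33.0))   # ~178.4
print(txcid_us_to_cycles(184, 32.0))    # 5888
```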
BACKGROUND OF THE INVENTION1. Field of the InventionThe invention relates generally to Ethernet controllers, and in particular, to a method for using a delay on a transmit complete interrupt only on small sized packets.2. Background InformationPersonal computers (PCs), servers, printers, and other such devices (sometimes referred to as "data terminal equipment" or DTE) are often connected together as a network or LAN. Many LANs operate according to Ethernet standards and protocols. FIG. 1 shows such a LAN 101. With Ethernet technology, all DTEs 103 in the LAN share the bandwidth of a communication medium 105 (e.g., twisted pair or coaxial cables) that connects the DTEs 103 together. All DTEs 103 in the LAN are reached anytime there is a single transmission of data in the form of "Ethernet" frames having source and destination addresses. Only the DTE 103 having the destination address processes the received transmission. Ethernet networks are known as "connectionless" networks because by using source and destination addresses, communication can occur without the need to first establish a connection and without immediate acknowledgement of receipt.PCs and other devices are connected to the LAN by various Ethernet hardware interfaces installed in or coupled to these devices. For example, many PCs are equipped with Network Interface Cards (NIC) 107, such as the commonly used Ethernet NIC card and various Ethernet controller units. The terms Ethernet controller, NIC, and Ethernet NIC are synonymous as used herein. An Ethernet LAN often uses carrier sense multiple access with collision detection (CSMA/CD) methods, where different nodes listen for transmissions in progress in the communication medium before beginning to transmit.During the reception or transmittal of Ethernet packets, the NIC must request resources from the central processor unit (CPU) of its host device. The resources may include, for example, the system bus, input/output ports, or memory. Once the transmit or receive function is complete, the NIC may release some of the allocated resources. When a packet is received, a receive complete interrupt is generated from the NIC to the host device's CPU to inform the CPU that the NIC needs to copy the received packets into the host device's memory.When transmission of an Ethernet packet is complete, the NIC will generate an interrupt to the host device's CPU in order to inform the CPU that the NIC is ready to release the resources that it used to transmit the packet. This interrupt is referred to as a "transmit complete interrupt" or TxCI.In certain cases, the TxCI is not sent by the NIC immediately upon completion of the transmittal of the Ethernet packet. Instead, a delay is imposed before the transmit complete interrupt is forwarded to the CPU. This is referred to as a transmit complete interrupt delay (TxCID). This is done because the CPU overhead associated with the transmit complete interrupt is very high. In the prior art, the TxCID is constant for all Ethernet packets.BRIEF DESCRIPTION OF DRAWINGSNon-limiting and non-exhaustive embodiments of the present invention will be described in the following figures, wherein like reference numerals refer to like parts throughout the various views unless otherwise specified.FIG. 1 shows an Ethernet network showing an Ethernet switch connecting two segments of the Ethernet network.FIG. 2 is a flow diagram illustrating the method of the present invention.FIG. 
3 is a schematic illustration of a host device implementing the method of the present invention.DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTSThe present invention relates to a method for selectively using the TxCID feature available on many Ethernet controllers. For example, the Model 8255x Ethernet controllers manufactured and sold by the assignee of the present invention each include the TxCID feature.As noted above, the TxCI is an interrupt message sent by the Ethernet controller to the CPU of the host device upon completion of the transmittal of an Ethernet packet. The purpose of the TxCI is to allow the CPU of the host device to re-allocate system resources. In the present invention as further described below, the TxCID is selectively applied to only those Ethernet packets that are smaller than a predetermined threshold.Turning to FIG. 2, the method of the present invention is illustrated. First, at a step 201, the default setting for TxCID is zero. In other words, when the NIC is activated, typically by turning on the host device, the delay value on the TxCI is set to zero. Thus, there is no delay in the forwarding of the TxCI signal to the CPU upon completion of transmittal of the Ethernet packet.Next, at step 203, a determination is made as to whether or not the transmit complete interrupt delay feature has been enabled. The term "enabled" means that the selective application of the TxCID feature is available. The "enabling" can be controlled by the user by configuration of the Ethernet controller. Thus, although the TxCID value may initially be zero, before it can be made non-zero (by enabling the TxCID feature), a determination must be made to ensure that the user did not explicitly disable the TxCID feature. Moreover, since the TxCID feature is applied dynamically, at times the value of TxCID will be zero even though the TxCID feature is enabled.If the selective application of the TxCID feature is available, then at step 205, a determination is made as to whether or not the size of the Ethernet packet to be transmitted is less than a predetermined threshold. In the preferred embodiment, the predetermined threshold is 1024 bytes. Although the threshold in the preferred embodiment is 1024 bytes, it can be appreciated by those in the art having the benefit of this disclosure that the predetermined threshold can be optimized depending upon specific network characteristics, such as network speed. As one example, if the network is a gigabit network, the predetermined threshold may be higher. Conversely, if the network is a 10 megabit Ethernet network, the predetermined threshold may be lower. Note that the size of the Ethernet packet can easily be determined by examining predetermined fields in the Ethernet packet itself.If the Ethernet packet is less than a predetermined threshold, then at step 207, the parameter TxCID is set to a predetermined delay value. The predetermined delay value is set forth as a number of PCI clock cycles. In one embodiment, the number of PCI clock cycles is 5888, corresponding to about 184 microseconds for a 33 MHz PCI bus system. Although the number of PCI clock cycles in the preferred embodiment is 5888, it can be appreciated by those in the art having the benefit of this disclosure that the number of PCI clock cycles can be optimized depending upon specific network characteristics. 
Alternatively, the predetermined delay value may be set forth as a specific delay time, typically in units of microseconds.

If, however, the Ethernet packet is larger than the predetermined threshold at step 205, or if the transmit complete interrupt delay feature is disabled, then at step 209, TxCID is set to zero in the Ethernet transmit descriptor. Ethernet controllers use an implementation-dependent structure to describe the Ethernet packet to be transmitted. In the case of the 8255x Ethernet controller, there is a field in the transmit descriptor structure that specifies the value of the TxCID. Thus, if the Ethernet packet is larger than the predetermined threshold, or if the transmit complete interrupt delay feature is disabled, the TxCID value is set to zero in the transmit descriptor structure.

The resulting method of applying the TxCID only to smaller packets has proven to increase throughput and reduce CPU utilization. Immediately processing the TxCI on larger Ethernet packets provides the best performance; on smaller packets, however, delaying the TxCI so that several packets can be processed at one time reduces software overhead and thus CPU utilization.

The method of the present invention can be implemented as shown in FIG. 3, which shows a DTE 103 in greater detail. The DTE 103 is also referred to as the host device. The DTE 103 includes a central processing unit (CPU) 301, a memory 303, and a network interface card (NIC) 107. The NIC 107 is also known as a network controller. In operation, the NIC 107 is the communications interface to the Ethernet communication medium 105. Machine-readable instructions that implement the method of FIG. 2 may be stored in local memory in the NIC 107 or in memory 303. The instructions are executed by the CPU 301 and/or the NIC 107.

The above description of illustrated embodiments of the invention is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. For example, while the invention has been described in the context of an Ethernet LAN and an Ethernet NIC, the invention can be applied to any type of network or NIC.

These modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
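For illustration only, the selection logic of FIG. 2 (steps 203-209) might be sketched in C as follows. The descriptor structure and field names here are hypothetical and do not reflect the actual 8255x register layout; the 1024-byte threshold and the 5888 PCI-clock delay are simply the values given for the preferred embodiment above.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical view of a transmit descriptor; the real 8255x
 * transmit descriptor layout is implementation dependent. */
struct tx_descriptor {
    uint16_t txcid_clocks;   /* TxCID field, in PCI clock cycles */
    /* ... other descriptor fields elided ... */
};

#define SMALL_PACKET_THRESHOLD 1024u /* bytes (preferred embodiment) */
#define TXCID_DELAY_CLOCKS     5888u /* ~184 us on a 33 MHz PCI bus  */

/* Steps 203-209 of FIG. 2: delay the transmit complete interrupt
 * only when the feature is enabled and the packet is small. */
static void set_txcid(struct tx_descriptor *desc,
                      size_t packet_len_bytes,
                      bool txcid_enabled)
{
    if (txcid_enabled && packet_len_bytes < SMALL_PACKET_THRESHOLD)
        desc->txcid_clocks = TXCID_DELAY_CLOCKS;  /* step 207 */
    else
        desc->txcid_clocks = 0;                   /* step 209 */
}
```

In such a sketch, set_txcid() would run in the driver's transmit path for each outgoing packet, just before the descriptor is handed to the controller.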
PROBLEM TO BE SOLVED: To provide systems and methods for data transactions involving micro-sector architectures.

SOLUTION: A control circuit may organize transactions to and from a micro-sector architecture, enabling, for example, direct-addressing transactions as well as batch transactions that span multiple micro-sectors. A data path disposed between the programmable logic circuitry of a column of micro-sectors and a column of row controllers may form a micro-network-on-chip used by a network-on-chip to interface with the programmable logic circuitry.

SELECTED DRAWING: Figure 1
1. An integrated circuit comprising: a plurality of microsectors arranged in a grid of rows and columns, the plurality of microsectors including a first microsector communicably coupled to a first row controller; a first network-on-chip disposed at least partially around the plurality of microsectors; and a controller configured to transmit a command and first data to the first row controller using a second network-on-chip disposed beneath a column of the grid, wherein the first row controller is configured to use the first data to perform an operation in response to the command and, when the operation results in generation of second data, to return the second data to the controller using the second network-on-chip.

2. The integrated circuit of claim 1, wherein the plurality of microsectors includes a second microsector disposed at a position in the grid different from that of the first microsector, and wherein the first row controller is configured to program the first microsector at least partially in parallel with a second row controller programming the second microsector.

3. The integrated circuit of claim 1, wherein the second network-on-chip comprises a data path characterized by the same data width as a routing block of the controller.

4. The integrated circuit of claim 1, further comprising a third row controller disposed beneath the first row controller, wherein the third row controller and the first row controller are coupled to a shared data path, and wherein the first row controller is configured to access the command transmitted over the shared data path before the third row controller is permitted to access the command.

5. The integrated circuit of any one of claims 1 to 4, wherein a streaming data packet includes the command and the first data, the streaming data packet containing the command as part of a header.

6. The integrated circuit of claim 5, wherein the first row controller is configured to determine that the header matches at least a portion of an identifier associated with the first row controller, to cease transmission of the streaming data packet over the shared data path, and to shift the streaming data packet off the shared data path.

7. The integrated circuit of claim 5, wherein the header comprises an instruction for the first row controller.

8. The integrated circuit of any one of claims 1 to 7, wherein the first microsector includes a plurality of logic access blocks coupled to a data register.

9. The integrated circuit of claim 8, wherein the data register includes a 1-bit-wide data path, a first flip-flop, and a second flip-flop, the 1-bit-wide data path being coupled between the first flip-flop and the second flip-flop.

10. A method comprising: receiving an access command from a portion of a programmable logic circuit; determining a target node specified by the access command; determining a target micro-network-on-chip column using the target node; generating a message that causes data associated with the target node to be read or written, the message comprising a first identifier for the target micro-network-on-chip column containing the target node; and outputting the message to a routing fabric configured to pass or direct the message based on the first identifier.

11. The method of claim 10, comprising determining a parameter from the access command and determining the target node from the parameter.

12. The method of claim 10 or 11, comprising generating the message to include a second identifier for the target node, wherein each node between a first node of the target micro-network-on-chip column and the target node determines whether the second identifier of the message matches its own identifier.

13. The method of any one of claims 10 to 12, comprising receiving the message from the routing fabric after the target node inserts requested data into the message, the requested data having previously been stored at the target node.

14. The method of claim 13, comprising receiving a toggled acknowledgment signal in response to the target node inserting the requested data into the message.

15. A system comprising: a first control circuit disposed between portions of a programmable logic circuit that includes configuration memory; and a second control circuit disposed outside the programmable logic circuit, wherein the second control circuit is configured to: receive an access command from a portion of the programmable logic circuit; determine a target node specified by the access command; determine a target micro-network-on-chip column using the target node; generate a message that causes data associated with the target node to be read or written, the message including an identifier for the target micro-network-on-chip column containing the target node; and output the message to a routing fabric configured to pass or direct the message based on the identifier in order to route the message to the first control circuit.

16. The system of claim 15, wherein the first control circuit is configured to read the data from at least some of the configuration memory of a microsector of the target node based at least in part on shifting target data of the message at least once through each 1-bit data register of the microsector.

17. The system of claim 15, wherein the first control circuit is configured to write the data to at least some of the configuration memory of a microsector of the target node based at least in part on shifting target data of the message at most once through each 1-bit data register of the microsector.

18. The system of any one of claims 15 to 17, wherein the target node includes a scan register used to perform a verification operation.

19. The system of any one of claims 15 to 18, wherein the message comprises a header configured to indicate a command to be executed by the target node.

20. The system of claim 19, wherein a row controller of the target node is configured to receive the message, verify that the header contains an identifier matching an identifier of the row controller, and then generate a plurality of control signals to execute the command.

21. A system comprising: means for receiving an access command from a portion of a programmable logic circuit; means for determining a target node specified by the access command; means for determining a target micro-network-on-chip column using the target node; means for generating a message that causes data associated with the target node to be read or written, the message comprising a first identifier for the target micro-network-on-chip column containing the target node; and means for outputting the message to a routing fabric configured to pass or direct the message based on the first identifier.

22. The system of claim 21, comprising means for determining a parameter from the access command and means for determining the target node from the parameter.

23. The system of claim 21 or 22, comprising means for generating the message to include a second identifier for the target node, wherein each node between a first node of the target micro-network-on-chip column and the target node comprises means for determining whether the second identifier of the message matches its own identifier.

24. The system of any one of claims 21 to 23, comprising means for receiving the message from the routing fabric after the target node inserts requested data into the message, the requested data having previously been stored at the target node.

25. The system of claim 24, comprising means for receiving a toggled acknowledgment signal in response to the target node inserting the requested data into the message.
The present disclosure relates to integrated circuit devices that use programmable fabric arranged in microsectors.

This section is intended to introduce the reader to various aspects of art that may be related to aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to help provide the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, these statements are to be read in this light, and not as admissions of prior art.

Advances in microelectronics have enabled continued increases in transistor density and bandwidth for a variety of integrated circuit devices and communication technologies. Indeed, some advanced integrated circuits, such as field programmable gate arrays (FPGAs) or other programmable logic devices, may contain large numbers of transistors that allow an increasingly wide variety of programmable circuit designs to be programmed into a programmable fabric to implement many different functions. In some cases, data generated by these functions may be packetized and routed to or from other devices to perform an operation or to communicate the results of an operation. However, because circuit designs for programmable logic devices may be customized by users for particular applications, the relatively large, sector-based registers used in the logic fabric of these devices may over-allocate the logic fabric's area to a circuit design.

Advantages of the present disclosure may become apparent upon reading the following detailed description and upon reference to the drawings, in which:

FIG. 1 is a block diagram of a system used to program an integrated circuit, in accordance with an embodiment;
FIG. 2 is a block diagram of the integrated circuit of FIG. 1, in accordance with an embodiment;
FIG. 3 is a block diagram of an application system that includes the integrated circuit of FIG. 1 and a memory, in accordance with an embodiment;
FIG. 4A is a block diagram of programmable logic of the integrated circuit of FIG. 1 implemented using sector allocation, in accordance with an embodiment;
FIG. 4B is a block diagram of programmable logic of the integrated circuit of FIG. 1 implemented using microsector allocation, in accordance with an embodiment;
FIG. 5 is a block diagram of the programmable logic of FIG. 4B, in accordance with an embodiment;
FIG. 6 is a block diagram of a microsector of the programmable logic of FIG. 5, in accordance with an embodiment;
FIG. 7 is a block diagram of a portion of the programmable logic and at least some control circuitry for that portion, in accordance with an embodiment;
FIG. 8 is a block diagram of a micro-network-on-chip data path coupled to the row controllers of FIG. 7, in accordance with an embodiment;
FIG. 9 is a diagram of exemplary data flows involving column managers (CMs) of FIG. 8, in accordance with an embodiment;
FIG. 10 is a block diagram of a column manager (CM) of FIG. 8, in accordance with an embodiment;
FIG. 11 is a diagram of a logical address space associated with the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 12 is a one-column data packing diagram that may be used by the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 13 is a four-column data packing diagram that may be used by the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 14 is a memory space indexing diagram referenced from a register transfer level (RTL) design file by the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 15 is a diagram of a first exemplary memory operation performed by the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 16 is a diagram of a second exemplary memory operation performed by the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 17 is a diagram of a third exemplary memory operation performed by the column manager (CM) of FIG. 10, in accordance with an embodiment;
FIG. 18 is a diagram of a fourth exemplary memory operation performed by the column manager (CM) of FIG. 10, in accordance with an embodiment; and
FIG. 19 is a diagram of a fifth exemplary memory operation performed by the column manager (CM) of FIG. 10, in accordance with an embodiment.

One or more specific embodiments of the present disclosure are described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that, in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure. The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical.

When introducing elements of various embodiments of the present disclosure, the articles "a," "an," and "the" are intended to mean that there are one or more of the elements. The terms "comprising," "including," and "having" are intended to be inclusive and to mean that there may be additional elements other than the listed elements. Additionally, references to "one embodiment" or "an embodiment" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. Furthermore, the phrase A "based on" B is intended to mean that A is at least partially based on B. Moreover, unless otherwise noted, the term "or" is intended to be inclusive (e.g., logical OR) and not exclusive (e.g., logical exclusive-OR (XOR)). In other words, the phrase A "or" B is intended to mean A, B, or both A and B.

Programmable logic devices are increasingly prevalent in the marketplace, permitting customers to implement circuit designs in logic fabric (e.g., programmable logic). Because of the highly customizable nature of programmable logic devices, the logic fabric must be configured with a circuit design before the circuitry corresponding to that design can be used. When implementing a design in the logic fabric, sectors may be used to allocate portions of the fabric to the circuit. However, due at least in part to the physical placement of the programmable logic device's interconnects and data registers, sectors can yield relatively imprecise and/or large allocations of the logic fabric's area.

Systems and processes for implementing a circuit design in the logic fabric may be improved by rearranging some of the programmable logic device's interconnects and/or by reducing the data width of its data registers.
For example, making some of these changes may reduce sector size, forming microsectors and permitting relatively fine-grained allocation of the logic fabric to a circuit design. This in turn allows resources to be allocated to each circuit design more efficiently, so that implementing the design consumes fewer resources.

Because circuit designs for programmable logic devices may be customized by users for particular applications, the ability to partition and control fine-grained and/or parallel device configuration (made possible by rearranging interconnects and/or reducing the data width of data registers) yields many advantages specific to programmable logic devices. Some of these advantages lie in how the device is constructed, and some lie in the usage models the device enables (e.g., newly enabled or permissible use cases). On the construction side, fine-grained configurable regions allow a device to be built with an appropriate or tuned amount of resources for a given implementation. Among the new usage models are faster configuration, faster partial reconfiguration, and faster single-event upset (SEU) detection over smaller regions of the device, when compared to other systems and methods for programming programmable logic devices.

These changes to the system implementation also reduce the configuration time used when performing partial reconfiguration, which may improve (e.g., reduce) overall configuration time and may enable faster SEU detection. For example, the structural changes proposed herein may allow a partial reconfiguration to complete in the same amount of time as a normal configuration.

The microsector infrastructure may use fewer columns in a single fabric row (a row region) (e.g., 8 columns as opposed to 50 columns). The row region may receive data from smaller data registers (e.g., 1-bit data registers as opposed to 32-bit data registers). A microsector may represent a relatively small percentage of the programmable logic device's area (e.g., less than 1% of the total fabric area), making it practical to treat the microsector as the unit of partial reconfiguration. Partial reconfiguration of a microsector may then be a write-only operation that avoids performing a read/modify/write each time a partial reconfiguration occurs, saving time and resources. In some cases, partial reconfiguration time may be reduced by a factor of 5 or 6, a relatively large performance improvement. In addition, reducing the number of columns may reduce the time spent waiting for data transmissions (to or from the row region) to complete, thereby improving operation of the programmable logic device.

The microsector architecture may be combined with network-on-chip (NOC) data transmission methods. Standard NOC implementations, however, can in some cases apply inefficiently to field programmable gate arrays (FPGAs) or other programmable logic devices. For example, such implementations account neither for the repetitive nature of FPGA programmable logic nor for the implications of the data-density and aspect-ratio mismatches involved in connecting a standard NOC to FPGA programmable logic.
Simply pairing programmable logic with a standard NOC can therefore limit the NOC's usefulness, reduce the available transaction bandwidth, and increase latency.

The present disclosure describes an interface that allows communication between programmable logic having a microsector architecture and a NOC while avoiding adverse interactions between the two. In particular, the disclosure describes data transactions involving a microsector architecture that may use one or more micro-networks-on-chip (microNOCs) disposed within, and/or integrated into, the microsector architecture to form a column-oriented network structure with an extensible data handling process. The column-oriented network structure is a repeatable structure, used to interface programmable logic with one or more NOCs, that fits within a programmable logic memory column (e.g., an FPGA fabric memory column). The extensible column-oriented network structure can permit high-bandwidth and relatively complex data transactions, as well as transactions performed using the NOC, without imposing a large footprint or performance penalty on the device. These advantages may be inherent to the architecture, independent of any further performance optimization performed by a compiler or during the programmable logic design process.

Indeed, structures that provide one or more microNOCs are described herein, along with methods that may be used to address a particular microNOC or a particular device of a microNOC (i.e., a particular microsector). These systems and methods may provide a control mechanism used to request the loading or unloading of particular memory associated with a particular microNOC (e.g., particular memory of a particular row controller) into or out of off-chip memory. Moreover, these systems and methods may increase ease of use for the customers and control systems that carry out the transactions, while dramatically reducing the complexity of routing high-bandwidth data buses between memories and into programmable logic (e.g., deeply placed configuration memory). Reducing system complexity may in turn reduce power consumption and make resource consumption more efficient for an integrated circuit performing these memory transactions. Indeed, by using dedicated bus routing to portions of the microNOC, as opposed to soft-logic routing, these systems and methods may reduce the power consumed in moving data from off-chip memory interfaces into the programmable logic. Note that soft-logic routing uses a relatively large number of flip-flops and/or latches to exchange data, which can increase data transmission latency and can depend on a distributed clock signal network to propagate clocks with aligned timing. Reducing the amount of soft-logic-based routing used to transmit data can thus make data transmission faster, with the added benefits of reducing reliance on precise clock alignment and freeing soft logic for other uses.

The microNOC may include a shared data path (e.g., a shared vertical data path) and a column of row controllers connected to respective microsectors. The microNOC data path and row controllers may include hard logic; the row controller's hard logic interfaces with the hard and soft logic of the microsector. A row controller may communicate with a controller located outside the programmable logic via messages transmitted over the shared data path.
These messages may include transaction-related data, headers, command instructions, slots for data to be stored, and the like, and are used to communicate between a row controller and other devices, such as devices outside the microsector, other row controllers, or portions of programmable logic programmed to perform logic functions.

Data may be transmitted to one or more microsectors using a data streaming protocol with bidirectional movement. One or more row controllers may therefore inspect a packet's header before accessing its payload to determine which row controller the packet is to be delivered to. When a row controller detects that a packet carries a header matching its own identifier, that row controller may receive the packet and process any data and/or command it contains. This structure can help improve transaction speed, since multiple simultaneous traffic flows, in one or both directions of data movement, can occur even within the same column of microsectors. For example, the microNOC includes a shared data path that uses a data streaming process to deliver different commands to different row controllers concurrently by separating the deliveries into different packets with different headers.

MicroNOCs, column managers, and/or row controllers may be individually addressed using the logical addresses described herein, which may permit direct access to a location in programmable memory by addressing its corresponding row controller directly. Logical address spaces are discussed herein. Addressing a packet to a particular row controller using the logical address space, combined with the routing circuitry between the column managers and the routes to the microNOCs, may allow any peripheral device that communicates with the NOC and/or with any column manager to communicate with a particular row controller.

Data transactions may occur between a row controller and any suitable data source and/or endpoint using direct addressing. This permits a logic design implemented in one portion of the programmable logic to generate instructions that trigger reads or writes of data in other portions of the programmable logic. Each column manager may help execute several types of transactions, each of which may use the direct addressing process. These transactions include direct-addressed reads, direct-addressed writes, first-in-first-out (FIFO) reads (e.g., streaming reads), FIFO writes (e.g., streaming writes), loads (e.g., multiple writes, batch writes), and unloads (e.g., multiple reads, batch reads).

Direct-addressed transactions involving reads or writes may use addresses from a global address space that reference a particular row controller (or group of row controllers) to access the data stored in a microsector. These transactions may read or write any suitable number of words from any position within any valid row controller (e.g., a row controller with an assigned address). Transactions involving FIFO reads or writes may continuously stream data to or from one or more row controllers and another device (e.g., on-chip memory, off-chip memory, one or more processors). In addition, transactions involving loads or unloads may be used to perform block moves between one or more row controllers and another device (e.g., on-chip memory, off-chip memory, one or more processors).
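A behavioral C sketch of the header inspection described above follows. The packet layout, field widths, and names are assumptions for illustration only, not the actual microNOC packet format: the point is simply that the command rides in the header and is examined before the payload.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative streaming packet: the command travels as part of the
 * header, ahead of the payload slots (the actual microNOC packet
 * format is not specified here). */
enum rc_command { RC_READ, RC_WRITE, RC_LOAD, RC_UNLOAD };

struct unoc_packet {
    uint16_t header_id;     /* identifier of the target row controller */
    uint8_t  command;       /* enum rc_command, carried in the header  */
    uint32_t payload[8];    /* data slots                              */
};

struct row_controller {
    uint16_t my_id;         /* this row controller's identifier */
};

/* Inspect the header before touching the payload: accept the packet
 * only when the header matches this row controller's identifier;
 * otherwise let it continue shifting down the shared data path. */
static bool rc_try_accept(const struct row_controller *rc,
                          struct unoc_packet *pkt)
{
    if (pkt->header_id != rc->my_id)
        return false;   /* not ours: pass through on the shared path */

    /* Ours: stop propagation, shift the packet off the shared data
     * path, and act on the command carried in the header. */
    switch (pkt->command) {
    case RC_READ:   /* insert requested data into the payload slots */ break;
    case RC_WRITE:  /* write the payload into configuration memory  */ break;
    case RC_LOAD:   /* batch write                                   */ break;
    case RC_UNLOAD: /* batch read                                    */ break;
    }
    return true;
}
```

Because rejection is decided from the header alone, packets destined for different row controllers can flow through the same shared data path back to back, which is what permits the simultaneous traffic flows described above.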
Direct addressing and data streaming methods may also allow relatively large amounts of data to be transmitted between the programmable logic and a data source (or data endpoint). For example, by simplifying these otherwise complex transactions, a column manager that directly addresses one or more row controllers and/or one or more microNOCs for a transaction can improve the processing speed associated with moving data for machine learning, signal processing, graphics processing unit (GPU) computations, and/or other data-intensive applications.

Among the other advantages of the addressing methods described herein and of using microNOCs is the ability to store data in an order different from the logical read and/or write order. Data may be read from the column manager's registers in logical order, yet be read from the programmable logic in an order different from that logical order. The ability to read and write data to and from different row controllers out of logical order represents a marked improvement in memory access, and in programmable logic access methods in particular, over processes that read and write data to and from programmable logic strictly according to logical order. Freed from the logical ordering constraint, the column manager may store data in whatever order is convenient for the operation. The column manager may therefore have the ability to pack data, following a data striping process within a single microNOC column or across multiple microNOC columns, in an order deemed more convenient by the column manager and/or the overall system (e.g., lower cost, lower overall memory use, smaller footprint).

With the foregoing in mind, FIG. 1 is a block diagram of a system 10 in which arithmetic operations may be implemented. A designer may desire to implement functionality, such as the arithmetic operations of this disclosure, on an integrated circuit 12 (e.g., a programmable logic device, such as a field programmable gate array (FPGA) or an application-specific integrated circuit (ASIC)). In some cases, the designer may specify a high-level program to be implemented, such as an OPENCL program, which may enable the designer to provide programming instructions more efficiently and easily to configure a set of programmable logic cells of the integrated circuit 12 without specific knowledge of low-level hardware description languages (e.g., Verilog or VHDL (very high speed integrated circuit hardware description language)). For example, because OPENCL is quite similar to other high-level programming languages, such as C++, designers of programmable logic familiar with such languages may have a reduced learning curve relative to designers required to learn an unfamiliar low-level hardware description language to implement new functionality in the integrated circuit 12.

Designers may implement their high-level designs using design software 14, such as a version of INTEL QUARTUS by INTEL CORPORATION. The design software 14 may use a compiler 16 to convert the high-level program into a lower-level description. The compiler 16 may provide machine-readable instructions representative of the high-level program to a host 18 and the integrated circuit 12. The host 18 may receive a host program 22 that may be implemented by the kernel program 20.
To implement the host program 22, the host 18 may communicate instructions from the host program 22 to the integrated circuit 12 via a communication link 24, which may be, for example, direct memory access (DMA) communication or peripheral component interconnect express (PCIe) communication. In some embodiments, the kernel program 20 and the host 18 may enable configuration of a logic block 26 on the integrated circuit 12. The logic block 26 may include circuitry and/or other logic elements and may be configured to perform arithmetic operations, such as addition and multiplication.

The designer may use the design software 14 to generate and/or specify a low-level program, such as the low-level hardware description languages described above. Further, in some embodiments, the system 10 may be implemented without a separate host program 22, and in some embodiments the techniques described herein may be implemented in circuitry as a non-programmable circuit design. The embodiments described herein are therefore exemplary and intended to be non-limiting.

Turning now to a more detailed discussion of the integrated circuit 12, FIG. 2 is a block diagram of an example of the integrated circuit 12 as a programmable logic device, such as a field programmable gate array (FPGA). It should be understood, however, that the integrated circuit 12 may be any other suitable type of programmable logic device (e.g., an ASIC and/or an application-specific standard product). The integrated circuit 12 may have input/output circuitry 42 for driving signals off the device (e.g., the integrated circuit 12) and for receiving signals from other devices via input/output pins 44. Interconnect resources 46, such as global and local vertical and horizontal conductive lines and buses, and/or configuration resources (e.g., hardwired couplings, logical couplings not implemented by user logic), may be used to route signals on the integrated circuit 12. Additionally, the interconnect resources 46 may include fixed interconnects (conductive lines) and programmable interconnects (i.e., programmable connections between respective fixed interconnects). Programmable logic 48 may include combinational and sequential logic circuitry. For example, the programmable logic 48 may include look-up tables, registers, and multiplexers. In various embodiments, the programmable logic 48 may be configured to perform custom logic functions. The programmable interconnects associated with the interconnect resources may be considered part of the programmable logic 48.

A programmable logic device, such as the integrated circuit 12, may include programmable elements 50 within the programmable logic 48. For example, as discussed above, a designer (e.g., a customer) may (re)program (e.g., (re)configure) the programmable logic 48 to perform one or more desired functions. By way of example, some programmable logic devices may be programmed or reprogrammed by configuring their programmable elements 50 using mask programming arrangements, which are performed during semiconductor manufacturing. Other programmable logic devices are configured after semiconductor fabrication operations have been completed, such as by using electrical programming or laser programming to program their programmable elements 50.
In general, the programmable elements 50 may be based on any suitable programmable technology, such as fuses, antifuses, electrically programmable read-only memory cells, random-access memory cells, mask-programmed elements, and so forth. Many programmable logic devices are electrically programmed. With electrical programming arrangements, the programmable elements 50 may be formed from one or more memory cells. For example, during programming, configuration data is loaded into the memory cells using the input/output pins 44 and the input/output circuitry 42. In one embodiment, the memory cells may be implemented as random-access memory (RAM) cells. The use of memory cells based on RAM technology described herein is intended to be only one example. Because these RAM cells are loaded with configuration data during programming, they are sometimes referred to as configuration RAM (CRAM) cells. Each of these memory cells may provide a corresponding static control output signal that controls the state of an associated logic component in the programmable logic 48. For instance, in some embodiments, the output signals may be applied to the gates of metal-oxide-semiconductor transistors within the programmable logic 48.

With the description of FIGS. 1 and 2 in mind, a user (e.g., the designer) may use the design software 14 to implement the logic block 26 on the programmable logic 48 of the integrated circuit 12. In particular, the designer may specify in a high-level program that mathematical operations, such as addition and multiplication, be performed. The compiler 16 may convert the high-level program into a lower-level description that is used to program the programmable logic 48 to perform those operations.

Once programmed, the integrated circuit 12 may process a dataset 60, as shown in FIG. 3. FIG. 3 is a block diagram of an application system 62 that includes the integrated circuit 12 and memory 64. The application system 62 may represent a device that uses the integrated circuit 12 to perform operations, act on computation results from the integrated circuit 12, or the like. The integrated circuit 12 may receive the dataset 60 directly, and the dataset 60 may be stored in the memory 64 before, while, or at the same time as it is transmitted to the integrated circuit 12.

With the emergence of fifth-generation (5G) and higher communication technologies and/or the widespread use of neural networks to perform computations (e.g., machine learning (ML) and/or artificial intelligence computations), bandwidth and processing expectations are increasing, and the integrated circuit 12 can be expected to handle corresponding increases in the size of the dataset 60 over time. The integrated circuit 12 may also be expected to perform digital signal processing and ML operations on signals transmitted using 5G or higher technologies (e.g., signals with higher throughput and/or higher data transmission bandwidth). These desired applications may be deployed dynamically at run time, such as through a partial reconfiguration that configures a portion of the integrated circuit 12 without disturbing the configuration of other portions during run-time operation. For at least these reasons, it may be desirable to improve configuration methods to keep up with increasing technical complexity and timing specifications.
To that end, programmable logic 66, which includes at least the programmable logic 48, the input/output pins 44, and the interconnect resources 46, may use 1-bit data registers to (re)configure the programmable logic 48 using microsectors. Using microsectors to program circuit functions into the programmable logic 48 can provide the advantages of write-only reconfiguration, SEU detection over relatively small regions (e.g., 1-bit region detection), relatively fine granularity of the reconfigured region, and relatively wide parallel configuration operations (e.g., parallel configuration over 1-bit-wide data channels). As used herein, the term microsector refers to a sector of programmable logic that has relatively small data registers. In one example, a microsector has a 1-bit data register. In some embodiments, a microsector may have a larger data register that is nonetheless smaller than what is typically present in a sector (e.g., fewer than 32 bits, fewer than 16 bits, fewer than 8 bits).

Elaborating on the finer granularity of the reconfigured region, FIG. 4A is a block diagram of an example of the programmable logic 66. The programmable logic 66 may include a controller 76 for programming the programmable logic 66. Once programmed, circuitry of the programmable logic 66 (e.g., represented by portion 78) may be used to perform digital signal processing, machine learning processing, computations, logic functions, and so on. However, the programmable logic 66 may be divided into relatively large logic sectors, so that portion 80, rather than the area actually corresponding to the circuit (portion 78), is allocated to the circuit. This over-allocation of resources can be wasteful, since the size difference between portion 80 and portion 78 represents under-utilized programmable logic 66. Keep in mind that it may be desirable to meet certain speed metrics when partially reconfiguring the programmable logic 66 (e.g., partial reconfiguration may be desired to complete within a relatively short time). In such cases, over-allocation of resources can occur because the slower configuration speeds that could improve resource allocation may be undesirable.

Indeed, when a device is implemented in sector-based programmable logic, the device is more likely to be allocated more (or less) logic (e.g., logic array blocks (LABs), digital signal processing (DSP) blocks) than is desired to form the device. This over-allocation can occur because a rectangular number of sectors is used to implement the device. By rearranging the interconnects to form microsectors and/or reducing the data width of the data registers, a relatively exact amount of logic (e.g., a more precise number of LABs or DSP blocks) may be allocated to the implementation of the device.

As FIG. 4B illustrates, less of the programmable logic 66 may be wasted when the circuit represented by portion 78 is implemented in programmable logic 66 that uses microsector-based logic division. FIG. 4B is a block diagram of the programmable logic 66 implemented using microsectors. Indeed, microsectors may allow the circuit corresponding to portion 78 to be implemented in the region represented by portion 82. Although not drawn to scale, portion 82, which implements the circuit corresponding to portion 78, uses the programmable logic 66 efficiently, whereas portion 80 would use the programmable logic 66 inefficiently.
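Before turning to FIG. 5, a behavioral C sketch of the bit-serial, write-only (re)configuration described above may help fix ideas. The chain length, names, and the recirculating readback here are modeling assumptions, not the device's actual implementation; the sketch only illustrates why a write passes each bit through a 1-bit register chain at most once (no read/modify/write), while a readback shifts every stored bit out at least once.

```c
#include <stdint.h>
#include <string.h>

#define CHAIN_BITS 64   /* illustrative chain length, not the real depth */

/* Behavioral model of a 1-bit-wide micro-data register chain: one
 * flip-flop per stage; bits enter at stage 0 and move one stage per
 * clock toward the far end of the chain. */
struct udr_chain {
    uint8_t ff[CHAIN_BITS];   /* each entry models a single flip-flop */
};

/* One clock tick: shift the whole chain by one stage and return the
 * bit that falls out of the far end. */
static uint8_t udr_shift(struct udr_chain *c, uint8_t bit_in)
{
    uint8_t bit_out = c->ff[CHAIN_BITS - 1];
    memmove(&c->ff[1], &c->ff[0], CHAIN_BITS - 1);
    c->ff[0] = bit_in & 1u;
    return bit_out;
}

/* Write-only (re)configuration: each new configuration bit passes
 * through the chain at most once; no read/modify/write is needed. */
static void udr_write(struct udr_chain *c, const uint8_t *bits, int n)
{
    for (int i = 0; i < n; i++)
        (void)udr_shift(c, bits[i]);
}

/* Readback (e.g., for verification or SEU detection): shift every
 * stored bit out at least once, recirculating each outgoing bit back
 * into the chain so the read is non-destructive in this model. */
static void udr_read(struct udr_chain *c, uint8_t *bits, int n)
{
    for (int i = 0; i < n; i++)
        bits[i] = udr_shift(c, c->ff[CHAIN_BITS - 1]);
}
```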
More specifically, FIG. 5 is a block diagram of the programmable logic 66. Microsectors 92 of the programmable logic 66 may be coupled to one another using the interconnect resources 46. Indeed, the interconnect resources 46, which may be used to move data from a first position to a second position within the programmable logic 66 and/or the integrated circuit 12, may include data shift registers, registers, logic gates, direct couplings, reprogrammable circuitry, and the like, as appropriate. One or more of the microsectors 92 may be programmed by the controller 76 with information for performing circuit functions, such as the circuit corresponding to portion 78. Because the controller 76 may transmit configuration data (or any suitable data) at microsector granularity, the granularity of the region used to program a function into the programmable logic 66 may be reduced. As this granularity becomes finer (e.g., smaller), programming of the programmable logic 66 can improve, since a circuit design can be laid out more efficiently in the programmable logic 66. Note that the programmable logic 66 and/or the integrated circuit 12 may be any suitable type of software or hardware, or a combination of the two, and may include the programmable logic 48, the programmable elements 50, and so on, to allow one or more portions to be reconfigurable. The controller 76 may interface with the microsectors 92 using the interconnect resources 46, which may include an interface bus such as an advanced interface bus (AIB) and/or an embedded multi-die interconnect bridge (EMIB). As noted above, the programmable logic 66 may be reprogrammable circuitry capable of performing a vast number of tasks.

FIG. 6 is a block diagram of two example microsectors 92 (e.g., microsector 92A, microsector 92B). Although this application describes a particular architecture for the microsector 92, it should be understood that any suitable architecture may be used. Each microsector 92 may include one or more logic access blocks (LABs) 104 (e.g., 8 LABs) capable of interfacing with the interconnect resources 46, shown here communicating with the microsector 92 via an address register (AR) 106. Indeed, the interconnect resources 46 may include one or more ARs 106 for transmitting signals to and/or receiving signals from the microsectors 92, and may additionally or alternatively include other control circuitry and logic circuitry (e.g., AND gates, OR gates, NOR gates, exclusive-OR (XOR) gates, flip-flops, set-reset (SR) latches) and the like. It should also be understood that the same or similar circuitry may be included in each microsector 92.

The LABs 104 may receive data from the AR 106 through an address line buffer (ALB) 108. The ALB 108 may include digital signal processing (DSP) circuitry and/or control circuitry that converts data from a format suitable for transmission to the microsector 92A into a format suitable for use by the LAB 104 circuitry. In some cases, the LABs 104 may couple to scan registers used to perform operational verification and/or data integrity operations. The scan registers may form a dedicated data transmission path, such as a path used independently of other data transmission paths through the microsector 92. Each LAB 104 may include several arithmetic logic element (ALE) circuits (e.g., 10 ALEs 110).
The micro-data registers (μDRs) 112 may be disposed within at least a portion of the ALEs 110, such as in other layers of silicon or of other materials used to physically form the integrated circuit. The μDRs 112 communicatively couple each LAB 104 to the ALB 108. Each ALE 110 of a LAB 104 may share and/or couple to a LAB-wide control block (LCB) 114. The LABs 104 are separated from one another by routing fabric 116 (e.g., configuration random access memory (CRAM), configuration memory). In this example, the μDR 112 passes through the LCB 114 at the center of the circuit row corresponding to microsector 92A.

To further elaborate on the interconnection between the AR 106 and the microsectors 92, FIG. 7 is a block diagram of row regions 124 and row controllers 126, showing the communicative coupling between the row controllers 126, implemented in the AR 106, and the microsectors 92 of FIG. 6. Note that a microsector 92 may sometimes be referred to in terms of its row region 124, since designs, such as manufacturer designs or user designs, may be loaded into the microsector 92 for implementation. The AR 106 may include any suitable control system circuitry and/or logic circuitry. Indeed, the AR 106 may be an address register of the kind found in INTEL STRATIX 10 or INTEL AGILEX devices by INTEL CORPORATION. The illustrated AR 106 is disposed between at least two microsectors 92. To accommodate physical boundaries of the programmable logic 66 or the integrated circuit 12, or to avoid supporting both left and right data movement patterns, an AR 106 may in some cases be deployed with only one column region 128 of microsectors 92 (e.g., directed only to the right of the AR 106 or only to the left of the AR 106). The various row regions 124 and column regions 128 are arranged as a grid on the same physical substrate.

Each row controller 126 may control a row region 124 of microsectors, and thus may be associated with, or may be, the ALB 108 described above. In a microsector implementation, the ARs 106 may be shared repeatedly between column regions 128 of microsectors 92 (e.g., column region 128A, column region 128B, column region 128C, column region 128D). For example, column region 128A shares AR 106A with column region 128B and is disposed adjacent to column region 128C. The microsectors 92 of column region 128C may share AR 106B with the microsectors 92 of column region 128D. The microsectors 92 of column region 128C may therefore be controlled using signals generated and/or transmitted by the row controllers 126 of AR 106B, independently of at least some signals transmitted via AR 106A. Microsector 92C may be controlled differently than microsector 92B, even though both are part of the same row region 124, because the two microsectors 92 are associated with different column regions 128. Further, microsector 92C may be controlled differently than microsector 92D, since the two receive control signals from separate row controllers 126 (e.g., row controller 126A, row controller 126B) that serve the same column region (e.g., column region 128C). The microsectors 92 may be formed to divide a row region 124 into smaller portions and thus to provide finer granularity.

A row controller 126 may use any suitable communication protocol to transmit signals to and/or receive signals from its respective microsectors 92.
For example, the row controller 126 may use an Advanced eXtensible Interface 4 (AXI4) streaming protocol to transmit one or more streaming data packets, receiving into an internal write register (e.g., inside each row controller 126) addresses and data corresponding to those addresses within the same symbol (e.g., the same packet transmission).

Each AR 106 may include a local sector manager (LSM) 130 (e.g., LSM 130A, LSM 130B) at the bottom or top of the AR 106 column region to interface with its corresponding CM 132. For example, LSM 130A is shown in the column region of AR 106A and atop CM 132A, to which it is communicably coupled. The LSM 130A is also located outside the programmable logic 66. It should be understood that one LSM 130 may be included per AR 106, but that an LSM 130 may also be shared by two or more ARs 106, such that one LSM 130 controls two or more ARs 106. In some cases, the LSM 130 may be integrated with an AR column manager (CM) 132 (e.g., CM 132A, CM 132B) to form a respective sector column manager (SCM). Although shown as separate blocks, the CM 132 may be included in the same column manager. An exemplary layout of a CM 132 with its associated AR 106 is described further below.

Each CM 132 may be responsible for managing transactions between the devices of its corresponding AR 106 and the interconnect resources 46. For example, CM 132A may work with LSM 130A to transmit commands to microsector 92A and microsector 92B. The CM 132 and LSM 130 may participate in routing commands, such as configuration instructions, from other portions of the integrated circuit 12 or from other microsectors 92 to a particular microsector 92. When the interconnect resources 46 involve the use of a network-on-chip, the CM 132 may manage transactions between the network-on-chip and its corresponding AR 106. This arrangement may permit relatively high-bandwidth data movement between master and slave bridges via the interconnect resources 46 because, for example, the CM 132 may help coordinate transmissions among multiple microsectors and/or multiple ARs 106, whereby transmissions can be parallelized or at least partially linked in time and/or order.

A controller, such as the controller 76, may transmit packets containing data and commands for performing configurations and configuration tests to the LSMs 130 and/or the CMs 132. To carry out a configuration, one or more LSMs 130 may generate their own commands interpretable by the respective row controllers 126, and each command may be used to control the configuration of one or more microsectors 92. The data and commands transmitted from the controller 76 to an LSM 130 may correspond to the portion of a circuit design to be implemented in the subset of microsectors 92 that the LSM 130 manages (e.g., is communicably coupled to). Once a configuration is implemented (or at least partially implemented) in the programmable logic 66, one or more LSMs 130 may test the implemented configuration to verify that it operates as expected. The testing may be performed using some of the data and commands the LSM 130 received from the controller 76. For example, to save time while further portions of the programmable logic 66 are being programmed (e.g., configured), the LSM 130 may test each already-configured portion of the circuit design, corresponding to an at least partially overlapping intersection of a column region 128 and a row region 124, while one or more other row regions 124, column regions 128, or microsectors 92 continue to be programmed.
One or more row controllers 126 may program their respective microsectors 92 in parallel and/or over at least partially overlapping periods. Once each portion of the programmable logic 66 is programmed, the LSMs 130 may work together to perform system-wide testing of the one or more circuit designs implemented in the one or more microsectors 92. In addition to verifying overall circuit operation, the testing may include aggregated operations that verify the operation of portions of the circuit. Each LSM 130 may act as a management engine for its local set of microsectors 92.

Indeed, each row controller 126 may receive a command from its corresponding LSM 130 and decode the command to generate control signals. The control signals may control the operation of the corresponding row region 124 of microsectors 92. For example, a row controller 126A coupled between microsector 92C and microsector 92E may generate control signals used to control the operation of microsectors 92C and 92E, which are disposed in the same row region 124. Further, each LSM 130 may control two column regions 128, as opposed to an LSM 130 controlling many column regions 128.

For example, the LSM 130 may generate commands associated with read and write operations. In some cases, the LSM 130 may also instruct the row controller 126 to decompress (e.g., decode) the data associated with a command before transmitting the data to the respective microsector 92. The row controllers 126 may be regarded as endpoints that the LSM 130 and/or the controller 76 can read and/or write via the interconnect resources 46 to read data from, or write data (e.g., configuration data, test data) to, the microsectors 92. Although illustrated as containing 43 row regions 124 and 43 row controllers 126, any appropriate number of row regions 124, column regions 128, and the like may be used in the integrated circuit 12 to implement the systems and methods described herein.

Turning to an exemplary chip layout of the AR 106 (i.e., a micro-network-on-chip), FIG. 8 is a block diagram of a micro-network-on-chip (microNOC) 142 that includes a bidirectional data path 144 and multiple row controllers 126. This extensible column-oriented network structure fits within the fabric memory columns of the programmable logic 66 and enables data transaction operations such as dynamic and/or static bandwidth allocation, virtual channels, and so on. The network structure may include control circuitry disposed between portions of the programmable logic 66 or outside of it; for example, the CM 132 may be disposed outside the programmable logic 66 and/or the row controllers 126 may be disposed between portions of the programmable logic. Each microNOC 142 is formed from a bidirectional data path 144 that interconnects a column of row controllers 126 to a respective CM 132 and, where used, a respective LSM 130. Subsets of the microNOCs 142 may share a respective CM 132.

Each CM 132 may couple to a network-on-chip (NOC) 146. In some cases, the interconnect resources 46 may include and/or may form the NOC 146. The NOC 146 may be placed around a portion of the programmable logic 66 and/or around the entirety of the programmable logic 66. When used in an FPGA, the FPGA die fabric may integrate the NOC 146.
The NOC 146 may use commands sent through the microNOCs 142 to communicate with individual row controllers 126, and thus with the programmable logic 66. In some cases, the NOC 146 may include horizontal NOC circuitry and vertical NOC circuitry without being continuous as a whole. Even in these cases, however, the NOC 146 horizontally intersects each microNOC 142, and thus horizontally intersects each corresponding microsector 92 of the programmable logic 66. The programmable logic 66 may be accessed by using the row controllers 126 to interface with the corresponding microsectors 92. In addition, each row controller 126 may include memory (e.g., random-access memory (RAM), cache memory) that can be accessed before, after, or along with accesses to the associated programmable logic 66. The row controllers 126 of FIG. 8 may include the row controller 126A. Note that one or more of the microNOCs 142 may include additional circuitry not illustrated or described herein.

A CM 132 may span multiple microNOC 142 columns (e.g., one, two, three, ten, any suitable number). In this example, one CM 132 may control five microNOC 142 columns. Each CM 132 may communicate with the row controllers 126 associated with the subset of microNOCs 142 attached to that CM 132. When transmitting a command, the CM 132 may receive the command, determine which portion of the programmable logic 66 to communicate with based on the command, and determine which microNOC 142 to transmit the command to based on that portion of the programmable logic 66. Because the data path 144 is bidirectional, the CM 132 may send messages to and receive messages from the same microNOC 142 at the same time.

The CM 132 may include a master interface 148 and a slave interface 150 for receiving and/or transmitting commands. In some cases, commands and/or data may be communicated from external software or peripheral components to the respective row controllers 126 of each microNOC 142 using an advanced interface bus (AIB) 140.

To elaborate on the data handling operations, FIG. 9 is a block diagram of CMs 132 (CM 132A, CM 132B, CM 132C) coupled to respective microNOCs 142 (142A1, 142B1, 142C1, 142D1, 142A2, 142B2, 142C2, 142D2, 142A3, 142B3, 142C3, 142D3). Each CM 132 communicates with four microNOCs 142 to execute transactions. Transactions may originate from portions of the programmable logic 66, external software, peripheral components, other circuitry of the integrated circuit 12, or any suitable hardware or software component able to communicate via the NOC 146.

The CMs 132, NOC 146, and/or microNOCs 142 may be physically arranged on the integrated circuit 12 to improve data transmission. For example, a CM 132 may communicate with microNOCs 142 located relatively far away, such as microNOCs 142 located 1 millimeter (mm), 2 mm, 3 mm, and so on (e.g., any appropriate distance) from the CM 132. The CMs 132 may also be co-integrated with the microNOCs 142 and/or the NOC 146 to together form an integrated communication network. Implementing co-integrated components as a single block, instead of separating them into distinct blocks connected to each other on a high-level metal layer, enables higher-bandwidth communication between the co-integrated components, a higher degree of data integrity (e.g., quality of the signals used to communicate the data), or both.

In addition, the microNOCs 142 may connect to horizontal semi-static routing pipelines represented by semi-static routing (SR) blocks 152.
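Before turning to how the SR blocks 152 route such messages, the following C sketch illustrates, under assumed field names and an assumed address split, how a column manager might translate an access command into a message carrying column and row-controller identifiers (i.e., the claim-style flow of receiving an access command, determining the target node and target microNOC column, and generating an addressed message). The bit-field split of the logical address is purely hypothetical; the actual logical address space and packet format are not reproduced here.

```c
#include <stdint.h>

/* Hypothetical access command from a portion of the programmable
 * logic: a read or write aimed at a target node (a row controller). */
struct access_cmd {
    uint32_t global_addr;   /* logical address of the target node */
    uint8_t  is_write;
    uint32_t data;          /* payload for writes */
};

struct unoc_msg {
    uint16_t target_unoc;   /* identifier of the target microNOC column  */
    uint16_t target_row;    /* identifier of the target row controller   */
    uint8_t  command;
    uint32_t slot;          /* data slot: write data, or space for reads */
};

/* Illustrative split of a global logical address into a microNOC-column
 * identifier and a row-controller identifier; the column manager's real
 * address map is not specified in this sketch. */
static struct unoc_msg cm_build_msg(const struct access_cmd *cmd)
{
    struct unoc_msg m;
    m.target_unoc = (uint16_t)(cmd->global_addr >> 16);      /* column bits */
    m.target_row  = (uint16_t)(cmd->global_addr & 0xFFFFu);  /* row bits    */
    m.command     = cmd->is_write ? 1u /* WRITE */ : 0u /* READ */;
    m.slot        = cmd->is_write ? cmd->data : 0u;
    return m;
}
```

The resulting message would then be handed to the routing fabric, which passes it along or directs it based on the column identifier, as described next.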
The data width of each SR block 152 may be equal to (e.g., characterized by the same data width as) the data width of the data path 144 of the microNOC 142. An SR block 152 provides a non-blocking pass-through to any of the four physical edges of the block. Thus, a command transmitted to the SR block 152 on a first physical path intersecting a first side may be physically transmitted to exit any of the remaining three sides of the SR block 152. SR blocks 152 with non-blocking pass-through capability may increase the number of routing combinations that can be used when passing data between the CMs 132 and the microNOCs 142.

In some cases, a message may include an identifier so that it can be recognized. An SR block 152 may read the identifier to determine whether to direct the message to its microNOC 142 or to pass the message on to a subsequent SR block. For example, CM 132A may generate and send a message destined for the coupled row controller 126A in microNOC 142A1. The message may include an identifier indicating microNOC 142A1 as the destination of the message (and may include an identifier indicating row controller 126A as a more specific destination of the message). SR block 152A receives the message and determines that the message is not destined for its corresponding microNOC 142 (e.g., microNOC 142B1). The message may be passed downstream to SR block 152B. SR block 152B may repeat the sequence of operations performed by SR block 152A in response to receiving the message. Indeed, SR block 152B may receive the message and decide whether to pass the message to a downstream SR block (not shown) via pipeline stage 158A. In this example, however, SR block 152B determines, based on the identifier of the message, that the message is destined for its corresponding microNOC 142 (e.g., microNOC 142A1) and may decide to direct the message to microNOC 142A1. The "direct" action is indicated by the arrows showing how messages are transmitted from the routing network 154 to each microNOC 142. Note that the reverse behavior may also apply; that is, each SR block 152 may perform a similar analysis of a message identifier to determine the destination CM 132 of the message and route the message accordingly.

In some cases, a CM 132 may pass a message from a microNOC 142 other than its own to a downstream SR block 152. To pass the message, the CM 132 may read the identifier of the message indicating the destination microNOC 142 and/or row controller 126 and decide to pass the message on without further processing. The bus used to pass the message through the CM 132 is not shown and may be arranged so that it does not interrupt the communication and/or coordination of the CM 132 with its own microNOCs 142 when it receives such a message.

In some cases, each SR block 152 and pipeline stage 158 may operate according to a programmed behavior to pass or direct a message without first interpreting the message identifier. For example, an SR block 152 may operate according to a configuration that instructs the SR block 152 to pass to the left by default when a message is received from one path and to direct to the microNOC 142 by default when a message is received from a second path. The configuration may be a transmission path programmed via the kernel program 20, a configuration bitstream, or the like, or may be a hard-coded configuration. Similarly, a pipeline stage 158 may pass received messages between a first side and a second side of the pipeline stage 158.
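As a loose illustration of the pass-or-direct decision described above, the following sketch models an SR block 152 that compares a message's destination identifier against the column it serves. The class and field names are invented for this sketch and do not come from the device itself.

from dataclasses import dataclass

@dataclass
class Message:
    dest_column: int   # identifier naming the destination microNOC column
    payload: bytes

class SRBlock:
    def __init__(self, column_id, into_column, downstream=None):
        self.column_id = column_id      # the microNOC column this SR block serves
        self.into_column = into_column  # callable that injects a message into the column
        self.downstream = downstream    # next SR block in the horizontal run, if any

    def route(self, msg):
        # Non-blocking pass-through: a message either turns into this SR
        # block's column ("direct") or continues horizontally ("pass").
        if msg.dest_column == self.column_id:
            self.into_column(msg)
        elif self.downstream is not None:
            self.downstream.route(msg)

column_a1 = []
sr_b = SRBlock(column_id=1, into_column=column_a1.append)
sr_a = SRBlock(column_id=0, into_column=lambda m: None, downstream=sr_b)
sr_a.route(Message(dest_column=1, payload=b"cfg"))   # passed by sr_a, directed by sr_b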
Each pipeline stage 158 may lack inputs or outputs on its third and fourth sides, or may simply leave any such inputs or outputs unused for transmitting messages.

The connections between a CM 132 and its microNOCs 142 may follow a 1:1 ratio, and the number of SR blocks 152 and/or the number of pipeline stages 158 may be scaled during design to keep that ratio constant. Take, for example, a CM 132 controlling the operation of six microNOCs 142. The CM 132 may have six connections (e.g., one connection to each of the six microNOCs 142). Each of the six connections may have a suitable number of pipeline stages 158 and/or SR blocks 152 for transmitting messages between the CM 132 and the respective microNOC 142. The number of pipeline stages 158 and/or SR blocks 152 may be determined based on the physical footprint that each message traverses (e.g., the size of the routing network 154), the logical footprint that each message traverses (e.g., the number of clock delays applied to each message), logical design considerations (e.g., the number of times a particular message is inverted and must be inverted again to return to its original value), and so on.

In some devices, a CM 132 may transmit a message to microNOCs 142 corresponding to different subsets of the CMs 132 via the pipeline stages 158 and/or the SR blocks 152. For example, CM 132A may transmit a message to microNOC 142B2 via the pipeline stages 158 and/or the SR blocks 152. In some cases, messages pass through a CM 132 to cross boundaries within the routing network. Each CM 132 can also operate in a pass-through mode in which a received message is transmitted out the other side of the CM 132. When passing a message, the CM 132 may or may not inspect the header of the message to determine to which of its pipeline stages 158 the message should be output. In some cases, the CM 132 may leave the inspection to the routing network 154, which may transmit the message to the correct microNOC 142.

Further, in some cases, a CM 132 may not have a pass-through mode. In these cases, the CM 132 uses the NOC 146 to transmit messages to other subsets of microNOCs 142. For example, since microNOC 142D3 is outside the subset of microNOCs 142 accessible to CM 132B via a horizontal run of SR blocks 152, CM 132B may use the NOC 146 to transmit a message to microNOC 142D3. As generally indicated by reference numeral 156, microNOC 142A3 can access CM 132B via a horizontal run of SR blocks 152, while microNOC 142D3 cannot.

Each horizontal run of SR blocks 152 may correspond to a respective microNOC 142. However, any number of SR blocks 152 and corresponding microNOCs 142 may interconnect between the CMs 132. The SR blocks 152 and the pipeline stages 158 may add latency to the message transmission path and may therefore be used to equalize timing between columns or parts of the integrated circuit 12.

More specifically, FIG. 10 is a block diagram of a respective CM 132 (referred to as CM 132A) coupled to microNOCs 142 (142A, 142B, 142C, 142D). CM 132A manages transactions between the NOC 146 and one or more of the microNOCs 142. Each CM 132 may use different routes to communicate with circuits corresponding to different microNOCs 142.

The CM 132A may include an interface circuit 170 for receiving messages for transactions and a data converter 172 for changing the format of commands prior to transmission between the CM 132A and the NOC 146 or the microNOCs 142.
The CM 132 may generate a message interpretable by a row controller 126 from commands received on the slave interface 150 and/or the master interface 148. Each row controller 126 may update bits in the message after completing the transaction dictated by the message. In some cases, new commands referencing or directed to the same position are delayed in the command queue 174, and are therefore delayed from being written to that position, until the current command completes.

The interface circuit 170 may include one or more command queues 174 and one or more state machines 176. The interface circuit 170 may manage the transactions specified for the subset of microNOCs 142A, 142B, 142C, and 142D corresponding to the CM 132. The command queue 174 may store commands received on the slave interface 150 and/or commands transmitted from the slave interface 150 in one or more queues. The command queue 174 may queue the actual command and/or an indication of the command, which may indicate where the actual command can be obtained. A command may initiate and control a microNOC 142 transaction between the CM 132A, the microNOC 142, and a data endpoint (e.g., a row controller 126, the programmable logic 66, another microNOC 142, a circuit of the integrated circuit 12, an AXI interface endpoint).

The state machines 176 may include a ratio of one state machine per concurrent traffic thread of the microNOCs 142. If each microNOC 142 is formed from the same type and number of components, the command queue 174 and the state machines 176 may be provisioned with the same number of state machines for each microNOC 142.

The command queue 174 may contain a set of registers in the address space of the slave interface 150 (e.g., a slave bridge). The command queue 174 may use command pointers within this address space. A command pointer may be incremented to the next command (e.g., the next-queued command) when the current command is issued. Both the command queue 174 and the state machines 176 may reference the command pointer. A state machine 176 may use the command pointer to sequentially execute the commands in the command queue 174.

The state machines 176 may include registers in the slave interface 150 (e.g., slave register space) that support "go", "running", and "accept commands" operations. During a "go" operation, one or more registers may hold data for causing the state machines 176 to process the commands stored in the command queue 174. During a "running" operation, one or more registers may hold data indicating that one or more state machines 176 are currently processing commands. During an "accept commands" operation, one or more registers may hold data enabling the state machines 176 to write to the command queue 174. The "accept commands" operation may be used to gracefully (e.g., not abruptly) stop commands currently being sent. To do so, the state machines 176 continue to allow new commands to be written to the command queue 174 rather than halting a command mid-execution, thereby reducing the possibility that a command is stopped while it is running. By reducing the likelihood that a command is stopped during execution, and thus the likelihood that stale (or leftover) data from a stopped execution is left in a path or circuit, data-discard routing can be excluded from parts of the circuit design, making the routing less complex.

The data converter 172 and/or the NOC 146 may read from and/or write to the interface circuit 170, thereby allowing a message exchange to occur between the NOC 146 and the microNOCs 142. The interface circuit 170 may generate a message from a received command using the command queue 174 and/or the state machines 176.
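The command-pointer behavior described above might be modeled as follows. This is a minimal sketch assuming a software-style queue; the names (CommandQueue, StateMachine, accept_commands) are invented for illustration and do not describe the actual register layout.

from collections import deque

class CommandQueue:
    """Stands in for command queue 174; the pointer is the head of the deque."""
    def __init__(self):
        self.commands = deque()

class StateMachine:
    """Stands in for a state machine 176 referencing the shared command pointer."""
    def __init__(self, queue):
        self.queue = queue
        self.running = False
        self.accept_commands = True   # graceful stop: keep accepting new commands

    def submit(self, cmd):
        if self.accept_commands:
            self.queue.commands.append(cmd)

    def go(self):
        # "go": process queued commands in order; the pointer advances as each
        # command is issued, and no command is interrupted mid-execution.
        while self.queue.commands:
            self.running = True
            cmd = self.queue.commands.popleft()
            self.execute(cmd)
        self.running = False

    def execute(self, cmd):
        print("issuing transaction:", cmd)   # placeholder for the real issue step

sm = StateMachine(CommandQueue())
sm.submit({"op": "read", "addr": 0x10})
sm.submit({"op": "write", "addr": 0x11})
sm.go()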
A message may instruct the microNOC 142 (e.g., the data path 144, a row controller 126) regarding a transaction. However, the received command may be in a different format, such as a different addressing scheme or communication protocol than that used by the microNOC 142. The data converter 172 may convert the message between the first format and a second format interpretable by the components of the microNOC 142.

Indeed, the data converter 172 may convert commands from a format used for memory transactions (e.g., DDR commands) to a format used for programmable logic transactions (e.g., microNOC commands). The data converter 172 may use a lookup table or other suitable relational storage circuit to do this. The data converter 172 may determine whether a command involves a transaction to a single microNOC 142 column or a transaction to multiple microNOC 142 columns. If the command indicates a transaction spanning multiple columns, the data converter 172 may duplicate and modify the generated message for use in the transaction across those columns. The data converter 172 may select one or more of the microNOC columns on which to transmit the generated message. After selecting a column, the data converter 172 may embed the identifier of a particular row controller 126 in the selected column into the message. A subset of the row controllers 126 may access the identifier to determine whether a message bearing the identifier is to be delivered to that row controller 126. If the transaction spans two or more microNOC 142 columns, the data converter 172 may sort or pack the data from each column into word positions suitable for access by DDR memory and/or other peripherals. The sorting or packing may be performed by the data converter 172 according to the identifiers. All or part of these operations may be performed on outgoing messages in the same order or in the reverse order; the data converter 172 is thus also a bidirectional circuit.

These generation and conversion operations may allow a direct interface connection between memory peripherals and programmable logic. The ability to connect a direct interface between a data or command source and an endpoint can reduce operational complexity when moving the large amounts of data that arise in machine learning applications, artificial intelligence applications, Internet of Things applications, and the like. Keep in mind that memory peripherals, the programmable logic, or other components of the integrated circuit 12 or the system 10 may each act as a data or command source or as an endpoint based on the type of operation the components of the system 10 and/or the integrated circuit 12 are performing. For example, the endpoint of the data changes based on whether a particular transaction is a read or a write.

The data converter 172 may include a plurality of concurrently operating systems to allow overlapping conversion operations. Further parallel operation may occur when additional state machines are included in the interface circuit 170. If the state machines 176 include a first quantity of state machines, the same first quantity of outstanding transactions may be queued at once and executed at least partially in parallel by running the state machines at the same time. Further, since each microNOC 142 is bidirectional, multiple commands for one or more transactions may be processed at least partially simultaneously on the same microNOC 142 or on different microNOCs 142.
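The duplicate-and-modify step for multi-column transactions might look like the following sketch. The command fields, helper name, and per-column records are assumptions made for illustration, not the device's actual message format.

def to_micro_noc_messages(command, columns):
    # One message per spanned column; a single-column transaction is the
    # degenerate case where len(columns) == 1.
    messages = []
    for column in columns:
        messages.append({
            "tid": column["tid"],                 # identifies the target column
            "mid": column["row_controller_mid"],  # embedded row controller identifier
            "op": command["op"],                  # e.g., "read" or "write"
            "addr": command["addr"],
        })
    return messages

cmd = {"op": "write", "addr": 0x40}
cols = [{"tid": 0, "row_controller_mid": 5}, {"tid": 1, "row_controller_mid": 5}]
print(to_micro_noc_messages(cmd, cols))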
The ability of the microNOCs 142 to carry multiple commands simultaneously, as described above, may allow all or nearly all of the bandwidth of each microNOC 142 to be utilized across one or more transactions.

The data converter 172 may communicate with one or more message buffers 178 after a message is generated, or before a message is converted for output to the NOC 146. A message buffer 178 holds message data waiting to be transferred from the corresponding microNOC 142 to an interface bridge (e.g., an AXI bridge), or from the interface bridge to the corresponding microNOC 142.

Communication to and/or from the message buffers 178 may be at least partially managed by one or more message managers 180. One message manager 180 may correspond to one or more message buffers 178 based on the configuration of each circuit, and vice versa. A 1:1 assignment is shown in FIG. 10, but other arrangements may be used.

In the simplest mode, the message manager 180 may issue messages as a function of the commands residing in the command queue 174. In some cases, the message manager 180 may allocate the bandwidth referenced when reading or writing message data from the message buffer 178. A message scheduler (not shown) of the message manager 180 may operate based on a stored configuration. Configuration data received via the slave interface 150 may be used to tune the stored configuration and, therefore, to change the behavior of the message scheduler. The message manager 180 may determine the order in which messages are issued to the microNOC 142 based on the position of a message's endpoint (i.e., the physical placement of the target row controller 126 in the microNOC 142 column) and/or based on the relative priority of the message (e.g., as determined by the message scheduler).

The message manager 180 may use the message scheduler to issue different types of messages at different rates. The message manager 180 refers to its stored configuration to determine the different rates assigned to different microNOCs 142, different data sources (e.g., on-chip memory, off-chip memory), and/or different subsets of the row controllers 126. Since each row controller 126 is assigned to access a different part of the programmable logic 66, defining access rates to different row controllers 126 and/or microNOCs 142 may be used to throttle or adjust the data transactions performed by different parts of the programmable logic 66 relative to one another. This relative rate allocation may allow faster rates to be allocated to programmable logic 66 associated with higher-priority tasks, or to programmable logic 66 associated with customers who have agreed to a faster rate.

Each microNOC 142 may have one or more physical channels for transferring data, provided that the total physical width of the physical channels fits within the physical width between the adjacent parts of the programmable logic 66 and the adjacent column of row controllers 126. The message buffer 178 and the message manager 180 may be replicated, with routing to support the replication of these components, such as routing to the data converter 172 or any other suitable component. Duplicating these components may enable increased transaction performance (e.g., lower latency, more parallel operation) because the multiple physical channels may be used in a time-division multiplexing scheme.
The CM 132 may include one message buffer 178 for each microNOC 142 within its subset of microNOCs 142.

The message manager 180 may communicate messages between the microNOCs 142 and the NOC 146. The message manager 180 may monitor bandwidth levels and/or expected bandwidth allocations to determine whether the message buffer 178 has room for the next message to be scheduled and/or to determine the state of a group transaction (e.g., a "ready" state or a "completed" state). Customer agreements, service level agreements, and the like may be stored as accessible data and used to define the relative allocation of transmission rates and/or bandwidth. The customer agreements and/or service level agreements may include performance metrics agreed to by customers who subscribe to different quality of service (QoS) parameters. A QoS parameter may include the share of total bandwidth allocated to that customer in each scheduling cycle and/or the customer's transmission rate relative to other customers' transmission rates, such that messages corresponding to that customer's application may be given higher priority than those of other customers' applications.

To further elaborate on these transactions and addressing methods, FIG. 11 is a diagram of the logical address space 190 used by the CMs 132. Each CM 132 may use its own address for each transaction. Addressing may be performed at various levels. Devices and/or applications of the integrated circuit 12 may use the same addressing scheme used to reference the row controllers 126 to reference the CMs 132. Indeed, the integrated circuit 12 may use a CM address logical region 192, a column address logical region 194, and/or a row controller logical region 196. The CM address logical region 192 contains addresses used to address each CM 132. The column address logical region 194 contains addresses used to address each microNOC 142. The row controller logical region 196 contains addresses used to address each row controller 126. Therefore, to reference a particular row controller 126, a particular CM 132 is identified by an index in the CM address logical region 192, a particular microNOC 142 is identified by an index in the column address logical region 194, and, from there, the particular row controller 126 may be identified by an index within the row controller logical region 196.

Devices of the integrated circuit may refer to a base address and/or an index when addressing a group of devices or a particular device within the regions 192, 194, 196. Indeed, an index may be used when referencing an address that has an offset from a base address. Various base addresses 198 (198A, 198B, 198C) are visualized in FIG. 11. If the base address is 0, a device may use indexing to reference a specific address different from the base address (e.g., 0 + 3 for an index of 3 from the base address) by adding the offset to the base address. The address resulting from incrementing the base address value by the offset may refer to the component at the indexed position. In general, this may be referred to as "indexing", that is, the process of indirectly accessing an address value by using a combination of an index offset and a base address.

Devices may also use logical addresses to directly address components at physical addresses. A direct address may use a specific logical address without an offset from a base address.
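The base-plus-index arithmetic described above reduces to the following toy example. The region base values are invented for illustration and are not the device's actual address map.

CM_BASE = 0x1000       # hypothetical base address 198 of the CM logical region 192
COLUMN_BASE = 0x0100   # hypothetical base address of the column logical region 194

def indexed_address(base, index):
    # "Indexing": indirectly reach a component as base plus offset,
    # e.g., base 0 plus index 3 yields address 0 + 3 = 3.
    return base + index

cm_addr = indexed_address(CM_BASE, 2)       # third CM 132 in region 192
col_addr = indexed_address(COLUMN_BASE, 5)  # sixth microNOC 142 in region 194
row_addr = indexed_address(0, 17)           # direct address: no base offset
print(hex(cm_addr), hex(col_addr), row_addr)   # 0x1002 0x105 17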
The logical address space 190 may be defined independently of the physical placement of each row controller 126. However, because the physical placement of each CM 132 can change the relative addressing used to access that CM 132, the logical address space 190 need not be defined independently of the NOC's logical-address-to-physical-address translation. The component address corresponding to each CM 132 may exist independently of the NOC logical-to-physical address translation. A CM 132 may be reached by translating a logical address to a physical address on the NOC 146 master bridge from which commands are issued.

Addressing based on the logical address space 190 may provide a way to access each row controller 126 directly. Direct access to each row controller 126 may in turn allow direct access to the programmable logic 66 corresponding to that row controller 126. Indeed, a command for base address 198C, processed according to base address 198C, may be passed to a row controller 126 via the relevant part of the data path 144. The addressed row controller 126 and the portion of the data path 144 communicably coupled to it are represented at least by a node 200, though keep in mind that the node 200 may also represent other circuits.

A simple addressing method may allow data packing to occur across one or more microNOC 142 columns, allowing many different combinations for storing and accessing data within the programmable logic 66. For example, FIG. 12 is a diagram of one-column data packing and FIG. 13 is a diagram of four-column data packing. A striping method may be used to read and write data from each row controller 126 in the "correct" or expected logical order, even when it is stored or loaded out of order.

FIGS. 12 and 13 show 256-bit words output using a 256-bit-wide interface (e.g., to an AXI bridge or other suitable output interface of the CM 132) after conversion from a 1024-bit word in RTL. Traffic for a message 216 traveling on a microNOC 142 moves up or down that microNOC 142. A message 216 has a header and a payload. The payload as a whole is referred to as data, and any empty portion of the payload is a slot, each capable of storing data. The messages 216 shown in FIGS. 12 and 13 are initially shown having empty slots; as a message 216 traverses its respective microNOC 142, some of the slots are filled with data. It should be understood that the payload is one example of the data contained in a message; indeed, the payload may be appended after the transaction command bits and/or the header of the message 216.

It should also be understood that the data sizes used herein are exemplary and that any suitable ratio of data interface size to storage size may be used. Both FIGS. 12 and 13 show how data stored in an order different from the "correct" or assumed logical order remains accessible in logical order when output. For ease of discussion, FIGS. 12 and 13 are discussed together herein. Note that each node 200 represents its own row controller 126, the corresponding portion of the data path 144, and the corresponding microsector 92 (e.g., the interface between the programmable logic 66 and the row controller 126).

The CM 132 may communicate with the microNOC 142 using a streaming data protocol. Messages following the streaming data protocol are communicated one at a time on the data path 144. Each message 216 may be inspected by each row controller 126.
When a message 216 traverses the portion of the data path 144 corresponding to a row controller 126, that row controller inspects the header of the message 216 to determine whether the message 216 is destined for itself. Indeed, the row controller 126 reads the identifier of the message 216. The row controller 126 may receive the message on the data path 144 from the CM 132 or from an upstream portion of the data path 144 (the arrows in FIG. 12 indicate the direction in which messages 216 move on the data path 144 of the microNOC 142, with the "E" data 214 located upstream of the "C" data 214 and downstream of the "G" data 214). Note that the "E" data 214 corresponds to a microsector 92 located at a different position than the microsector 92 corresponding to the "C" data 214 (e.g., at a different position relative to the corresponding microNOC 142 node). If the identifier identifies another row controller 126 as the target endpoint, the row controller 126 uses the data path 144 to pass the message 216 downstream.

If the identifier instead points to the row controller 126 itself as the target endpoint, the row controller 126 may operate according to the transaction command indicated in the message and according to the configuration programmed into the row controller 126 (e.g., the message 216, or at least a portion of the data in the message 216, may be stored according to a configuration identified by a previously received message). If the transaction command indicates a read operation, the row controller 126 may read data from its memory or its microsector 92 and write the data into a slot 218 of the message 216. The row controller 126 may then return the message to the data path 144.

Because the data path 144 is operated according to the streaming data protocol, a message 216 may be transmitted to the last row controller 126 in the microNOC 142 column before being returned to the CM 132. In some cases, a row controller 126 obtains its data in response to the first receipt of the message 216 (e.g., during a downward transmission on the data path 144) and writes the data into a slot 218 of the message 216 in response to the second receipt of the message 216 (e.g., during an upward transmission on the data path 144).

The data read from each row controller 126 may be stored in a buffer 210 in the "correct" or expected logical order. The CM 132 may use the buffer to pack or parse the data of a message 216. The buffer 210 may be of any size suitable to reduce the start/stop latency arising from handling back pressure, such as back pressure between the registers 212 and the buffer 210, or between the microNOC 142 and the buffer 210. The data 214 are stored in the buffer 210 in the acquisition order to be followed when they are output to the register 212. The register 212 may be filled with data over time (e.g., represented by a "0" subscript for the first data and a "1" subscript for the next data). The register 212 may be bidirectional, and data may be read from or written to the register 212 based on the direction of data flow relative to the CM 132. Note that the message buffer 178 may include the buffer 210 and/or the register 212, or the buffer 210 and/or the register 212 may be located elsewhere in the system where the CM 132 can access them.
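Collapsed into software terms, the inspect-and-forward behavior of a row controller 126 described above might look like the sketch below, assuming dictionary-shaped messages and a local memory; every name here is illustrative rather than specified.

def handle_message(row_controller, msg):
    # Header inspection: is this message addressed to us?
    if msg["dest_id"] != row_controller["id"]:
        return "pass_downstream"       # forward on the data path 144
    if msg["op"] == "read":
        # Fill a slot with data read from local memory / the microsector.
        msg["slot"] = row_controller["memory"].get(msg["addr"])
    elif msg["op"] == "write":
        row_controller["memory"][msg["addr"]] = msg["slot"]
    return "return_to_path"            # message continues along the column

rc = {"id": 7, "memory": {0x2: "abc"}}
msg = {"dest_id": 7, "op": "read", "addr": 0x2, "slot": None}
print(handle_message(rc, msg), msg["slot"])   # return_to_path abc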
The data 214 may be stored in the programmable logic 66 in a different order even though the data are read from the register 212 in the "correct" or expected logical order. For example, in FIG. 13, the "D" data 214A is stored at a bit position equal to 1 in the message 216 but is the first data input to the message 216. Had the data been stored in the "correct" or expected logical order, the "D" data 214A would have been the second data stored in the message 216. A stripe identifier (SID) associated with each data 214 indicates to each row controller 126 which slot to insert the data 214 into, even when the data are stored out of order. Since the "D" data 214A corresponds to SID = 1, the row controller at node 200A knows to write the "D" data 214A into the second slot 218. Data that are out of order but are placed into the message 216 using these stripe-identifier methods may thus be output in the "correct" or expected logical order in the message exiting the microNOC 142. FIG. 13 shows the SIDs, and keep in mind that a similar process may be used in the system of FIG. 12 to read the data 214 via the microNOC 142 (as shown by the SID on each data 214). The SID may be defined per traffic identifier (TID), where a TID is used to identify each microNOC 142. The TID may correspond to the logical address of each microNOC 142 (e.g., to guide the routing network 154 regarding where to direct a message 216).

More specifically, the CM 132 may use the SID to identify a slot 218 location within each message 216 transmitted over the data path 144. The identified slot 218 indicates to a row controller 126 into which of the slots 218 its read data is to be written. The message 216 and its slots return to the CM 132 with or without data. The CM 132 may use a message 216 carrying data from the row controllers 126 in its processing operations. The stripe identifiers allow messages moving from top to bottom (or vice versa) in a microNOC 142 column, and the data within those messages, to be organized by logical order rather than physical order. This reduces or completely eliminates reorder buffering of the column data before output, and may therefore yield a circuit design that is more efficient and less costly than approaches that use reorder buffers.

FIG. 12 differs from FIG. 13 in that the data of FIG. 13 are accessed using the data striping method. FIG. 12 shows the word-count bandwidth that results when all data of the same group are stored in the same microNOC 142 column. Indeed, the data 214 shown in FIG. 12 belong to the same group of data and are to be loaded into the same buffer 210 before being output to the register 212. In contrast, FIG. 13 shows the process of moving the data 214 to the register 212 when the same group of data is stored across different microNOC 142 columns using a data striping process. Here, the data 214 are written to the programmable logic 66 based on the read order for the data 214. The read order defines the order that a device follows when reading the data 214 from the buffer 210. The read-order arrow 220 indicates an exemplary read order. The read-order arrow 220 emphasizes how data 214 "A0" is read before the second data 214 "B0", and how the last data 214 "H0" corresponds to the eighth read position. In the data striping process, software, the CM 132, the compiler 16, the design software 14, the host 18, and so on may consider this read order when deciding into which of the microNOCs 142 to write the data 214. For example, data 214 "A0" and data 214 "B0" are stored in memory associated with different microNOCs 142 so that one read does not delay the other.
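A toy rendering of the stripe-identifier placement just described: the SID names the slot, not the arrival order, so out-of-order writes still produce an in-order exit message. The slot count and field layout are invented for the sketch.

def place_by_sid(slots, sid, data):
    slots[sid] = data        # the SID, not arrival time, selects the slot 218
    return slots

slots = [None] * 4
place_by_sid(slots, 1, "D0")   # "D" data arrives first but carries SID = 1
place_by_sid(slots, 0, "C0")   # "C" data arrives second with SID = 0
print(slots)                   # ['C0', 'D0', None, None] - logical order preserved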
The operation shown in FIG. 13 may be more pipelined than the operation shown in FIG. 12. In general, pipelined operations can improve throughput and reduce execution time. Because the operation shown in FIG. 13 enables pipelined read operations based on striping the data in storage, an integrated circuit 12 using such operations may obtain a similar performance improvement.

These striping methods may make it possible to extend the word width to any width formed from the native width of a row controller 126 (e.g., 32-40 bits, or any basic bit width) and the concatenation of the data widths of multiple row controllers 126 in the same or different microNOCs 142. FIG. 14 shows an example of this width extension.

FIG. 14 is a diagram of memory space indexing referenced in a register transfer level (RTL) design file and the related data striping implemented by the CM 132. In this example, data from two microNOCs 142, packed as shown for the memory 236, has an effective data width of 128 bits in the RTL using four row controllers 126 (e.g., four nodes 200). As in FIGS. 12 and 13, the logical sequence used in FIG. 14 to retrieve data from each row controller 126 differs from the physical sequence within and between the microNOCs 142. For example, data 214 "B" is stored in a different column than data 214 "A", after data 214 "D". Note that a data 214 may span multiple nodes 200 or may be stored entirely within one node 200. The RTL shows the concatenation of parts of the data 214 (e.g., code 240) and associates the parts of the data 214 with a logical read order. Concatenation may also allow larger words to be split into smaller words for storage via the row controllers 126, and the data may be physically stored across the row controllers 126 in any suitable order independent of the logical read order.

The data loaded into or unloaded from a node 200 may come from off-chip memory such as the memory 236. The memory 236 may include any suitable type of memory, such as the memory 64, double data rate memory, read-only memory, read/write memory, high bandwidth memory, and the like. In some cases, the memory 236 may instead be memory located in another component or device rather than a dedicated memory device.

Each node 200 of a microNOC 142 may have an identifier (MID) assigned to it. Each node 200 of a microNOC 142 has its own MID, but nodes 200 in different microNOC 142 columns may share MIDs. A MID may be assigned to a single row controller 126 or to a group of row controllers 126. The memory controller 238 may use a MID to reference a single row controller 126. For example, including the MID of a target row controller 126 in a message may indicate the target row controller 126, where each row controller 126 has a distinct MID. The memory controller 238 may use a MID that references a group of row controllers 126 to form words across different parts of the programmable logic 66, such as for load/unload batch operations, first-in first-out (FIFO) streaming operations, and so on. When synchronizing, the memory controller 238 and/or the CM 132 may use individually addressed row controllers 126 to perform the operations. For example, if the memory controllers 238 synchronize the row controllers 126 as a group but access them individually via their respective MIDs, the details of the synchronization may be provided during the design phase or by input through a human-machine interface.
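A minimal sketch of the width extension above, assuming a 128-bit word split into four 32-bit lanes for storage via four hypothetical row controllers and then reassembled in logical read order. The lane width matches the example above, but the helpers are invented.

def split_wide_word(word, lanes=4, lane_bits=32):
    mask = (1 << lane_bits) - 1
    return [(word >> (lane_bits * i)) & mask for i in range(lanes)]

def join_wide_word(parts, lane_bits=32):
    # Reassemble in logical order, regardless of which physical column
    # (or node 200) stored each part.
    word = 0
    for i, part in enumerate(parts):
        word |= part << (lane_bits * i)
    return word

w = 0x0123456789ABCDEF0011223344556677   # a 128-bit "wide word"
assert join_wide_word(split_wide_word(w)) == w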
Whether the memory controller 238 stores data in this way and/or uses synchronization to form a "wide word" (i.e., stored data that spans multiple storage locations within different nodes 200) may be indicated to each CM 132.

The memory controller 238 may use these batch or batch-like operations to acquire or store data corresponding to a wide word. When data moves between the multiple nodes 200 forming a wide word, the CM 132 may receive a confirmation signal (or indication) when the operation is complete, signifying that the received data represents the complete wide word and is therefore final. The CM 132 receives data from the microNOC 142 in a way that maintains synchronization between the data from the different nodes 200 forming the wide word. For example, in individually addressed operations, read and/or write operations may toggle a synchronization signal at the node 200. This toggling may be used to synchronize the completion of each read or write operation at the system level across the different row controllers 126. The memory controller 238 and/or the CM 132 may perform load/unload batch operations, FIFO batch operations, or system-level synchronization operations between individually addressed row controllers 126 to write or read wide words.

One or more row controllers 126 may be referenced using the same MID. Indeed, the CM 132 may use one MID to call a group of row controllers 126. The CM 132 may use the MID to reference the group of row controllers 126 with a single command, such as when performing a batch operation, as sketched after this discussion.

Exemplary batch operations include load and/or unload operations in which a relatively large amount of data is transmitted between the programmable logic 66 corresponding to a group of row controllers 126 and a device communicating with one or more CMs 132. When performing a load or unload operation, the CM 132 may instruct each row controller 126 in the target group to repeat the same operation. Load and unload operations may also use synchronization-signal toggles to synchronize the read or write operations at the system level. A command completion response from the row controllers, returned to the triggering CM 132, may indicate that the group's associated action has been completed. The command completion response may be generated by the last row controller 126 of the group to perform the action, indicating that the operation of the group is complete. The CM 132 may use the NOC 146 to transmit the command completion response to the triggering device (e.g., the device that requested execution of the batch command).

The CM 132 may use a shared MID to access a group of row controllers 126 when performing FIFO batch operations. This mode requires the associated row controllers 126 in the group to monitor and control ready/valid signals to keep themselves in synchronization. For read operations, this means tracking a shared ready signal. The ready signal may support a ready wait time before de-assertion; signaling the de-assertion can help pipeline the ready signal and increase the span of row controllers 126 that can be synchronized. For write operations, the microNOC 142 determines that each of the row controllers 126 in the group is ready to transfer a sufficient, specific amount of data. Since individual row controllers 126 become ready at different times, verifying that each is ready for the data transfer reduces the possibility of the batch operation falling out of synchronization in practice. The microNOC 142 may therefore transfer the data to the CM 132 while keeping the row controllers 126 logically synchronized with one another during the FIFO batch operation.
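The following sketch compresses the group-MID batch pattern above into software terms: each member of the group repeats the same operation, toggles a synchronization signal, and the last member produces the completion response. The structure is assumed for illustration only.

def run_batch(group_mid, row_controllers, operation):
    group = [rc for rc in row_controllers if rc["mid"] == group_mid]
    for index, rc in enumerate(group):
        operation(rc)                          # each member repeats the same action
        rc["sync"] = rc.get("sync", 0) ^ 1     # toggle the sync signal on completion
        if index == len(group) - 1:            # last member reports for the group
            return {"mid": group_mid, "status": "complete"}

controllers = [{"mid": 4}, {"mid": 4}, {"mid": 9}]
print(run_batch(4, controllers, operation=lambda rc: None))   # {'mid': 4, 'status': 'complete'}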
In some cases, the latency savings from data striping may compensate for this intentional delay in moving data during FIFO batch write operations, making the latency difference negligible.

To recap, each row controller 126 is associated with a portion of the programmable logic 66 and a portion of the data path 144 through its association with a node 200. Each node 200 in a microNOC 142 is assigned a different MID, and different microNOCs 142 may share the same range of MIDs. This allows references to the physical positions of nodes 200 at the same relative placement within a microNOC 142 column to be the same between microNOCs 142 (since the geometry can be held constant for the design). The microNOCs 142 are distinguished from one another using traffic identifiers (TIDs). Packet start and packet end codes may be used to distinguish the MID, TID, SID, header, and payload within each message. In some cases, the packet start and end codes only delimit the start and stop of a message, with the MID, TID, SID, header, and payload each having a consistently known size for every transaction. These codes may be formed from data values different from the values expected to be stored in the payload. A message (i.e., traffic) designated for transmission on a target microNOC 142 contains the TID of that target microNOC 142. For example, the SR blocks 152 of FIGS. 9 and 10 may refer to the TID to determine whether to direct or pass a message. The TID may be used to allocate bandwidth for each node 200 when the bandwidth or QoS metric is not directly indicated by the received command (i.e., when a particular microNOC 142 is designated with lower bandwidth or lower priority) and/or to manage QoS metrics. TIDs may also be used to define striping sets within and/or between columns and to perform the data striping described above.

Indeed, the MID may be used for non-striped data write operations to a node 200, or to identify which node 200 returned read data after a read operation. The TID may be used for data read operations from the nodes and for striped data write operations. For example, the message 216 of FIG. 12 may include a write command and a MID. When the MID of the message 216 matches the MID of a node 200 (e.g., the MID assigned to its row controller 126), the node 200 receives the data stored in the message 216 and writes that data to its device. The row controller 126 may then remove the data from the message 216, leaving a slot 218. For a non-striped read operation, if the TID of the message 216 matches the TID assigned to the microNOC 142 containing a node 200, the node 200 (i.e., using its row controller 126) may write its data into a slot 218 of the message 216 and mark the message 216 by writing its MID into the header of the message 216. For striped read and/or striped write operations, however, the TID of the message 216 identifies which nodes 200 should respond to the message, and the SID defines which slot in the message 216 belongs to which node 200. Following these processes may enable the CM 132 to honor the bandwidth allocations and QoS metrics on the microNOC 142 by generating sets of messages that allocate resources as desired and/or as defined by a QoS agreement.
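The MID/TID/SID rules above can be condensed into a small decision function, sketched below with assumed message fields; it is a summary aid, not a specified interface.

def node_should_respond(node, msg):
    if msg["op"] == "write" and not msg.get("striped"):
        return msg["mid"] == node["mid"]        # non-striped write: match the MID
    if msg["op"] == "read" and not msg.get("striped"):
        return msg["tid"] == node["tid"]        # non-striped read: match the TID
    # Striped read/write: the TID selects the column, the SID selects the slot.
    return msg["tid"] == node["tid"] and msg["sid"] in node["owned_sids"]

node = {"mid": 3, "tid": 1, "owned_sids": {1}}
print(node_should_respond(node, {"op": "read", "striped": True, "tid": 1, "sid": 1}))  # True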
A user may instantiate a row controller 126 directly in RTL, or the instantiation of the row controller 126 may be inferred through a memory configuration available in the RTL, such as a MID (or TID) that references a subset of the row controllers 126 (and thus of the nodes 200). The RTL may include RTL generated from a higher-level language such as OPENCL, or from a language used during high-level synthesis (HLS). This memory may be considered a logical memory within the programmable logic 66 in the sense that it has not yet been placed at a physical location. The physical node 200 ultimately used for the row controller 126 is a choice made by the design software 14 and/or the compiler 16 during compilation when generating the configuration bitstream.

Turning now to further details of the direct addressing operation, FIG. 15 is a diagram of a first exemplary memory operation. In this example, the target node 200 is addressed by a command generated by a portion of the programmable logic 66A different from the portion corresponding to the target node 200. A command from the referencing device may address the target node 200, or one or more other target nodes 200, in a similar manner. Further, although these operations are described as being performed by the CM 132, any suitable processor, such as the memory controller 238 and/or another CM 132, may perform some or all of the operations.

As mentioned above, each enabled row controller 126 (e.g., corresponding to each enabled node 200) has an address in the global address space of the integrated circuit 12. The address may include, or be associated with, a combination of the MID and the TID that identifies the placement of the row controller 126 within a particular microNOC 142.

Read and/or write operations may follow a process that begins with a device issuing a read or write command on any interface bridge (e.g., an AXI bridge). In this example, the programmable logic 66A produces a read or write command (e.g., operation "1"). To facilitate disclosure, read and/or write commands are generalized as "access commands". The CM 132 may receive the access command issued via the NOC 146 and perform the action specified by the issued access command (e.g., operation "2"). The generated message may include a TID and a MID to guide its transmission through the routing network 154 to the target node 200. Once the message is on the microNOC 142 (e.g., transmitted over the data path 144), the addressed node 200 identifies the message as its own and, based on the type of transaction indicated by the access command, acquires data from the message 216 or writes data to the message 216. The addressed node 200 may return the modified message to the microNOC 142. When the CM 132 receives the message or an acknowledgment signal, the CM 132 returns the transaction result to the slave interface 248 via the NOC 146 (e.g., operation "3"). The slave interface 248 passes the transaction result to the programmable logic 66, and finally the programmable logic 66 transmits the transaction result, a transaction completion message or confirmation, or both to the requesting master entity, programmable logic 66A (e.g., operation "4"). For example, if the requesting master entity is an AXI master, the transaction completion message or confirmation returned to the requesting master entity may include, or be, an AXI transaction completion message.

The direct addressing operation may optionally use a visible handshake between the CM 132 and the node 200 to signal the different stages of the direct addressing operation (e.g., a ready signal, an acknowledgment signal).
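Reduced to its ordering, the FIG. 15 flow might be expressed as the sketch below. Every callable is a stand-in supplied by the caller; nothing here names a real interface of the device.

def direct_access(cmd, build_message, transact, deliver_result, notify_master):
    # cmd is the access command issued in operation "1".
    msg = build_message(cmd)        # (2) CM 132 forms a TID/MID-guided message
    result = transact(msg)          #     the addressed node 200 reads or writes
    deliver_result(result)          # (3) result returned via the NOC 146
    return notify_master(result)    # (4) completion back to programmable logic 66A

done = direct_access(
    cmd={"op": "read", "mid": 3, "tid": 1},
    build_message=lambda c: dict(c, slot=None),
    transact=lambda m: dict(m, slot="data"),
    deliver_result=lambda r: None,
    notify_master=lambda r: "complete",
)
print(done)   # complete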
In addition, the direct addressing operation may optionally use a confirmation signal that toggles at the node 200 each time a transaction is completed. This signal toggle may be used, for example, to track system utilization to guide future design decisions. For example, software may compare system utilization figures and make one or more design decisions based on them, reducing the possibility of over-programming and/or over-using one area of the programmable logic 66 relative to another. Also note that the transaction size may be defined by the size of the interface requesting the transaction. In this case, the transaction size may be equal to the data width of the master interface 246 and/or the slave interface 248. In other cases, however, other data widths may be used based on the requesting circuit or application (e.g., the requesting master).

Turning now to further details of the load/unload operations, FIG. 16 is a diagram of an exemplary unload operation and FIG. 17 is a diagram of an exemplary load operation. A load/unload operation may load or unload one or more nodes 200. Examples include movement of data into or out of one or more nodes 200 corresponding to weights for AI calculations, constants used for signal processing, marshalled or non-marshalled data for OPENCL calculations, and the like. The data loaded into or unloaded from a node 200 may come from off-chip memory such as the memory 236. The memory 236 may include any suitable type of memory, such as the memory 64, double data rate (DDR) memory, read-only memory, read/write memory, high bandwidth memory, and the like. The memory 236 may also, as the case may be, be memory located in another component or device rather than a dedicated memory device. The data for an unload operation may also come from any addressable interface, such as an accessible slave interface; this may include a soft logic slave, one or more processors, or any suitable data-generating component. Note that the microNOC 142 may also move data between one or more nodes 200 on the same chip.

Since load and unload operations cause blocks of data to be exchanged between an endpoint and one or more designated nodes 200 via one or more CMs 132, command and handshake processes may be used. A command may be considered complete when other devices, such as portions of the programmable logic 66, a master device, or a device communicatively coupled to the NOC 146 or the memory 236, can access the moved data.

Referring now to FIG. 16, an ongoing transaction between the target node 200 and the CM 132 may be completed before the CM 132 issues a message corresponding to a command from the programmable logic 66A. In general, when an operation is in progress, the CM 132 allows the ongoing operation to complete before issuing a competing operation. A command from any master or referencing device may address one or more target nodes 200 in a similar manner. Further, although these operations are described as being performed by the CM 132, any suitable processor, such as the memory controller 238 and/or another CM 132, may perform some or all of the operations.

To elaborate on the unload operation, the CM 132 may write data to the target node 200 using a soft logic transaction according to the direct addressing operation described with respect to FIG. 15 (e.g., operation "1"). This may be the ongoing transaction when the programmable logic 66A issues an unload command to the master interface 246 (e.g., operation "2").
The command generated by the programmable logic 66A may be a write command that controls registers of the slave interface 150, and the command may include parameters that describe which node of the microNOC 142 is targeted, the address range of the slave device to which the data is moved from the target node 200, and the size of the transaction. The CM 132 may refer to the parameters, and to any internal message protocol, to retrieve the data from the target node 200 (e.g., operation "3"). As described herein, the fetching by the CM 132 may involve retrieving data striped across multiple nodes 200 and/or multiple microNOCs 142, toggling a signal from the row controller 126 of the node 200 to the programmable logic 66 of the node 200 at the end of the transaction, waiting until the command is complete for a confirmation signal to be transmitted to the master interface 246, and so on. The CM 132 may issue the result of the read transaction (i.e., the data read from the target node 200) as a write transaction to the slave interface 256 based on the parameters. The slave interface 256 may transmit the write transaction to the memory 236 for execution. If transactions are received by the CM 132 faster than the CM 132 executes them, the command queue 174 in each CM 132 may queue one or more outstanding transactions.
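As a concrete, though entirely hypothetical, rendering of the command parameters just described, an unload command written to the control registers of the slave interface 150 might carry fields along these lines; the names, widths, and values are invented.

unload_command = {
    "target_node": {"tid": 1, "mid": 3},          # which microNOC column / node 200
    "dest_range": (0x8000_0000, 0x8000_0FFF),     # slave-device address range for the data
    "size_words": 256,                            # size of the transaction
}
print(unload_command["size_words"])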
Referring next to FIG. 17, the same CM 132 as in FIG. 16 coordinates a load operation, and the description of FIG. 16 is relied upon herein to illustrate it. As with the unload operation, user logic and/or a function programmed into the programmable logic 66A may generate a command instructing the load operation (e.g., operation "1"). The command generated by the programmable logic 66A may be a write command that controls registers of the slave interface 150, and the command may include parameters that describe which node of the microNOC 142 is targeted, the address range of the slave device from which the data is moved, and the size of the transaction. In some cases, an interface circuit of the NOC 146 may inspect write commands from the master interface 246 to determine the CM 132 corresponding to the target node 200. Based on the determined CM 132, the NOC 146 routes the write command to the slave interface 150. This write command may be considered a load command. The CM 132 receives the write command on the slave interface 150 and inspects it to determine the off-chip memory address. The CM 132 may issue an off-chip memory read command to the slave interface 256 via the NOC 146 to request that the data specified by the off-chip memory address indicated by the write command be returned (e.g., operation "2").

The NOC 146 routes the data returned from the slave interface 256 to the master interface 148 to provide the returned data to the CM 132 (e.g., operation "3"). The CM 132 transmits the returned data to the target node 200 (e.g., operation "4") based on the parameters and/or internal message protocol specified on the slave interface 150 (from the original load command). As described herein, the CM 132 may read data striped across a plurality of target nodes 200 and/or a plurality of microNOCs 142. Reading the striped data may involve toggling a signal from the row controller 126 of the target node 200 to the programmable logic 66 of the target node 200 at the end of the transaction, and waiting until the command is complete for a confirmation signal to be transmitted to the master interface 246.

After the load command is completed for each referenced target node 200, other devices or operations may use the loaded data stored in the target node 200 (e.g., operation "5"). As mentioned above, the command queue 174 in each CM 132 may queue one or more open transactions.

Turning now to further details of FIFO read/write operations, FIG. 18 is a diagram of an exemplary FIFO read operation and FIG. 19 is a diagram of an exemplary FIFO write operation. FIFO read/write operations may be the same as load/unload operations except that, instead of moving the data in blocks, the data is continuously streamed between one or more target nodes 200 and a slave device such as the memory 236. One difference between the two types of operations is that the target node 200 may behave like an input FIFO and/or an output FIFO with ready/valid signals.

Referring to FIG. 18, one or more target nodes 200 are addressed by a message from the CM 132 before being instructed to read data from the memory 236 via the CM 132. A command from the referencing device may address the target node 200, or another target node 200 (or a group of nodes 200), in a similar manner. Further, although some of these operations are described as being performed by the CM 132, any suitable processor, such as the memory controller 238 and/or another CM 132, may perform some or all of the operations.

To elaborate on the FIFO read operation, the programmable logic 66A may issue a FIFO read command to the master interface 246. The command generated by the programmable logic 66A may be a write command that controls registers of the slave interface 150, and the command may include parameters describing which node of the microNOC 142 is targeted, the address range of the slave device from which the data is moved, and the size of the transaction (e.g., operation "1"). The CM 132 may refer to the parameters and any internal message protocol to retrieve the data from the target source, in this case from a memory address of the memory 236. The CM 132 may do so by transmitting the command via the NOC 146 for access on the master interface 148.

The NOC 146 may pass the command from the master interface 148 to the slave interface 256 (e.g., operation "2"). The memory 236 may return the requested data at the slave interface 256, and the NOC 146 may pass the data from the slave interface 256 to the master interface 148 of the CM 132 (e.g., operation "3"). The CM 132 may have the target nodes 200 issue credits representing the available space within each of the target nodes 200 (e.g., for monitoring credit levels). The CM 132 may transmit a first portion of the data to the target node 200 as a way to test the transmission before transmitting all of the data from the memory 236 (e.g., operation "4"). One or more of the target nodes 200 may assert a valid signal to indicate to the CM 132 a successful initial transmission of the first portion of the data (e.g., operation "5"). In response to the valid signal, the CM 132 proceeds to exchange data between the target node 200 and the memory 236 (e.g., operation "6"). This FIFO read mode may continue until the CM 132 is instructed to end the operation, a timer tracking the execution of the operation expires, the target node 200 runs out of credits, and so on, at which point the FIFO read operation is stopped or paused (e.g., until additional credits are added to the credit level for the target node 200). The command queue 174 in each CM 132 may enable continuous and complex data movement patterns.
Referring next to FIG. 19, one or more target nodes 200 are addressed by a command from the CM 132 before being instructed to load their data into the memory 236 via the CM 132. Most of the operations of FIG. 19 are similar to those of FIG. 18, but in the reverse order. The description of FIG. 18 is relied upon herein to illustrate FIG. 19.

Indeed, the programmable logic 66A may issue a FIFO write command to the master interface 246 (e.g., operation "1"). The FIFO write command may include parameters loaded into the slave interface 150, similar to those described for operation "1" of FIG. 18. Upon receiving the FIFO write command on the slave interface 150, the CM 132 may issue credits to the target node 200 corresponding to the available space in a message buffer (e.g., the message buffer 178 of FIG. 10), for example by increasing a credit level or assigning a credit level. When ready to receive data, the target node may assert a ready signal (e.g., operation "2"). User logic and/or the application may write data to the target node by asserting a valid signal while the ready signal is asserted (e.g., operation "3"). Meanwhile, the target node 200 may send the credited node data (i.e., the data specified via the FIFO write command) to the CM 132 (e.g., operation "4"). The CM 132 receives the credited node data and passes it to the memory 236 via the interfaces 148 and 256 (e.g., operation "5"). The credited node data may be stored at the target address specified by the original write command from operation "1", but note that the memory controller 238 may translate, or redirect, between the original target address and the actual storage location. This data exchange continues until the FIFO of the target node 200 is nearly full and/or the data size defined in the original write command is exhausted. The CM 132 may determine that the defined data size has been exhausted when the CM 132 reaches a transaction threshold count level set by the parameters of the original write command from the programmable logic 66A (e.g., operation "6"). The command queue 174 in each CM 132 may enable continuous and complex data movement patterns.
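A software caricature of the credit handshake above: credits mirror message-buffer space, a node transmits only while credits remain, and replenishment resumes a paused stream. All names are invented for the sketch.

class CreditedStream:
    def __init__(self, credits):
        self.credits = credits        # mirrors available message-buffer space

    def node_send(self, data, sink):
        if self.credits == 0:
            return False              # back pressure: pause until replenished
        self.credits -= 1
        sink.append(data)             # data moves toward the memory 236
        return True

    def replenish(self, amount):
        self.credits += amount        # CM adds credits as buffer space frees up

sink = []
stream = CreditedStream(credits=2)
for word in ("w0", "w1", "w2"):
    if not stream.node_send(word, sink):
        stream.replenish(1)           # resume the paused transfer
        stream.node_send(word, sink)
print(sink)   # ['w0', 'w1', 'w2']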
The systems and methods described herein may be used with a single customer application or with multiple customer applications. For example, multiple customers may have their respective designs programmed into the programmable logic 66. In these cases, devices of the integrated circuit 12, such as the NOC 146, the memory controller 238, and the CMs 132, may balance transactions among the customers. Indeed, when multiple customers have equal bandwidth allocations, a transaction scheduling protocol using a round-robin scheduling technique that issues an equal share of transactions for each customer may be used.

However, different customers may pay for different bandwidths. The CM 132 may include one or more credit levels to manage transaction scheduling for customers with differently allocated bandwidths. A credit level may represent the bandwidth allocated to the CM 132, the microNOC 142, the row controller 126, or any combination thereof. A controller of the integrated circuit 12, such as the memory controller 238, may allocate transaction credits to the CM 132, and, based on the allocation, the CM 132 may use the allocated credits to increase one or more of the credit levels. These credits may be conveyed as credit instructions, such as a digital representation of a value indicating the credit level. The CM 132 may consult the credit level when scheduling a transaction with the target node 200 to help control back pressure. This may enable end-to-end flow control of data moving between the slave device and the target node 200. As described herein, note that the CM 132 may use the message manager 180 to monitor bandwidth levels and/or expected bandwidth allocations to determine that the message buffer 178 has room for the next message to be scheduled and/or to determine the state of a group transaction (e.g., a "ready" or "completed" state). The bandwidth level monitored by the message manager 180 may include, or may be, a credit level. Thus, a credit level value may represent the percentage of the total bandwidth allocated to the customer corresponding to that credit level (i.e., to the row controllers 126, nodes 200, microNOC 142, or CM 132 assigned to that customer). The percentage of the total bandwidth allocation may determine how much bandwidth is granted against the credit level in each scheduling cycle. The percentage of the total bandwidth allocation may also set a transmission rate relative to the transmission rates of other customers so as to raise or lower the priority of messages corresponding to that customer's application relative to the other customers' applications.

In some cases, a message 216 may include broadcast and/or multicast commands, such that more than one node 200 may respond to the command in the message 216. For example, a configuration deployed to a group of nodes 200 under one MID, to an entire microNOC 142, or the like may be broadcast via the same message 216.
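The proportional scheduling implied by the credit levels above can be sketched in a few lines. This is an illustration under assumed credit values, not the disclosed scheduler; note that a plain round robin is the equal-allocation special case.

def schedule_cycle(credit_levels):
    # Emit one scheduling cycle with slots proportional to each
    # customer's credit level (its share of the total bandwidth).
    slots = []
    for customer, credits in credit_levels.items():
        slots.extend([customer] * credits)
    return slots

# Customer A paid for twice customer B's bandwidth (assumed values):
print(schedule_cycle({"A": 2, "B": 1}))   # ['A', 'A', 'B']

# Equal credit levels degenerate to a round robin:
print(schedule_cycle({"A": 1, "B": 1}))   # ['A', 'B']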
A message 216 may also include instructions that trigger a cold reset or a warm reset (e.g., a cold reset and/or warm reset signal transmitted with the data in the message 216). A cold reset puts the logic at each endpoint addressed via the message 216 (e.g., a node 200 via its MID, a group of nodes via MIDs, a microNOC column via its TID) into a reset state. The reset node 200 may be reconfigured after a cold reset. A warm reset resets enough logic in the node 200 to verify that messages 216 to the node 200 are working as desired. Metrics such as the total time to complete an operation, latency, and backlogging may be monitored during a warm reset to verify performance. A warm reset need not affect the user data content in the memory of the row controller 126 and/or the content stored in the programmable logic 66, whereas a cold reset may clear the data contents used by the row controller 126 and/or the node 200. A warm reset may trigger a re-enumeration and/or reinitialization of the microNOC 142, such as adjusting the operation of the microNOC 142 in response to a determination from the warm reset that the microNOC 142 is not working as desired.

In some systems, a configuration bitstream may program the microNOC 142 and the microsector support architecture into the programmable logic 66A. Other systems may have the CM 132 assign identifiers to components at power-up and/or initialization. Doing so accommodates changes, between configuration bitstreams loaded into the integrated circuit 12 over time, in the number of row controllers 126 and/or the number of microNOCs 142 assigned to one or more CMs 132, increasing architectural flexibility and allowing redesign. To do this, when the integrated circuit 12 powers up, each CM 132 walks the nodes 200 of its microNOC 142 and assigns each node 200 its own MID. In some cases, an enumeration message may be used to selectively assign MIDs to unlabeled nodes 200. The CM 132 may transmit the enumeration message, and each node 200 that has already been enumerated (that is, has already been assigned a MID) passes the message along. Eventually, the enumeration message is received by a node 200 that has no MID and has not yet been enumerated, and that node assumes the MID indicated by the enumeration message. The enumeration message thus implements the process that the CM 132 uses to assign MIDs to the nodes 200. Indeed, at startup each node may not yet be enumerated and therefore may have no MID, so the CM 132 may assign each MID to each node by sequentially outputting enumeration messages to the nodes.
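A compact way to see the power-up enumeration is as a message that skips labeled nodes and is claimed by the first unlabeled one. The sketch below illustrates that behavior; the class and function names are invented for the example, and "enumeration message" is this document's rendering of the assignment message described above.

class Node:
    def __init__(self):
        self.mid = None                 # unlabeled at power-up

    def on_enumeration_message(self, proposed_mid):
        if self.mid is not None:
            return False                # already enumerated; pass it along
        self.mid = proposed_mid         # claim the proposed MID
        return True

def enumerate_column(nodes):
    # Send enumeration messages until every node in the column is labeled.
    next_mid = 0
    while any(node.mid is None for node in nodes):
        for node in nodes:              # the message walks the column
            if node.on_enumeration_message(next_mid):
                next_mid += 1
                break                   # one MID assigned per message

column = [Node() for _ in range(4)]
enumerate_column(column)
print([node.mid for node in column])    # [0, 1, 2, 3]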
The size of each message buffer 178 may be determined based on the placement of the microNOCs 142. Indeed, when determining the size of each message buffer 178, the design software 14, the compiler 16, and/or the host 18 may consider the maximum or expected number of outstanding transactions that can occur at one time between each microNOC 142 and the CM 132A. The size of each message buffer 178 may be chosen to accommodate that maximum or expected number of outstanding transactions.

Turning briefly to design and compilation operations, the compiler 16, the host 18, and/or the design software 14 may recognize which register transfer level (RTL) software logic is used to implement a circuit application within the programmable logic 66. The compiler 16, host 18, and/or design software 14 may use this information to configure the master bridge of the NOC 146 with the identifiers of the row controllers 126 and/or microNOCs 142 that are used, and may also use this information to generate the names used to address an include file. At the time the RTL is written, the design software 14 may use, for example, placeholder blocks that have defined data sources and data endpoints but no defined memory and logic allocation. During compilation, an "include file" may be generated containing the memory and logic allocation that performs the actions to be performed by the placeholder block. The include file may contain one or more named associations between the inferred (or RTL-instantiated) logical memory and an address. The compiler 16, host 18, and/or design software 14 may generate include files during the RTL analysis phase of the compilation operation. For example, include files may be generated when the programmable logic 66 defines a memory map to guide future memory transactions. The master bridge of the NOC 146, which supports a command interface, may provide translation to a physical CM 132, while the include file provides the logical address of the CM 132. The compiler 16, the host 18, and/or the design software 14 may generate a NOC logical-to-physical address translation table after the design fitting operation and may store the translation table in the master bridge as part of the device configuration.
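The include file and translation table described above pair a name with a logical address, and a logical address with a physical endpoint. The following sketch shows that two-step lookup; the file format, the addresses, and the endpoint tuples are illustrative assumptions rather than the disclosed data structures.

# Named associations a compiler might emit into an include file:
INCLUDE_FILE = {
    "weights_buffer": 0x1000,   # logical address of a CM (assumed)
    "activations":    0x2000,
}

# Logical-to-physical translation table stored in the master bridge:
TRANSLATION_TABLE = {
    0x1000: ("CM0", "microNOC3", "row_controller_17"),
    0x2000: ("CM1", "microNOC0", "row_controller_02"),
}

def route(symbol):
    # Resolve a named logical memory to its physical endpoint.
    logical_address = INCLUDE_FILE[symbol]
    return TRANSLATION_TABLE[logical_address]

print(route("weights_buffer"))   # ('CM0', 'microNOC3', 'row_controller_17')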
During the design phase, visualization tools associated with the design software 14 may indicate the physical placement of the row controllers 126 in the design. The visualization tools may also indicate the timing impact of the row controller placement on the design, as well as the expected bandwidth or latency. Timing, bandwidth, and/or latency metrics may be indicated for the overall design, for parts of the design compared with one another, and so on. The visualization tools may allow a user to manually place a row controller 126 to determine the impact of that placement. The impact of the placement need not be reflected in the presented metrics until after the design is recompiled.

While the embodiments described in the present disclosure may be subject to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are described in detail herein. For example, any suitable combination of the embodiments and/or techniques described herein may be implemented. In addition, any suitable combination of numeric formats may be used (e.g., single-precision floating point, half-precision floating point, bfloat16, extended precision, and the like). Moreover, each DSP circuit and/or DSP architecture may include any suitable number of elements (e.g., adders, multipliers 64, routing, and so forth). Accordingly, it should be understood that the present disclosure is not intended to be limited to the particular forms disclosed. The present disclosure covers all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims below.

The technical effects of the present disclosure include systems and methods that provide a microsector architecture. The microsector architecture described herein allows programmable fabric programming to occur over smaller regions of the fabric, which may benefit the operation of programmable logic devices, such as field programmable gate arrays, and/or other configurable devices. The systems and methods described herein may allow a 1-bit-wide data register (e.g., a micro data register (μDR)) to transmit data to or from a smaller region of the programmable fabric. The benefits provided by the microsector architecture may be further improved by using a micro network on chip (microNOC) with the microsectors. Each microsector corresponds to a row controller, and the row controllers communicate with a control system over a shared data path. The control system can improve data transactions within the microsector architecture by coordinating data read and write operations between one or more microNOCs and one or more row controllers. Coordination across the microsector architecture allows for large-scale data movement between memory within the microsector architecture components and external memory. Further described herein is an addressing process that allows each row controller and/or each microNOC to be addressed individually. These systems and methods that allow individual addressing of the microNOCs can improve data processing because data can be stored out of logical order within the microsector architecture.

[Exemplary Embodiments]

Exemplary Embodiment 1. An integrated circuit comprising: a plurality of microsectors arranged in a row and column grid; a first network-on-chip arranged at least partially around the plurality of microsectors; a second network-on-chip under a column of the grid, the column containing a first microsector communicably coupled to a first row controller; and a controller configured to use the first network-on-chip and the second network-on-chip to transmit a command and first data to the first row controller, wherein the first row controller is configured to perform an operation responsive to the command using the first data and, when the operation causes generation of second data, to use the second network-on-chip to return the second data to the controller.

Exemplary Embodiment 2. The integrated circuit of exemplary embodiment 1, wherein the plurality of microsectors includes a second microsector located at a different position in the row and column grid from the first microsector, and wherein the first row controller is configured to program the first microsector at least partially in parallel with a second row controller programming the second microsector.

Exemplary Embodiment 3. The integrated circuit of exemplary embodiment 1, wherein the second network-on-chip comprises a data path characterized by the same data width as a routing block of the controller.

Exemplary Embodiment 4. The integrated circuit of exemplary embodiment 1, comprising a third row controller located under the first row controller, wherein the third row controller and the first row controller are coupled to a shared data path, and wherein the first row controller is configured to access a command transmitted over the shared data path before the third row controller is permitted to access the command.

Exemplary Embodiment 5. The integrated circuit of exemplary embodiment 1, wherein a streaming data packet comprises the command and the first data, and wherein the streaming data packet includes the command as part of a header.
Exemplary Embodiment 6. The integrated circuit of exemplary embodiment 5, wherein the first row controller is configured to determine that the header matches at least a portion of an identifier associated with the first row controller and to shift the streaming data packet off the shared data path so as to stop transmission of the streaming data packet over the shared data path.

Exemplary Embodiment 7. The integrated circuit of exemplary embodiment 5, wherein the header comprises an instruction for the first row controller.

Exemplary Embodiment 8. The integrated circuit of exemplary embodiment 1, wherein the first microsector comprises a plurality of logic access blocks, each coupled to a data register.

Exemplary Embodiment 9. The integrated circuit of exemplary embodiment 8, wherein the data register comprises a 1-bit-wide data path, a first flip-flop, and a second flip-flop, and wherein the 1-bit-wide data path is coupled to the first flip-flop and to the second flip-flop.

Exemplary Embodiment 10. A method comprising: receiving an access command from a portion of a programmable logic circuit; determining a target node specified by the access command; determining, using the target node, a target micro network on chip column; generating a message to read or write data associated with the target node, wherein the message includes a first identifier for the target micro network on chip column containing the target node; and outputting the message to a routing fabric configured to pass or direct the message based on the first identifier.

Exemplary Embodiment 11. The method of exemplary embodiment 10, comprising determining a parameter from the access command and determining the target node from the parameter.

Exemplary Embodiment 12. The method of exemplary embodiment 10, comprising generating the message to include a second identifier for the target node, wherein each node between a first node of the target micro network on chip column and the target node determines whether the second identifier in the message matches its own identifier.

Exemplary Embodiment 13. The method of exemplary embodiment 11, comprising receiving the message from a routing network, wherein the message includes requested data that was previously stored in the target node before the target node inserted the requested data into the message.

Exemplary Embodiment 14. The method of exemplary embodiment 13, wherein a toggled confirmation signal is received in response to the target node inserting the requested data into the message.

Exemplary Embodiment 15. A system comprising: a programmable logic circuit including configuration memory; a first control circuit located between portions of the programmable logic circuit; and a second control circuit located outside the programmable logic circuit, wherein the second control circuit is configured to: receive an access command from a portion of the programmable logic circuit; determine a target node specified by the access command; determine, using the target node, a target micro network on chip column; generate a message to read or write data associated with the target node, wherein the message includes an identifier for the target micro network on chip column containing the target node; and output the message to a routing fabric configured to pass or direct the message based on the identifier for routing to the first control circuit.
Exemplary Embodiment 16. The system of exemplary embodiment 15, wherein the first control circuit is configured to read data from the configuration memory of at least some of a plurality of microsectors of the target node based at least in part on shifting target data of the message at least once through each 1-bit data register of the microsectors.

Exemplary Embodiment 17. The system of exemplary embodiment 15, wherein the first control circuit is configured to write data to the configuration memory of at least some of a plurality of microsectors of the target node based at least in part on shifting the target data of the message no more than once through each 1-bit data register of the microsectors.

Exemplary Embodiment 18. The system of exemplary embodiment 15, wherein the target node comprises a scan register used to perform a verification operation.

Exemplary Embodiment 19. The system of exemplary embodiment 15, wherein the message comprises a header configured to indicate a command to be executed by the target node.

Exemplary Embodiment 20. The system of exemplary embodiment 19, wherein a row controller of the target node is configured to receive the message, verify that the header contains an identifier matching an identifier of the row controller, and then generate a plurality of control signals to execute the command.

The techniques presented herein and described in the claims are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible, or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as "means for [perform]ing [a function]" or "step for [perform]ing [a function]", such elements are intended to be construed under 35 U.S.C. 112(f). For any claims containing elements designated in any other manner, however, such elements are not intended to be construed under 35 U.S.C. 112(f).
A system and method to fabricate magnetic random access memory is disclosed. The method includes depositing a cap layer (112) on a magnetic tunnel junction (MTJ) structure, depositing a first spin-on material layer (530) over the cap layer, and etching the first spin-on material layer and at least a portion of the cap layer. The steps of depositing and etching a spin-on material can be repeated several times. The spin-on material protects the interlayer dielectric layer that surrounds the MTJ.
WHAT IS CLAIMED IS: 1. A method comprising: depositing a cap layer on a magnetic tunnel junction (MTJ) structure; depositing a first spin-on material layer over the cap layer; and etching the first spin-on material layer and at least a portion of the cap layer. 2. The method of claim 1, further comprising depositing an Interlayer Dielectric (ILD) layer over the cap layer prior to depositing the first spin-on material layer over the cap layer. 3. The method of claim 2, further comprising performing a chemical-mechanical polishing operation on at least a portion of the ILD layer prior to depositing the first spin-on material layer. 4. The method of claim 1, wherein after etching the first spin-on material layer and the cap layer a portion of the MTJ structure is exposed. 5. The method of claim 4, wherein the exposed portion includes a top electrode contact layer portion of the MTJ structure. 6. The method of claim 1, further comprising depositing a second spin-on material layer over the cap layer after depositing the first spin-on material layer and after etching the first spin-on material layer and the cap layer. 7. The method of claim 6, further comprising etching the second spin-on material layer to expose a portion of the MTJ structure. 8. The method of claim 1, further comprising, after etching, detecting that a top electrode contact layer of the MTJ structure is exposed. 9. The method of claim 1, further comprising executing multiple SOM depositing and etching cycles to open a top portion of the MTJ. 10. The method of claim 1, wherein the spin-on material is a spin-on glass. 11. The method of claim 1, wherein the spin-on material is a photoresist material. 12. The method of claim 1, wherein the spin-on material is an organic anti-reflection coating (ARC) material. 13. The method of claim 1, further comprising controlling a profile of the spin-on material so that a center thickness of the first spin-on material layer differs from an outer thickness of the first spin-on material layer. 14. A device comprising: a magnetic tunnel junction (MTJ) structure; a cap layer in contact with the MTJ structure; a spin-on material layer in contact with a sidewall portion of the cap layer; and a conducting layer in contact with at least the spin-on material layer and a portion of the MTJ structure; wherein the cap layer has been etched to expose a portion of an electrode contact layer of the MTJ structure, and wherein the conducting layer is in electrical contact with the exposed portion of the electrode contact layer of the MTJ structure. 15. The device of claim 14, wherein the spin-on material layer comprises an inorganic material. 16. The device of claim 14, wherein the spin-on material layer comprises an organic material. 17. The device of claim 14, further comprising an interlayer dielectric (ILD) layer, wherein the ILD layer is situated between a bottom portion of the cap layer and the spin-on material layer. 18. The device of claim 14, wherein prior to etching of the cap layer, the spin-on material layer covers a portion of the cap layer. 19. A system comprising: means for depositing a spin-on material layer on a cap layer that is deposited on a magnetic tunnel junction (MTJ) structure; and means for depositing an Interlayer Dielectric (ILD) layer on the cap layer prior to depositing the spin-on material layer. 20. 
The system of claim 19, wherein the means for depositing a spin-on material layer is configured to permit adjustment of a profile of the spin-on material layer prior to depositing the spin-on material layer on the cap layer.
SYSTEM AND METHOD TO FABRICATE MAGNETIC RANDOM ACCESS MEMORY

I. Field

[0001] The present disclosure is generally related to a system and method to fabricate magnetic random access memory.

II. Description of Related Art

[0002] Advances in technology have resulted in smaller and more powerful computing devices. For example, there currently exist a variety of portable personal computing devices, including wireless computing devices, such as portable wireless telephones, personal digital assistants (PDAs), and paging devices that are small, lightweight, and easily carried by users. More specifically, portable wireless telephones, such as cellular telephones and Internet Protocol (IP) telephones, can communicate voice and data packets over wireless networks. Further, many such wireless telephones include other types of devices that are incorporated therein. For example, a wireless telephone can also include a digital still camera, a digital video camera, a digital recorder, and an audio file player. Also, such wireless telephones can process executable instructions, including software applications, such as a web browser application, that can be used to access the Internet. As such, these wireless telephones can include significant computing capabilities.

[0003] Reducing power consumption has led to smaller circuitry feature sizes and operating voltages within such portable devices. Reduction of feature size and operating voltages, while reducing power consumption, also increases sensitivity to manufacturing process variations. Fabrication techniques that increase reliability of memory devices with reduced feature size are therefore desirable.

III. Summary

[0004] In a particular embodiment, a method is disclosed. The method includes depositing a cap layer on a magnetic tunnel junction (MTJ) structure. The method further includes depositing a first spin-on material layer over the cap layer, and etching the first spin-on material layer and at least a portion of the cap layer.

[0005] In another particular embodiment, a device is disclosed. The device includes a magnetic tunnel junction (MTJ) structure and a cap layer in contact with the MTJ structure. The device also includes a spin-on material layer in contact with a sidewall portion of the cap layer, and a conducting layer in contact with at least the spin-on material layer and a portion of the MTJ structure. The cap layer has been etched to expose a portion of an electrode contact layer of the MTJ structure, and the conducting layer is in electrical contact with the exposed portion of the electrode contact layer of the MTJ structure.

[0006] In another particular embodiment, a system is disclosed. The system includes means for depositing a spin-on material layer on a cap layer that is deposited on a magnetic tunnel junction (MTJ) structure, where an Interlayer Dielectric (ILD) layer has been deposited on the cap layer prior to depositing the spin-on material layer.

[0007] One particular advantage provided by at least one of the disclosed embodiments of the system and method to fabricate magnetic random access memory is an improved yield. Another particular advantage provided by at least one of the disclosed embodiments of the system and method to fabricate magnetic random access memory is improved reliability of the magnetic random access memory.
[0008] Other aspects, advantages, and features of the present disclosure will become apparent after review of the entire application, including the following sections: Brief Description of the Drawings, Detailed Description, and the Claims.

IV. Brief Description of the Drawings

[0009] FIG. 1 is a cross-sectional diagram of a particular illustrative embodiment depicting deposition of a cap layer of a magnetic random access memory in fabrication;

[0010] FIG. 2 is a cross-sectional diagram of a particular illustrative embodiment depicting deposition of an interlayer dielectric (ILD) of a magnetic random access memory in fabrication;

[0011] FIG. 3 is a cross-sectional diagram of a particular illustrative embodiment depicting a chemical-mechanical planarization (CMP) of a magnetic random access memory in fabrication;

[0012] FIG. 4 is a cross-sectional diagram of a particular illustrative embodiment of a magnetic random access memory in fabrication;

[0013] FIG. 5 is a cross-sectional diagram of a particular illustrative embodiment depicting deposition of a spin-on material (SOM) layer of a magnetic random access memory in fabrication;

[0014] FIG. 6 is a cross-sectional diagram of a particular illustrative embodiment depicting etching of a spin-on material layer and a cap layer of a magnetic random access memory in fabrication;

[0015] FIG. 7 is a cross-sectional diagram of a particular illustrative embodiment depicting a random access memory after etching of the spin-on material layer and the cap layer during fabrication;

[0016] FIG. 8 is a cross-sectional diagram of a particular illustrative embodiment depicting deposition of a second spin-on material (SOM) layer of a magnetic random access memory during fabrication;

[0017] FIG. 9 is a cross-sectional diagram of a particular illustrative embodiment depicting etch of a second spin-on material layer and a cap layer of a magnetic random access memory in fabrication;

[0018] FIG. 10 is a cross-sectional diagram of a particular illustrative embodiment depicting a magnetic random access memory after etch of the second spin-on material layer, the first spin-on material and the cap layer during fabrication of the magnetic random access memory;

[0019] FIG. 11 is a cross-sectional diagram of a particular illustrative embodiment of a magnetic random access memory including a conducting layer in contact with corresponding contact electrodes of magnetic tunneling junctions of the magnetic random access memory in fabrication;

[0020] FIG. 12 is a diagram of a particular illustrative embodiment depicting non-uniform deposition of a spin-on material layer to fabricate a magnetic random access memory; and

[0021] FIG. 13 is a flow chart of a particular illustrative embodiment of a method of fabricating a magnetic random access memory.

V. Detailed Description

[0022] Referring to FIG. 1, a particular illustrative embodiment depicting deposition of a cap layer of a magnetic random access memory in fabrication is generally designated 100. A magnetic random access memory 102 is fabricated on a substrate 103 and includes a plurality of magnetic tunnel junction structures (MTJs) including representative MTJs 104, 130, 140, 150, 160, 170, and 180. A material 120 being deposited on the magnetic random access memory 102 forms a cap layer 112. In a particular illustrative embodiment, the material 120 is silicon nitride, silicon carbide, or another electrically insulating material, or a combination of materials.
In fabricating the magnetic random access memory 102, deposition of the material 120 typically occurs prior to deposition of an interlayer dielectric layer and a protective spin-on material layer.

[0023] Referring to FIG. 2, a cross-sectional diagram of a particular illustrative embodiment depicting deposition of an interlayer dielectric (ILD) of a magnetic random access memory in fabrication is depicted and generally designated 200. An MRAM 202 has been partially formed on a substrate 203. The MRAM 202 includes a plurality of magnetic tunnel junction (MTJ) cells including MTJ 204. The MTJ 204 includes a lower ferromagnetic layer 206 (also called a "fixed layer" or a "pinned layer" herein), a tunneling barrier 208, and a top electrode contact layer 210 (also a "ferromagnetic free layer" or "free layer" herein). The MTJ 204 is substantially surrounded by a cap layer 212 that may cover the substrate 203. The MTJ 204 may be surrounded by an interlayer dielectric (ILD) layer 214 formed by depositing an interlayer dielectric (ILD) material 224 over the cap layer 212. The deposition of the interlayer dielectric material 224 may be accomplished by, e.g., chemical vapor deposition, physical vapor deposition, or another deposition technique. In a particular illustrative example, the interlayer dielectric material 224 may be silicon oxide or another electrically insulating material.

[0024] Referring to FIG. 3, a cross-sectional diagram of a particular illustrative embodiment depicting a chemical-mechanical planarization (CMP) of a magnetic random access memory in fabrication is shown and generally designated 300. In a particular embodiment, FIG. 3 depicts a CMP stage of fabrication of the MRAM 202 of FIG. 2. A magnetic random access memory 302 includes a plurality of MTJ cells, such as MTJ cell 304. Each of the MTJ cells is surrounded by ILD material forming an ILD layer 314, and each of the MTJ cells is covered by the ILD layer 314. The MRAM 302, fabricated on a substrate 303, is placed on a mounting apparatus 301 that is rotatable. A chemical dispenser 322 may dispense a chemical 324 to be used in a planarizing process. A mechanical planarizing apparatus 320 that is rotatable may be used in conjunction with the dispensed chemical 324 to planarize an upper portion of the MRAM 302 including the ILD layer 314.

[0025] Referring to FIG. 4, a cross-sectional diagram of a particular illustrative embodiment of a magnetic random access memory in fabrication is depicted and generally designated 400. In a particular embodiment, FIG. 4 depicts a post-planarization stage of fabrication of the MRAM 202 of FIG. 2. A magnetic random access memory 402 in fabrication includes a plurality of MTJs, such as MTJ 404. The MRAM 402 has been planarized through, e.g., chemical-mechanical planarization as depicted in FIG. 3, or via another planarization technique. A cap layer 412 protects internal portions of the MTJ 404, including a top electrode contact layer portion 410, a tunneling barrier 408, and a pinned layer 406. As a result of planarization, an ILD layer 414 surrounding the cap layer 412 has been partially removed, exposing an uppermost cap layer portion 416 of the cap layer 412. During a subsequent etch procedure, the cap layer 412, which is typically made of a different material than the ILD layer 414, may etch at a slower rate than the ILD layer 414. Without an additional protective layer, recession of the ILD layer 414 around each MTJ may occur during the etch procedure.
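The recession problem above is a matter of etch selectivity: if the exposed ILD etches faster than the cap layer, the time spent clearing the cap over-etches the surrounding ILD by the rate ratio. The following is a minimal sketch with assumed rates and thicknesses, not process data from this disclosure.

def ild_recession(cap_thickness_nm, cap_rate_nm_s, ild_rate_nm_s):
    # ILD lost while etching through the cap (same exposure time).
    etch_time = cap_thickness_nm / cap_rate_nm_s
    return ild_rate_nm_s * etch_time

# Assumed: a 30 nm cap etching at 1 nm/s, with the ILD etching 3x faster.
print(ild_recession(30.0, 1.0, 3.0))   # 90.0 nm of unprotected ILD recession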
[0026] Referring to FIG. 5, a cross-sectional diagram of a particular illustrative embodiment depicting deposition of a spin-on material (SOM) layer of a magnetic random access memory in fabrication is generally designated 500. In a particular embodiment, FIG. 5 depicts an SOM deposition stage of fabrication of the MRAM 202 of FIG. 2. An MRAM 502 includes a plurality of MTJs, such as MTJ 504. The MRAM 502 may be disposed on a support structure 501 that is rotatable. An SOM dispenser 540 may be positioned radially relative to the support structure 501, measured by a radial distance 544 from an axis of rotation 546 of the support structure 501. The SOM dispenser 540 may dispense spin-on material (SOM) 542 while the support structure 501 is rotating. Rotational angular acceleration and angular speed of rotation of the support structure 501 can be adjusted. The radial distance 544 may be varied during deposition, enabling the spin-on material to be deposited across an upper portion of the MRAM 502. Adjusting the rotational angular acceleration and the angular speed of rotation of the support structure 501 can change a thickness profile (also called a radial profile herein) and a thickness uniformity of the deposited SOM layer 530. By varying the radial distance 544 during deposition of the SOM 542, the SOM layer 530 may be formed to cover an ILD layer 514 and a cap layer 512, both of which have been previously deposited, with the ILD layer 514 situated between a bottom portion 513 of the cap layer 512 and the SOM layer 530.

[0027] A radial profile of the SOM layer 530, i.e., the thickness of the SOM layer 530 as a function of distance from a center of the MRAM 502, may be pre-determined by selecting a profile of dispenser radial speed as a function of time. For example, selecting a uniform radial speed of the SOM dispenser 540 to dispense SOM at a substantially constant rate as the support structure 501 rotates may produce an SOM layer 530 with substantially uniform thickness. In another particular illustrative embodiment, selecting a non-uniform radial speed of the dispenser 540 during rotation of the support structure 501 may produce a non-uniform profile, such as a concave-shaped profile or a convex-shaped profile, as illustrative, non-limiting examples. After dispensing of the SOM 542 is complete, adjusting the angular speed of rotation and the rotational angular acceleration of the support structure 501 can also modify the radial profile and thickness uniformity of the SOM layer 530.
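The relationship described above, in which the dispenser's radial-speed profile shapes the deposited thickness profile, can be sketched with a simple dwell-time model: the longer the dispenser dwells at a radius, the thicker the layer there. The model and numbers below are illustrative assumptions, not process data.

def thickness_profile(radii, dwell_time_fn, dispense_rate=1.0):
    # Thickness at each radius ~ dispense rate x dwell time there.
    return [dispense_rate * dwell_time_fn(r) for r in radii]

radii = [0.0, 0.25, 0.5, 0.75, 1.0]   # normalized distance from center

# A constant radial speed gives equal dwell everywhere -> uniform layer:
uniform = thickness_profile(radii, lambda r: 1.0)

# Slowing the dispenser toward the edge thickens the edge, giving the
# concave profile suggested for protecting outer portions of the wafer:
concave = thickness_profile(radii, lambda r: 1.0 + r)

print(uniform)   # [1.0, 1.0, 1.0, 1.0, 1.0]
print(concave)   # [1.0, 1.25, 1.5, 1.75, 2.0]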
[0028] Referring to FIG. 6, a cross-sectional diagram of a particular illustrative embodiment depicting etching of a spin-on material layer and a cap layer of a magnetic random access memory in fabrication is shown and generally designated 600. In a particular embodiment, FIG. 6 depicts an etching stage of fabrication of the MRAM 202 of FIG. 2. An MRAM 602 includes a plurality of MTJs, such as MTJ 604, and may be subjected to an etch procedure, such as a dry etch or a wet etch, which may be performed by immersing a portion of the MRAM 602 in a chemical etching chamber 650. Prior to the etch procedure, a cap layer 612, an ILD layer 614, and an SOM layer 630 have been deposited over a top electrode contact layer portion 610. The top electrode contact layer portion 610 of the MTJ 604 may be protected from the chemical etching chamber 650 during the etch procedure by the cap layer 612. During the etch procedure, portions of the SOM layer 630 may be etched, and when portions of the cap layer 612 become exposed to the chemical etching chamber 650, the exposed portions of the cap layer 612 may be etched as well. The SOM layer 630 may serve to protect the ILD layer 614 from being etched by the chemical etching chamber 650. The SOM layer 630 may also protect the cap layer 612 from being etched. After etching a portion of the SOM layer 630, a portion of the cap layer 612 may be exposed to the chemical etching chamber 650. Unetched portions of the SOM layer 630 may continue to protect the ILD layer 614 from the chemical etching chamber 650.

[0029] Referring to FIG. 7, a cross-sectional diagram of a particular illustrative embodiment depicting a random access memory after etching of the spin-on material layer and the cap layer during fabrication is depicted. In a particular embodiment, FIG. 7 depicts a post-etching stage of fabrication of the MRAM 202 of FIG. 2. An MRAM 702 includes a plurality of MTJs, such as MTJ 704. A portion of a cap layer 712 has been etched away to expose a top electrode contact portion 710 of the MTJ 704. An ILD layer 714 may be protected by portions of an SOM layer 730 during an etching stage of fabrication. The ILD layer 714 can remain intact and may provide structural support to a sidewall portion 713 of the cap layer 712. The top electrode contact portion 710 of the MTJ 704 may subsequently be placed in contact with a conducting layer (not shown). In similar fashion, each of the MTJs of the MRAM 702 may have a portion of the cap layer 712 removed to expose a top electrode contact portion (also a "top electrode contact window" herein) of the corresponding MTJ. Opening of the top electrode contact portion of an MTJ can be detected by visual inspection or by use of an electrical probe, such as an electrical probe 760, making electrical contact with the top electrode contact portion 710 of the MTJ 704.

[0030] A corresponding top electrode contact portion of each MTJ may subsequently be placed in contact with the conducting layer. The conducting layer (not shown) can be patterned to separate each MTJ from neighboring MTJs. A particular top electrode contact portion of an MTJ may be connected to a conducting layer, and the conducting layer may be connected to routing metal wire to make a connection (not shown).

[0031] Referring to FIG. 8, a cross-sectional diagram of a particular illustrative embodiment depicting deposition of a second spin-on material (SOM) layer of a magnetic random access memory during fabrication is generally designated 800. An MRAM 802 in fabrication includes a plurality of MTJs, such as an MTJ 804. The MRAM 802 is fabricated on a substrate 803, which is disposed on a rotatable support structure 801. An SOM dispenser 840 may dispense SOM 842 onto a portion of the MRAM 802. The dispenser 840 may be positioned at a radial distance 844 with respect to an axis of rotation 846 of the rotatable support structure 801, and the radial distance 844 may be varied in time. The MRAM 802 includes a first layer of SOM 830 that has been previously deposited and etched, which covers and protects a cap layer 812 and an ILD layer 814, each of which surrounds each MTJ. Portions of the first layer of SOM 830 may have been removed through an etching process. The SOM 842 deposited on the MRAM 802 forms a second SOM layer 832, which further covers and protects the cap layer 812 and the ILD layer 814.
In another particular illustrative embodiment, the first layer of SOM 830 may be stripped before the second SOM 842 is deposited. The support structure 801 may have a selectable rotational speed that may be varied over time or may be substantially constant over time. The radial distance 844 of the dispenser may be changed over time with a constant radial speed profile or a non-linear radial speed profile.

[0032] The second SOM layer 832 may be deposited above the first SOM layer 830 with a selectable layer thickness profile. In a particular illustrative example, the radial speed profile of the dispenser 840 is non-linear, and a profile of the resultant second SOM layer 832 deposited on the first SOM layer 830 may be convex or concave, depending on the radial speed profile of the dispenser 840 as the support structure 801 is rotated. In another particular illustrative example, the radial speed of the dispenser 840 is constant and the second SOM layer 832 will have an approximately constant thickness across the first SOM layer 830. The second SOM layer 832 may provide additional protection against a second etch procedure during fabrication of the MRAM 802.

[0033] Referring to FIG. 9, a cross-sectional diagram of a particular illustrative embodiment depicting etch of a second spin-on material layer and a cap layer of a magnetic random access memory in fabrication is shown. An MRAM 902 including a plurality of MTJs, such as MTJ 904, is subjected to an etch procedure by immersing a portion of the MRAM 902 into an etching chamber 950. The MRAM 902 includes a second SOM layer 932 that has been previously deposited above a first SOM layer 930. The second SOM layer 932 provides additional protection to a cap layer 912 that may also be protected from the etching chamber 950 by the first SOM layer 930. The second SOM layer 932 and the first SOM layer 930 may also protect an ILD layer 914 during the etch procedure. By depositing multiple SOM layers on the MRAM 902, etching may be controlled so that an upper electrode contact layer portion of one of the MTJs, such as an upper electrode contact layer portion 910, may be exposed without etching a significant amount of the ILD layer 914 surrounding each MTJ. In a particular illustrative embodiment, etching may occur after each deposition of an SOM layer on the MRAM 902. By reducing etching of the ILD layer 914 that surrounds each MTJ, the ILD layer 914 may enhance structural integrity of the MRAM 902 by supporting sidewall portions 913 of the cap layer 912 that surround each MTJ. By adding the second SOM layer 932, MRAM device yield may be improved through a larger acceptable window of process parameters such as etch duration.

[0034] Referring to FIG. 10, a cross-sectional diagram of a particular illustrative embodiment depicting a magnetic random access memory (MRAM) after etch during fabrication of the magnetic random access memory is depicted. An MRAM 1002 has been subjected to one or more etch procedures and includes a first SOM layer 1030 and a second SOM layer 1032. An uppermost portion of a cap layer 1012 has been removed via the etch procedures, exposing a top electrode contact layer portion 1010. The top electrode contact layer portion 1010 may be subsequently connected to an electrical contact layer (not shown). Sidewall portions of the cap layer 1012, including a sidewall portion 1013, have been protected from etching by the first SOM layer 1030 and the second SOM layer 1032.
Portions of the first SOM layer 1030 and the second SOM layer 1032 have been removed. Remaining portions of the first SOM layer 1030 and the second SOM layer 1032 may protect the ILD layer 1014 that surrounds each MTJ, and the ILD layer 1014 may increase structural integrity of each MTJ. Through multiple SOM deposition-etching cycles that open the top electrode contact layer portion 1010, structural integrity of the MRAM 1002 may be improved, and a manufacturing process window of process parameters, such as etch uniformity and selectivity, may be relaxed. Both the structural integrity of the MRAM 1002 and an expanded manufacturing process window can enhance manufacturing yield.

[0035] Referring to FIG. 11, a particular illustrative embodiment of a magnetic random access memory (MRAM) is depicted and generally designated 1100. An MRAM 1100 in fabrication, which includes a plurality of MTJs, such as MTJ 1104, has been subjected to etching, exposing a top electrode contact layer portion 1110. The top electrode contact layer portion 1110 is shown in contact with an electrical conducting layer 1170 that has been deposited subsequent to etching. A cap layer 1112 includes sidewall portions, such as a sidewall portion 1113, which protects electrically active portions of the MTJ 1104. An SOM layer 1130 protects an ILD layer 1114 that surrounds each MTJ. The top electrode contact layer portion 1110 may be patterned to separate each MTJ from the other MTJs. Each MTJ may be connected to the outside world via the electrical conducting layer 1170. By depositing one or more SOM layers 1130 that protect the ILD layer 1114 and portions of the cap layer 1112, such as the sidewall portion 1113, during etching, the MRAM manufacturing process parameter window may be increased and a greater yield may be achieved.

[0036] Referring to FIG. 12, a diagram of a particular illustrative embodiment depicting non-uniform deposition of a spin-on material layer to fabricate a magnetic random access memory (MRAM) is generally designated 1200. A substrate 1203 is disposed on a rotatable mounting structure 1201. The substrate 1203 has been partially patterned to form an MRAM including one or more MTJs, such as the MTJ 404 of FIG. 4. An SOM dispenser 1240 may be positioned at an adjustable radial distance 1244 from an axis of rotation 1246 of the mounting structure 1201, and the radial distance 1244 may change over time. The dispenser 1240 may dispense SOM material 1242 that is deposited on the substrate 1203. A rate of dispensing SOM material through the dispenser 1240 may be selectable. In a particular illustrative embodiment, the dispensing rate may be selected to be substantially constant over time. In another particular illustrative embodiment, the dispensing rate may be selected to be variable over time. A rate of radial speed of the dispenser 1240 may be selectable. Through selection of a radial speed profile (radial distance over time) and selection of the dispensing rate of the SOM liquid via the SOM dispenser 1240, a pre-determined thickness profile of a deposited SOM layer 1230 may be produced. In one particular non-limiting illustrative example, a uniform dispensing rate and a constant radial speed of the dispenser 1240 may result in a deposited SOM layer having a substantially constant thickness across the substrate 1203.
In another particular illustrative example, a non-uniform dispensing rate of the SOM liquid 1242 and a uniform or non-uniform radial velocity profile of the dispenser 1240 may result in a non-uniform thickness profile of the SOM layer 1230. In yet another particular illustrative example, uniform dispensing of the SOM liquid and a non-uniform radial velocity profile of the dispenser 1240 can result in a non-uniform thickness profile of the SOM layer 1230. The thickness profile may be constant, convex, or concave, depending on factors that may include the dispensing rate profile of the SOM liquid and the radial speed profile of the dispenser 1240. In a particular illustrative embodiment, outer portions, i.e., circumferential portions, of a substrate may experience greater etch rates than centermost portions, and a concave-shaped SOM layer 1230 may be advantageous during etching to protect outer portions of the substrate 1203 and an MRAM (not shown) in fabrication on the substrate 1203. In a particular illustrative example, a non-uniform SOM layer thickness profile 1280 may be employed to afford greater protection to circumferential portions of the substrate 1203 during an etching process. For instance, the SOM layer thickness profile 1280 shows thickness varying substantially linearly with radial distance from a center of the substrate 1203, producing a concave-shaped SOM layer 1230. A non-uniform SOM thickness profile, such as a concave-shaped thickness profile, may provide protection to outer portions of an MRAM structure in fabrication on a wafer against over-etching during fabrication of the MRAM. The non-uniform SOM thickness profile can compensate for substrate non-uniformity and make a top portion of each of the MTJ structures more uniform.

[0038] Referring to FIG. 13, a flow chart of a particular illustrative embodiment of a method of fabricating a magnetic random access memory is depicted. At block 1302, an interlayer dielectric (ILD) film is deposited onto a Magnetic Tunnel Junction (MTJ) cap layer of a Magnetic Random Access Memory (MRAM). Proceeding to block 1304, a chemical-mechanical planarization (CMP) process is applied to the ILD layer. Advancing to block 1306, a spin-on material (SOM), e.g., spin-on glass, photoresist, anti-reflective coating, organic material, or inorganic material, is deposited over the MTJ cap layer and the ILD layer, and may serve to protect the cap layer and the ILD layer during an etch procedure. (A densification heating process may also be applied if inorganic SOM materials are used.) Moving to block 1308, an etch procedure is performed, etching the SOM layer and portions of the cap layer that may become exposed. (If the SOM is an organic material, the SOM may be stripped after etching.) Proceeding to decision block 1310, a determination is made as to whether a top electrode contact layer window (also a "top electrode contact layer portion" herein) is open. When the top electrode contact layer window is open at each of the MTJs, the method terminates at 1314. When the top electrode contact window is not open at each of the MTJs, the method proceeds to decision block 1312, where a determination is made as to whether an additional SOM layer should be deposited to further protect the ILD and portions of the cap layer during a subsequent etch procedure. When the determination is made to deposit the additional SOM layer prior to the subsequent etch procedure, the method returns to block 1306, where the additional SOM layer is deposited prior to the subsequent etch at block 1308. When it is determined, at block 1312, not to add another SOM layer, processing continues with an additional etch procedure performed on the previously deposited SOM layer and portions of the cap layer, at block 1308.
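The deposit-and-etch loop of FIG. 13 can be summarized as control flow. The sketch below mirrors the block numbering above; the stubbed process steps, the window-open test, and the cycle limit are illustrative stand-ins for actual inspection or electrical probing, not the disclosed process control.

def deposit_ild():      print("deposit ILD (block 1302)")
def cmp_planarize():    print("CMP of ILD (block 1304)")
def deposit_som():      print("deposit SOM layer (block 1306)")
def etch():             print("etch SOM and exposed cap (block 1308)")

def fabricate(window_open, want_extra_som, max_cycles=5):
    # Repeat SOM deposition/etch until the top electrode contact
    # window is open at each MTJ (decision block 1310).
    deposit_ild()
    cmp_planarize()
    deposit_som()
    for cycle in range(max_cycles):
        etch()
        if window_open():               # decision block 1310
            return "done"               # terminate at 1314
        if want_extra_som(cycle):       # decision block 1312
            deposit_som()               # back to block 1306
        # otherwise the loop re-etches the existing SOM/cap (block 1308)
    return "cycle limit reached"

# Example run: the window opens on the second etch.
opened = iter([False, True])
print(fabricate(lambda: next(opened), lambda cycle: True))   # -> "done"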
[0039] The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the disclosed embodiments. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the principles defined herein may be applied to other embodiments without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope possible consistent with the principles and novel features as defined by the following claims.
In a semiconductor device including a first conductive layer, the first conductive layer is treated with a nitrogen/hydrogen plasma before an additional layer is deposited thereover. The treatment stuffs the surface with nitrogen, thereby preventing oxygen from being adsorbed onto the surface of the first conductive layer. In one embodiment, a second conductive layer is deposited onto the first conductive layer, and the plasma treatment lessens, if not eliminates, an oxide formed between the two layers as a result of subsequent thermal treatments. In another embodiment, a dielectric layer is deposited onto the first conductive layer, and the plasma treatment lessens, if not eliminates, the ability of the first conductive layer to incorporate oxygen from the dielectric.
What is claimed is: 1. A method of passivating a conductive material, comprising: providing a first conductive material, wherein said first conductive material has an ability to associate with oxygen; exposing said first conductive material to methylsilane; and forming a second conductive layer that is capacitively coupled to the first conductive material. 2. The method of claim 1, wherein the first conductive layer comprises at least one of tungsten nitride, polysilicon, tungsten, copper, and aluminum. 3. The method of claim 1, wherein exposing said first conductive material comprises exposing the first conductive material to at least one material in the recited group under process conditions comprising: a flow rate of the material of about 2 sccm to about 400 sccm; a flow rate of about 50 sccm to about 100 sccm for an inert carrier gas; a temperature ranging from about 150 to about 600 degrees Celsius; a pressure ranging from about 50 millitorr to about 760 torr; and a process time ranging from about 50 to about 500 seconds. 4. The method of claim 3, wherein the inert carrier gas comprises He or Ar. 5. A method of passivating a conductive material, comprising: providing a tungsten nitride layer that is capacitively coupled to a conductive layer; exposing the tungsten nitride layer to methylsilane; and providing a polysilicon layer on the tungsten nitride layer. 6. The method of claim 5, wherein exposing the tungsten nitride layer comprises exposing the tungsten nitride layer to at least methylsilane under process conditions comprising: a flow rate of the material of about 2 sccm to about 400 sccm; a flow rate of about 50 sccm to about 100 sccm for an inert carrier gas; a temperature ranging from about 150 to about 600 degrees Celsius; a pressure ranging from about 50 millitorr to about 760 torr; and a process time ranging from about 50 to about 500 seconds. 7. The method of claim 6, wherein the inert carrier gas comprises He or Ar. 8. A method of passivating a conductive layer, comprising: providing a first conductive plug on a semiconductor substrate; providing a first conductive layer on the plug; exposing the first conductive layer to methylsilane; and after exposing the first conductive layer, forming a second conductive layer on the first conductive layer. 9. The method of claim 8, wherein the plug comprises at least one of polysilicon, tungsten, copper, and aluminum. 10. The method of claim 8, wherein the first conductive layer comprises tungsten nitride. 11. The method of claim 8, wherein the second conductive layer comprises copper. 12. The method of claim 8, wherein exposing the first conductive layer reduces an ability of the first conductive layer to associate with oxygen.
RELATED APPLICATION

This application is a divisional of U.S. application Ser. No. 09/200,253, filed Nov. 25, 1998, now U.S. Pat. No. 6,303,972.

TECHNICAL FIELD

The present invention relates generally to a method of protecting against a conductive layer incorporating oxygen and a device including that layer. More specifically, the present invention relates to an in situ treatment of tungsten nitride.

BACKGROUND OF THE INVENTION

There is a constant need in the semiconductor industry to increase the number of dies that can be produced per silicon wafer. This need, in turn, encourages the formation of smaller dies. Accordingly, it would be beneficial to be able to form smaller structures and devices on each die without losing performance. For example, as capacitors are designed to take an ever-decreasing amount of die space, those skilled in the relevant art have sought new materials with which to maintain or even increase capacitance despite the smaller size.

One such material is tantalum pentoxide (Ta2O5), which can be used as the dielectric in the capacitor. Oftentimes, an electrically conductive layer, such as one made of hemispherical silicon grain (HSG), underlies the tantalum pentoxide and serves as the capacitor's bottom conductive plate. With other dielectrics, it is preferable to have a layer of polycrystalline silicon (polysilicon) deposited over the dielectric to serve as the capacitor's top conductive plate. If polysilicon is deposited directly onto tantalum pentoxide, however, several problems will occur. First, silicon may diffuse into the tantalum pentoxide, thus degrading it. Second, oxygen will migrate from the tantalum pentoxide, resulting in a capacitor that leaks charge too easily. Further, the oxygen migrates to the polysilicon, creating a layer of non-conductive oxide, which decreases the capacitance. This can also be a problem when using barium strontium titanate ((Ba, Sr)TiO3, or BST) as the dielectric.

In order to avoid these problems, it is known to deposit a top plate comprising two conductive layers. Polysilicon serves as the upper layer of the plate, with a non-polysilicon conductive material interfacing between the tantalum pentoxide and the polysilicon. One such material often used is tungsten nitride (WNx, wherein x is a number greater than zero). However, other problems arise with this process. Specifically, by the end of the capacitor formation process, a layer of non-conductive oxide often forms between the two conductive layers of the top plate. For ease of explanation, this non-conductive oxide will be assumed to be silicon dioxide (SiO2), although other non-conductive oxides, either alone or in combination, may be present.

Without limiting the current invention, it is theorized that the tungsten nitride is exposed to an ambient containing oxygen. The tungsten nitride adsorbs this oxygen due to bonds located on the grain boundaries of the tungsten nitride surface. Once the polysilicon layer is deposited, the device is then exposed to a thermal process. For example, the capacitor may be blanketed with an insulator, such as borophosphosilicate glass (BPSG). The BPSG layer may not be planar, especially if it is used to fill a trench in which the capacitor is constructed. Heat is applied to the die to cause the BPSG to reflow and thereby planarize.
The heat can cause the oxygen at the tungsten nitride surface to diffuse into the polysilicon, wherein the oxygen and silicon react to form silicon dioxide.
Regardless of the exact manner in which the silicon dioxide layer is formed, the result is that the HSG/Ta2O5/WNx/SiO2/polysilicon layers form a pair of capacitors coupled in series, wherein the HSG/Ta2O5/WNx layers serve as one capacitor and the WNx/SiO2/polysilicon layers serve as the second capacitor in the series. This pair of capacitors has less combined capacitance than the single HSG/Ta2O5/WNx/polysilicon capacitor that was intended to be formed.
Other problems can occur with the association of WNx and Ta2O5. For example, it is possible for the WNx to serve as the bottom plate of a capacitor, underlying the Ta2O5 dielectric. In that case, the deposition of the Ta2O5 or a subsequent reoxidation of that layer may cause the WNx layer to incorporate oxygen, thereby reducing capacitance.
It should be further noted that capacitor formation is not the only circumstance in which such problems can occur. There are many situations in which an in-process multi-layer conductive structure is exposed to oxygen and is subjected to conditions that encourage oxidation. Another example can be seen in the formation of metal lines. A layer of tungsten nitride, or perhaps tantalum nitride, may serve as an interface between the conductive material of a via and the metal line. If the interface is exposed to an ambient containing oxygen, then a thermal process involving the alloying or flowing of the metal in the metal line could cause a similar problem with oxidation, thereby hindering electrical contact.
As a result, there is a specific need in the art to prevent or at least decrease the degradation of capacitance in capacitors and of electrical communication in metal lines. There is also a more general need to prevent or at least protect against or minimize the migration of oxygen in relation to a conductive layer of a semiconductor device.
SUMMARY OF THE INVENTION
Accordingly, the current invention provides a method for protecting a conductive layer from oxygen. At least one exemplary embodiment concerns preventing or at least limiting a first conductive layer from incorporating oxygen beneath the layer's surface. Other exemplary embodiments address methods of limiting the first conductive layer's ability to adsorb oxygen. In doing so, such embodiments can help prevent the diffusion of oxygen into a second conductive layer, thereby protecting against oxidation between conductive layers. One such method serving as an exemplary embodiment involves exposing one of the conductive layers to an N2/H2 plasma before another conductive layer is provided thereon. In a preferred embodiment, this step is performed in situ relative to the environment or ambient atmosphere in which the one conductive layer was provided.
Other exemplary embodiments include the use of other nitrogen-containing plasmas, as well as the use of nitrogen-containing gases that are not in plasma form. Still other exemplary embodiments use gases that do not contain nitrogen.
Further, alternate embodiments protect against oxidation between conductive layers with a step performed ex situ relative to the environment or ambient atmosphere in which the one conductive layer was provided.
In one specific exemplary embodiment of this type, silane gas is flowed over the one conductive layer.
In preferred exemplary embodiments, at least one of the processes described above is performed on a conductive material that has the ability to adsorb or otherwise associate with oxygen. In a more specific embodiment, this material is a non-polysilicon material. Still more specific exemplary embodiments perform one of the processes on tungsten nitride or on tantalum nitride. In an even more specific exemplary embodiment, a tungsten nitride layer is treated before providing a polysilicon layer thereover.
In yet another exemplary embodiment, a treatment such as the ones described above occurs in the context of capacitor formation and, more specifically, occurs in between depositing two conductive layers serving as the capacitor's top plate. In another exemplary embodiment, the treatment occurs between depositing the bottom plate and the dielectric of a capacitor. Yet another exemplary embodiment involves treating a conductive layer as part of the formation of a conductive line.
In preferred embodiments, the method completely prevents the formation of the oxidation layer, although other exemplary embodiments allow for the restriction of the oxidation layer. In some embodiments, this oxidation layer is less than 10 angstroms thick. These methods also apply to embodiments concerning limiting a first conductive layer from incorporating oxygen beneath the layer's surface. In addition, the current invention also includes apparatus embodiments exhibiting these characteristics.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 depicts an in-process device as known in the prior art.
FIG. 2 depicts an in-process device having undergone an additional step known in the prior art.
FIG. 3 depicts an in-process device having undergone yet more steps known in the prior art.
FIG. 4 depicts one exemplary embodiment of the current invention.
FIG. 5 depicts a second exemplary embodiment of the current invention.
FIG. 6 depicts an in-process device as known in the prior art.
FIG. 7 depicts another in-process device as known in the prior art.
FIG. 8 depicts the in-process device in FIG. 7 having undergone an additional step known in the prior art.
FIG. 9 depicts a third exemplary embodiment of the current invention.
FIG. 10 depicts a fourth exemplary embodiment of the current invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
FIG. 1 depicts an "in-process" device 20 (one that is in the process of being constructed) having undergone processes known in the art. First, a substrate 22 has been provided. In the current application, the term "substrate" or "semiconductor substrate" will be understood to mean any construction comprising semiconductor material, including but not limited to bulk semiconductive materials such as a semiconductor wafer (either alone or in assemblies comprising other materials thereon), and semiconductive material layers (either alone or in assemblies comprising other materials). Further, the term "substrate" also refers to any supporting structure including, but not limited to, the semiconductive substrates described above. Over the substrate 22, a first conductive layer 24 is provided. It is assumed for purposes of explanation only that the in-process device is a capacitor in the process of being built. Accordingly, the first conductive layer 24 serves as one of the capacitor's conductive plates 25 (see FIG. 2) and may be made of HSG. Returning to FIG.
1, a dielectric 26 is provided which, in this case, is tantalum pentoxide. Subsequently, a second conductive layer is provided, which is intended to serve as part of the other conductive plate for the capacitor. Because the dielectric 26 is tantalum pentoxide, the second conductive layer should not be polysilicon. Rather, in this case, the second conductive layer is assumed to be a tungsten nitride layer 28. Once the tungsten nitride layer 28 is provided, however, there may be a tendency for oxygen to be adsorbed onto the surface of that layer 28.
Further, this adsorption may occur before a third conductive layer is provided. This layer can be a polysilicon layer 30 illustrated in FIG. 2. Ideally, the tungsten nitride layer 28 and the polysilicon layer 30 define the other conductive plate 32.
However, if the third conductive layer is oxidizable, then further process steps may cause other results. For example, as seen in FIG. 3, a subsequent thermal process may cause a reaction between the polysilicon layer 30 and the oxygen that had been adsorbed onto the surface of the tungsten nitride layer 28. In building a capacitor, this thermal process can be the reflowing of a BPSG layer 34 that is deposited over the polysilicon layer 30. The heat may cause the formation of a silicon dioxide layer 36 between the tungsten nitride layer 28 and the polysilicon layer 30, essentially creating two capacitors 38 and 40 connected in series and having less combined capacitance than the one capacitor originally intended.
One preferred exemplary embodiment of the current invention is a method for protecting against the formation of the silicon dioxide layer 36 during the formation of the capacitor. Once the prior art steps depicted in FIG. 1 are carried out, this exemplary embodiment has the tungsten nitride layer 28 exposed in situ to an N2 and H2 plasma. The term in situ indicates that the plasma process takes place in the same chamber, or at least within the same general atmosphere, as the process used to provide the tungsten nitride layer. At the very least, the term in situ indicates that the plasma process takes place before exposing the in-process device 20 to the atmosphere associated with providing the polysilicon layer 30. Exemplary process parameters include a temperature ranging from about 150 to about 600 degrees Celsius; gas flows including H2 at about 50 to about 2000 sccm, N2 at about 5 to about 1000 sccm, and Ar at about 200 to about 2000 sccm; a radio frequency (RF) power ranging from about 50 to about 1000 W; a pressure ranging from about 1 millitorr to about 10 torr; and a process time ranging from about 10 seconds to about 240 seconds. One of ordinary skill in the art, however, can appreciate that these parameters can be altered to achieve the same or a similar process.
Without limiting the current invention, it is theorized that this treatment stuffs the tungsten nitride grain boundaries with nitrogen or otherwise passivates the layer, thereby making the bonds at the grain boundaries less active. As a result, oxygen will be less likely to be adsorbed or otherwise become associated with the tungsten nitride layer, if at all. For example, without this treatment, a silicon dioxide layer 36 about 10 to 40 angstroms thick will form between the tungsten nitride layer 28 and the polysilicon layer 30 (see FIG. 3). The exemplary process described above can result in a silicon dioxide layer 36 that is less than 10 angstroms thick, as seen in FIG.
4, and is preferably non-existent, as illustrated in FIG. 5.
Moreover, the current invention is not limited to the process described above. There are other methods of providing nitrogen to the tungsten nitride that are within the scope of this invention. For example, another such plasma treatment involves the use of ammonia (NH3) in place of the nitrogen and hydrogen. In using ammonia for the plasma, parameters such as the ones previously described can be used, except that it is preferred to have a flow rate of ammonia ranging from about 5 sccm to about 1000 sccm and a process time of up to 500 seconds. Yet another embodiment includes a plasma treatment using N2 without H2. In that case, the exemplary process parameters are generally the same as those used with N2/H2 plasma except that the flow rate of N2 is 50-2000 sccm.
Alternatively, ultraviolet light could be provided in place of RF energy. For example, in using N2 and H2 or in using NH3, the process parameters would be similar to the ones described above for those gases, except the RF energy would be replaced with UV light at a power ranging from 50 W to 3 kW.
Further, the current invention also includes within its scope other methods of providing nitrogen without using electromagnetic energy to affect the gas. One such exemplary embodiment still involves introducing ammonia gas into the process chamber at the same flow rate and time as mentioned in the previous ammonia example, but at a pressure ranging from about 50 millitorr to about 1 atmosphere (760 torr).
In addition, the current invention is not limited to providing nitrogen to the tungsten nitride. Other gases may provide a reducer, passivator material, or some non-oxygen stuffing agent to the tungsten nitride surface, or otherwise cause the tungsten nitride to associate with an oxygen-free material. A plasma treatment using H2 without N2 serves as one such embodiment. Exemplary parameters include a temperature ranging from about 150 to about 600 degrees Celsius; gas flows including H2 at about 50 to about 2000 sccm, and Ar at about 200 to about 2000 sccm; an RF power ranging from about 50 to about 1000 W; a pressure ranging from about 1 millitorr to about 10 torr; and a process time ranging from about 10 seconds to about 240 seconds.
Still other gases include diborane (B2H6); phosphine (PH3); carbon-silicon compounds such as methylsilane (CH3SiH3) and hexamethyldisilane ((CH3)3Si-Si(CH3)3); and hexamethyldisilazane (HMDS). Additional alternate embodiments of the current invention use hydrazine (N2H4), monomethylhydrazine, carbon tetrafluoride (CF4), CHF3, HCl, and boron trichloride (BCl3), which are also useful in passivating dielectrics, as addressed in copending application 09/114,847, now issued as U.S. Pat. No. 6,201,276 B1. Also included are mixtures of any of the gases or types of gases described above. Exemplary non-plasma process parameters using these other gases include a flow rate of about 2 sccm to about 400 sccm for these gases; a flow rate of about 50 sccm to about 100 sccm for an inert carrier gas such as He or Ar; a temperature ranging from about 150 to about 600 degrees Celsius; a pressure ranging from about 50 millitorr to about 1 atmosphere (760 torr); and a process time ranging from about 50 to about 500 seconds.
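The exemplary non-plasma parameter ranges above can be captured as a simple recipe check. The sketch below is illustrative only: the Recipe class and its field names are hypothetical rather than any deposition tool's API, and the "about" ranges are treated as hard bounds purely to make the check concrete.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    """Hypothetical container for the exemplary non-plasma treatment."""
    gas: str                  # e.g., "CH3SiH3" (methylsilane) or "B2H6"
    gas_flow_sccm: float      # about 2 to about 400 sccm
    carrier: str              # inert carrier gas, e.g., "He" or "Ar"
    carrier_flow_sccm: float  # about 50 to about 100 sccm
    temp_c: float             # about 150 to about 600 degrees Celsius
    pressure_torr: float      # about 50 millitorr (0.05 torr) to 760 torr
    time_s: float             # about 50 to about 500 seconds

    def in_exemplary_ranges(self) -> bool:
        # Each comparison mirrors one of the ranges recited above.
        return (2 <= self.gas_flow_sccm <= 400
                and 50 <= self.carrier_flow_sccm <= 100
                and 150 <= self.temp_c <= 600
                and 0.05 <= self.pressure_torr <= 760
                and 50 <= self.time_s <= 500)

# A mid-range methylsilane treatment falls inside every exemplary window.
recipe = Recipe("CH3SiH3", 100.0, "Ar", 75.0, 400.0, 1.0, 200.0)
assert recipe.in_exemplary_ranges()
```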
Again, one skilled in the art is aware that these parameters can be altered to achieve the same or a similar process.
It is preferred that at least one of the processes described above occur between providing the tungsten nitride layer 28 and providing the polysilicon layer 30. It is more preferable that one of the inventive processes be carried out in a reducing atmosphere or at least before the tungsten nitride layer 28 is exposed to oxygen. Though such exposure is undesirable in many circumstances, it may be unavoidable. For example, the tungsten nitride layer 28 may be exposed to the cleanroom air at some point during processing. Thus, it is even more preferable to treat the tungsten nitride layer 28 in situ relative to the environment or ambient atmosphere used to provide the tungsten nitride layer 28. It is still more preferable to cover the treated tungsten nitride layer 28 before the in-process device 20 is exposed, even unintentionally, to oxygen. This is preferable because any exposure may allow at least some oxygen to associate with the tungsten nitride layer 28, even after one of the inventive treatments disclosed herein. Nevertheless, it is not necessary under the current invention to discourage oxygen adsorption before exposing the in-process device to the atmosphere associated with providing the polysilicon layer 30. If the in-process capacitor 20 is removed from the environment used to provide the tungsten nitride layer 28 and one of the inventive processes described has not been performed, then another option within the scope of the current invention is to expose the tungsten nitride layer 28 to a reducing atmosphere before providing the polysilicon layer 30. This can be done by flowing silane gas (SiH4) into the environment of the in-process device 20. Process parameters include a silane flow ranging from 50 to 1,000 sccm, a pressure of 10 torr to 1 atmosphere, a temperature ranging from 300 to 700 degrees Celsius, and a process time ranging from 10 to 300 seconds. Moreover, this silane treatment, if chosen, is not limited to ex situ situations. Silane gas may be used in place of or in combination with the in situ treatments described herein. Accordingly, any combination of the individual processes covered by the current invention is also within its scope.
As mentioned in the background section, oxygen diffusing away from the tungsten nitride is not the only concern when using that layer along with tantalum pentoxide. As seen in FIG. 6, a tungsten nitride layer 128 is deposited over the substrate 122. A dielectric layer 126, assumed to be tantalum pentoxide, is deposited over the tungsten nitride layer 128. Assuming the in-process device of FIG. 6 represents the early stage of a capacitor, the tungsten nitride layer 128 will serve as the bottom plate rather than part of the top plate as depicted in previous figures. The process of depositing the tantalum pentoxide dielectric layer 126 may cause the tungsten nitride layer 128 to incorporate oxygen. In addition, further processing, such as a reoxidation of the tantalum pentoxide dielectric layer 126, may cause the tungsten nitride layer 128 to incorporate still more oxygen. This incorporation of oxygen will reduce the capacitance of the finished device. Under these circumstances, a preferred embodiment of the current invention calls for exposing the tungsten nitride layer 128 to an N2/H2 plasma before depositing the tantalum pentoxide dielectric layer 126. This plasma is created under the parameters already disclosed above.
Although using an N2 and H2 plasma is preferred, the alternatives presented earlier (such as a non-plasma process, the use of another nitrogen-containing gas, or the use of a nitrogen-free gas) may also be used under these circumstances, and such alternatives fall within the scope of the invention. Further, it is not required to use tungsten nitride and tantalum pentoxide as the two layers, as embodiments of the current invention will work on other conductive layers and dielectric layers as well.
Thus, embodiments of the current invention protect against a conductive layer associating with oxygen in at least two circumstances. First, where a dielectric is deposited over a conductive layer, the disclosed methods help prevent oxygen from being incorporated within the conductive layer. Second, when a second conductive layer is deposited over the initial conductive layer, the disclosed methods inhibit oxygen from being incorporated by the second conductive layer and forming an oxide.
It should be further noted that embodiments of the current invention are not limited to the circumstances related to the formation of capacitors. As further mentioned in the background section, a similar risk of oxidation between two conductive materials can occur during the formation of metal lines in a semiconductor device. As seen in FIG. 7, insulation 42 has been deposited over the substrate 22 and subsequently etched to define a via 44. The via is filled with a conductive material, such as polysilicon, tungsten, copper, or aluminum. In this configuration, the conductive material may be referred to as a "plug" 46. The plug 46 will allow electrical communication between the underlying substrate 22, which may be doped to serve as part of a transistor, and the overlying line material 48. The line material 48 may be copper or some other conductive material, including an alloy. The line material 48 is often deposited within a container 50, also defined by etching insulation 42. (One skilled in the art can appreciate that different layers of insulation may define the via 44 and the container 50.)
As a part of this process, it may also be preferred to include an interposing layer 52 between the line material 48 and the plug 46. For purposes of explaining the current invention, it is assumed that the interposing layer 52 comprises tungsten nitride. This interposing layer 52 may enhance electrical contact between the line material 48 and the plug 46, promote adhesion of the line material 48 within the container 50, prevent or slow the diffusion of material across its boundaries, or serve some other purpose.
Regardless of the intended or inherent purpose, this interposing layer may adsorb oxygen after it is formed. Moreover, there may be thermal processes involved with or occurring subsequent to providing the line material 48. Such a thermal process could be used to deposit, flow, or alloy the line material 48. As a result of this or any other thermal process, it is believed that the oxygen adsorbed by the tungsten nitride interposing layer 52 will react with the line material 48, thereby forming an oxide layer 54 between the interposing layer 52 and the line material 48 (FIG. 8). This oxide layer 54, being an insulator, will hinder electrical communication between the line material 48 and the plug 46. Accordingly, the exemplary methods described above may be used to reduce the oxide layer 54 to a thickness of less than 10 angstroms and preferably down to 0 angstroms, as seen respectively in FIGS.
9 and 10.
One skilled in the art can appreciate that, although specific embodiments of this invention have been described for purposes of illustration, various modifications can be made without departing from the spirit and scope of the invention. For example, it is not necessary to use an exemplary treatment of the current invention on a tungsten nitride layer. The invention's embodiments will also be effective on tantalum nitride surfaces, as well as other surfaces that may adsorb or otherwise associate or interact with oxygen.
Further, it should also be noted that the general process described above for providing a metal line could be considered a damascene process, wherein a hole in insulation is filled with metal. This type of process is contrasted to processes wherein a continuous layer of metal is etched to a desired configuration and then surrounded with insulation. More specifically, the metal line process described above is an example of a dual damascene process. It follows, then, that the current invention may be applied in any type of damascene process. Moreover, one skilled in the art will now be able to appreciate that exemplary methods embodying the current invention apply to any situation involving the prevention, minimization, or change in a factor affecting the association of oxygen with a conductive layer. As a result, the current invention also includes within its scope devices that comprise two conductive layers and a minimal amount of oxide, if any, therebetween. Accordingly, the invention is not limited except as stated in the claims.
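The capacitance penalty that motivates the treatments above follows from the series-capacitor relation; a short worked example makes it concrete. The film thicknesses and permittivities below are assumed for illustration only and are not taken from the disclosure:

\[
\frac{1}{C_{\text{total}}} = \frac{1}{C_{\mathrm{Ta_2O_5}}} + \frac{1}{C_{\mathrm{SiO_2}}}, \qquad \frac{C}{A} = \frac{\varepsilon_r \varepsilon_0}{d}.
\]

Assuming a 10 nm Ta2O5 film (\(\varepsilon_r \approx 25\)) and a 4 nm (40 angstrom) parasitic SiO2 film (\(\varepsilon_r \approx 3.9\)):

\[
\frac{C_{\mathrm{Ta_2O_5}}}{A} \approx \frac{25\,\varepsilon_0}{10\ \mathrm{nm}} \approx 22\ \mathrm{fF/\mu m^2}, \qquad
\frac{C_{\mathrm{SiO_2}}}{A} \approx \frac{3.9\,\varepsilon_0}{4\ \mathrm{nm}} \approx 8.6\ \mathrm{fF/\mu m^2},
\]

\[
\frac{C_{\text{total}}}{A} \approx \frac{22 \times 8.6}{22 + 8.6} \approx 6.2\ \mathrm{fF/\mu m^2}.
\]

That is, a 40 angstrom oxide (the upper end of the untreated range noted above) cuts the intended capacitance by roughly 70 percent, while restricting the oxide to about 10 angstroms (1 nm, giving \(C_{\mathrm{SiO_2}}/A \approx 34.5\ \mathrm{fF/\mu m^2}\)) limits the loss to roughly 40 percent, which is why thinning or eliminating the oxide matters.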
The present disclosure includes apparatuses and methods related to a memory system with cache line data. An example apparatus can store data in a number of cache lines in a cache, wherein each of the number of cache lines includes a number of chunks of data that are individually accessible.
What is claimed is: 1. An apparatus, comprising: a cache controller; and a cache and a memory device coupled to the cache controller, wherein the cache controller is configured to issue commands to cause the cache to: store data in a number of cache lines in the cache, wherein each of the number of cache lines includes a number of chunks of data that are individually accessible. 2. The apparatus of claim 1, wherein each of the number of cache lines includes metadata, chunk metadata, tag information, and the number of chunks of data. 3. The apparatus of any one of claims 1-2, wherein the cache controller is configured to cause the cache to access a portion of the number of chunks of data in a particular cache entry while executing a command. 4. The apparatus of claim 1, wherein each of the number of cache lines includes metadata that is managed using a buffer on the cache controller. 5. The apparatus of any one of claims 1, 2, and 4, wherein each of the number of cache lines includes chunk metadata that is managed and updated by the cache controller as commands are executed. 6. An apparatus, comprising: a cache controller; and a cache and a memory device coupled to the cache controller, wherein the cache controller is configured to issue commands to cause the cache to: access a number of chunks of data in a cache line of the cache in response to receiving a request, wherein the cache controller manages the request using a buffer on the cache controller and the cache controller services the request by returning a portion of the number of chunks of data in the cache line corresponding to the request. 7. The apparatus of claim 6, wherein the cache controller is configured to issue commands to cause the cache to return the portion of the number of chunks of data corresponding to the request that were in the cache line when the request was received in response to the cache controller determining the request is a hit. 8. The apparatus of any one of claims 6-7, wherein the cache controller is configured to issue commands to cause the cache to retrieve the portion of the number of chunks of data corresponding to the request from the memory device in response to the cache controller determining the request is a miss. 9. An apparatus, comprising: a cache controller; and a cache and a memory device coupled to the cache controller, wherein the cache controller is configured to: receive requests from a host; manage the requests using a buffer on the cache controller; and service commands by returning chunks of data from cache lines of the cache to the host, wherein the chunks of data are a portion of the data from the cache lines. 10. The apparatus of claim 9, wherein the cache controller is configured to prioritize particular chunks of data that will not be evicted from the cache lines. 11. The apparatus of any one of claims 9-10, wherein the cache controller is configured to write the chunks of data from the memory device to the cache prior to receiving a request for the chunks of data. 12. The apparatus of claim 9, wherein the cache controller is configured to write dirty chunks of data to the memory device when not servicing commands. 13. The apparatus of any one of claims 9, 10, and 12, wherein the cache controller is configured to select chunks of data to remain in the cache based on a command from the host. 14.
A method, comprising: receiving a request for data at a cache controller; determining whether data associated with the request is in a cache using a buffer on the cache controller; and servicing the request, in response to determining the request is a hit, by returning a number of chunks of data from a cache line indicated by the buffer, wherein the number of chunks of data is a portion of the data on the cache line. 15. The method of claim 14, further including servicing the request, in response to determining the request is a miss, by writing a number of chunks of data associated with the request from a memory device to a cache line indicated by the buffer. 16. The method of claim 15, further including servicing the request, in response to determining the request is a miss, by returning a number of chunks of data from a cache line indicated by the buffer. 17. The method of any one of claims 14-16, further including servicing the request, in response to determining the request is a miss, by writing a number of dirty chunks of data in the cache line to a memory device coupled to the cache and the cache controller that were in the cache line when the request was received. 18. The method of any one of claims 14-16, further including servicing the request, in response to determining the request is a miss, by selecting the cache line based upon the cache line having fewer dirty chunks than other cache lines in the cache. 19. The method of any one of claims 14-16, further including servicing the request, in response to determining the request is a hit, by replacing a number of chunks of data in the cache line that are not associated with the request and were invalid when the request was received. 20. The method of claim 14, wherein servicing the request includes updating chunk metadata associated with the number of chunks of data. 21. The method of any one of claims 14-16 and 20, further including accessing a portion of the number of chunks of data in a particular cache entry while executing a command. 22. The method of any one of claims 14-16 and 20, further including managing the data in the cache using the buffer on the cache controller. 23. The method of any one of claims 14-16 and 20, further including managing chunk metadata by the cache controller as commands are executed.
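The cache-line organization recited in claims 2 and 5 above (line metadata, chunk metadata, tag information, and individually accessible chunks) can be sketched as a data structure. This is a minimal, hypothetical model in Python: the class and field names are assumptions, not anything prescribed by the claims, and the 32 x 128B geometry is simply the example used in the description that follows.

```python
from dataclasses import dataclass, field
from typing import Optional

CHUNK_SIZE = 128      # bytes per chunk (example geometry from the description)
CHUNKS_PER_LINE = 32  # 32 x 128B = 4KB per cache line

@dataclass
class Chunk:
    """One individually accessible chunk plus its chunk metadata."""
    data: bytearray = field(default_factory=lambda: bytearray(CHUNK_SIZE))
    valid: bool = False  # chunk metadata: chunk holds real data
    dirty: bool = False  # chunk metadata: modified since the last write-back

@dataclass
class CacheLine:
    """A cache entry: tag information, line-level metadata, and its chunks."""
    tag: Optional[int] = None
    metadata: dict = field(default_factory=dict)
    chunks: list = field(
        default_factory=lambda: [Chunk() for _ in range(CHUNKS_PER_LINE)])

    def dirty_count(self) -> int:
        # Used when choosing an eviction victim with the fewest dirty chunks.
        return sum(1 for c in self.chunks if c.dirty)
```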
CACHE LINE DATA
Technical Field
[0001] The present disclosure relates generally to memory devices, and more particularly, to methods and apparatuses of a memory system with cache line data.
Background
[0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computing devices or other electronic devices. There are many different types of memory including volatile and non-volatile memory. Volatile memory can require power to maintain its data (e.g., user data, error data, etc.) and includes random-access memory (RAM), dynamic random access memory (DRAM), and synchronous dynamic random access memory (SDRAM), among others. Non-volatile memory can provide persistent data by retaining stored data when not powered and can include NAND flash memory, NOR flash memory, read only memory (ROM), Electrically Erasable Programmable ROM (EEPROM), Erasable Programmable ROM (EPROM), and resistance variable memory such as phase change random access memory (PCRAM), resistive random access memory (RRAM), and magnetoresistive random access memory (MRAM), among others.
[0003] A memory system can include a cache memory that may be smaller and/or faster than other memory of the system (e.g., DRAM, NAND, disk storage, solid state drives (SSD), etc., which may be referred to as main memory). As an example, cache memory may comprise DRAM memory. A memory system can cache data to improve performance of the memory system. Therefore, providing cache memory that delivers improved performance for the memory system is desirable. The latency and hit rate of the cache memory are performance characteristics that can provide improved performance of the memory system.
Brief Description of the Drawings
[0004] Figure 1 is a block diagram of a computing system including an apparatus in the form of a host and an apparatus in the form of a memory system in accordance with one or more embodiments of the present disclosure.
[0005] Figure 2 is a block diagram of an apparatus in the form of a memory system in accordance with a number of embodiments of the present disclosure.
[0006] Figure 3 is a block diagram of an apparatus in the form of a cache including a number of cache lines in accordance with a number of embodiments of the present disclosure.
[0007] Figure 4 is a diagram of a cache line in accordance with a number of embodiments of the present disclosure.
Detailed Description
[0008] The present disclosure includes apparatuses and methods related to a memory system with cache line data. An example apparatus can store data in a number of cache lines in the cache, wherein each of the number of cache lines includes a number of chunks of data that are individually accessible.
[0009] In a number of embodiments, a cache line (e.g., cache entry) can include metadata, chunk metadata, tag information, and a number of chunks of data. The cache can be managed on a cache line level. For example, data transfer action determinations are made on the cache line and/or chunk level. A buffer on a cache controller can include address data and/or metadata associated with the data in the cache. The cache controller can use the address data and/or metadata in the buffer to manage the cache. The data in a cache line can be managed on the chunk level. For example, chunks of data can be read from and/or written to a cache line to service a request. The cache lines can include chunk metadata and chunks of data, and the cache controller can manage the cache lines on the chunk level.
For example, the cache controller can read, write, write-back, and/or fetch, among other operations, a portion of a cache line that includes a number of chunks of data that is less than a total amount of data on a cache line. Also, a cache line can be considered evicted once each of the dirty chunks of data on the cache line have been written back to the backing store in one or more operations.
[0010] In a number of embodiments, a cache line can be configured to store 4KB of data in 32 128B chunks, for example. Embodiments are not limited to particular cache line and/or chunk sizes and can include cache lines of any size and chunks of any size. A cache controller can manage the 4KB of data in a cache line that corresponds to 4KB of data at a particular location in a memory device (e.g., backing store). The 32 128B chunks of data in the 4KB cache line can be accessed on an individual chunk level such that each chunk can be read and/or written when servicing requests.
[0011] The cache controller can access a number of chunks of data in a cache line of the cache in response to receiving a request for data (e.g., to read and/or write data to the cache). The cache controller can manage the request using a buffer on the cache controller and the cache controller can service the request by returning a portion of the number of chunks of data in the cache line corresponding to the request. The cache controller can be configured to issue commands to cause the cache to return the portion of the number of chunks of data corresponding to the request that were in the cache line when the request was received in response to the cache controller determining the request is a hit. The cache controller can determine whether data corresponding to a request is a hit or a miss by using metadata for the cache that is stored in a buffer (e.g., SRAM, among other types of memory) on the cache controller.
[0012] In a number of embodiments, the cache controller can issue commands to cause the cache to retrieve a portion of the number of chunks of data corresponding to the request from the memory device in response to the cache controller determining the request is a miss. The cache controller can be configured to issue commands to cause the cache to, in response to determining the request is a miss, write dirty chunks of data in the cache line to the memory device that were in the cache line when the request was received. The cache controller can be configured to issue commands to cause the cache to, in response to determining the request is a miss, select the cache line based upon the cache line having fewer dirty chunks than other cache lines in the cache.
[0013] The cache controller can be configured to issue commands to cause the cache to, in response to determining the request is a hit, write dirty chunks of data in the cache line to the memory device. The cache controller can be configured to issue commands to cause the cache to, in response to determining the request is a hit, replace chunks of data in the cache line that are not associated with the request and were invalid when the request was received.
[0014] The cache controller can prioritize particular chunks of data that will not be evicted from the cache lines. The chunks of data can be prioritized based on how often the data will be accessed and/or the type of data. The cache controller can write the chunks of data from the memory device to the cache prior to receiving a request for the chunks of data (e.g., pre-fetch).
Chunks of data from a portion of a memory device can be pre-fetched and stored in the cache to at least partially fill a cache line that corresponds to the portion of the memory device.
[0015] In a number of embodiments, the cache controller can write dirty chunks of data to the memory device when not servicing commands. Also, the cache controller can select chunks of data to remain in the cache based on a command from the host. The host can identify portions of data that it would like to have in the cache and the cache controller can pin those portions of data in the cache so that they are never evicted from the cache.
[0016] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. As used herein, the designators "M", "N", and "X", particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included. As used herein, "a number of" a particular thing can refer to one or more of such things (e.g., a number of memory devices can refer to one or more memory devices).
[0017] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 120 may reference element "20" in Figure 1, and a similar element may be referenced as 220 in Figure 2. As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure.
[0018] Figure 1 is a functional block diagram of a computing system 100 including an apparatus in the form of a host 102 and an apparatus in the form of a memory system 104, in accordance with one or more embodiments of the present disclosure. As used herein, an "apparatus" can refer to, but is not limited to, any of a variety of structures or combinations of structures, such as a circuit or circuitry, a die or dice, a module or modules, a device or devices, or a system or systems, for example. In the embodiment illustrated in Figure 1, memory system 104 can include a controller 108, a cache controller 120, cache 110, and a number of memory devices 111-1, . . ., 111-X. The cache 110 and/or memory devices 111-1, . . ., 111-X can include volatile memory and/or non-volatile memory.
[0019] As illustrated in Figure 1, host 102 can be coupled to the memory system 104. In a number of embodiments, memory system 104 can be coupled to host 102 via a channel. Host 102 can be a laptop computer, personal computer, digital camera, digital recording and playback device, mobile telephone, PDA, memory card reader, interface hub, among other host systems, and can include a memory access device, e.g., a processor.
One of ordinary skill in the art will appreciate that "a processor" can intend one or more processors, such as a parallel processing system, a number of coprocessors, etc.
[0020] Host 102 can include a host controller to communicate with memory system 104. The host 102 can send commands to the memory system 104 via a channel. The host 102 can communicate with memory system 104 and/or the controller 108 on memory system 104 to read, write, and erase data, among other operations. A physical host interface can provide an interface for passing control, address, data, and other signals between the memory system 104 and host 102 having compatible receptors for the physical host interface. The signals can be communicated between host 102 and memory system 104 on a number of buses, such as a data bus and/or an address bus, for example, via channels.
[0021] Controller 108, a host controller, a controller on cache 110, and/or a controller on a memory device can include control circuitry, e.g., hardware, firmware, and/or software. In one or more embodiments, controller 108, a host controller, a controller on cache 110, and/or a controller can be an application specific integrated circuit (ASIC) coupled to a printed circuit board including a physical interface. Memory system 104 can include cache controller 120 and cache 110. Cache controller 120 and cache 110 can be used to buffer and/or cache data that is used during execution of read commands and/or write commands.
[0022] Cache controller 120 can include buffer 122. Buffer 122 can include a number of arrays of volatile memory (e.g., SRAM). Buffer 122 can be configured to store signals, address signals (e.g., read and/or write commands), and/or data (e.g., metadata and/or write data). The buffer 122 can temporarily store signals and/or data while commands are executed. Cache 110 can include arrays of memory cells (e.g., DRAM memory cells) that are used as cache and can be configured to store data that is also stored in a memory device. The data stored in cache and in the memory device is addressed by the controller and can be located in cache and/or the memory device during execution of a command.
[0023] Memory devices 111-1, . . ., 111-X can provide main memory for the memory system or could be used as additional memory or storage throughout the memory system 104. Each memory device 111-1, . . ., 111-X can include one or more arrays of memory cells, e.g., non-volatile and/or volatile memory cells. The arrays can be flash arrays with a NAND architecture, for example. Embodiments are not limited to a particular type of memory device. For instance, the memory device can include RAM, ROM, DRAM, SDRAM, PCRAM, RRAM, and flash memory, among others.
[0024] The embodiment of Figure 1 can include additional circuitry that is not illustrated so as not to obscure embodiments of the present disclosure. For example, the memory system 104 can include address circuitry to latch address signals provided over I/O connections through I/O circuitry. Address signals can be received and decoded by a row decoder and a column decoder to access the memory devices 111-1, . . ., 111-X. It will be appreciated by those skilled in the art that the number of address input connections can depend on the density and architecture of the memory devices 111-1, . . ., 111-X.
[0025] Figure 2 is a block diagram of an apparatus in the form of a memory system in accordance with a number of embodiments of the present disclosure.
In Figure 2, the memory system can be configured to cache data and service requests from a host and/or memory system controller. The memory system can include cache controller 220 with buffer 222. Buffer 222 can include SRAM memory, for example. Buffer 222 can include information about the data in cache 210, including metadata and/or address information for the data in the cache. The memory system can include a memory device 211 coupled to the cache controller 220. Memory device 211 can include non-volatile memory arrays and/or volatile memory arrays and can serve as the backing store for the memory system.
[0026] Cache controller 220, cache 210, and/or memory device 211 can each include a controller and/or control circuitry (e.g., hardware, firmware, and/or software) which can be used to execute commands on the cache controller 220, cache 210, and/or memory device 211. The control circuitry can receive commands from a host controller, a memory system controller, and/or cache controller 220. The control circuitry can be configured to execute commands to read and/or write data in the memory device 211.
[0027] Figure 3 is a block diagram of an apparatus in the form of a cache including a number of cache lines in accordance with a number of embodiments of the present disclosure. In Figure 3, cache 310 can include a number of cache entries, such as cache lines 330-1, . . ., 330-N. The cache lines 330-1, . . ., 330-N can include metadata 332-1, . . ., 332-N, chunk metadata 334-1, . . ., 334-N, tag data 336-1, . . ., 336-N, and a number of chunks of data 338-1-1, . . ., 338-M-N. Each cache line 330-1, . . ., 330-N can include metadata 332-1, . . ., 332-N for a corresponding cache line. The metadata 332-1, . . ., 332-N can also be stored in a buffer (e.g., buffer 122 in Figure 1) and used by the cache controller to manage the cache. For example, the metadata 332-1, . . ., 332-N can be used and updated by the cache controller to make hit/miss determinations for requests from the host.
[0028] Each cache line can include chunk metadata 334-1, . . ., 334-N for a corresponding cache line. Chunk metadata 334-1, . . ., 334-N can be used to execute commands. For example, a request for a portion of data on a cache line can be serviced by using the chunk metadata 334-1, . . ., 334-N to determine if the portion of data in the request is valid and/or dirty, to determine the location of the portion of data in the cache line, and/or to retrieve the portion of data from the cache line. The cache controller can access the chunk metadata 334-1, . . ., 334-N for servicing a request to read and/or write data to the cache.
[0029] Each cache line can include chunks of data 338-1-1, . . ., 338-M-N for a corresponding cache line. Chunks of data 338-1-1, . . ., 338-M-N can be accessed on a chunk-by-chunk basis by the cache controller when servicing a request. Each chunk of data 338-1-1, . . ., 338-M-N can include 128B of data and a cache line can include 32 chunks to store 4KB of data, for example.
[0030] Figure 4 is a diagram of a cache line in accordance with a number of embodiments of the present disclosure. The cache line 430 can include metadata 432, chunk metadata 434, tag data 436, and a number of chunks of data 438-1, . . ., 438-N.
[0031] The cache controller can access the chunks of data 438-1, . . ., 438-N in cache line 430 in response to receiving a request for data (e.g., to read and/or write data to the cache).
A portion of the number of chunks of data 438-1, . . ., 438-N corresponding to a request that were in the cache line when the request was received can be read and returned to the cache controller and/or host. For example, a request for data can be serviced by returning chunks 438-2, 438-3, 438-4, and 438-5. A cache controller can determine whether chunks of data 438-1, . . ., 438-N correspond to a request by using metadata for the cache that is stored in the buffer on the cache controller.
[0032] In a number of embodiments, the cache can write a portion of chunks of data 438-1, . . ., 438-N that are dirty. Also, when selecting a cache line to evict from the cache, a cache line with the fewest dirty chunks can be selected so that fewer chunks of data are written to the memory device when evicting a cache line from the cache.
[0033] The cache controller can issue commands to cause the cache to, in response to determining the request is a hit, write dirty chunks of data in the cache line to the memory device. The cache controller can issue commands to cause the cache to, in response to determining the request is a hit, replace chunks of data in the cache line that are not associated with the request and were invalid when the request was received.
[0034] The cache controller can prioritize particular chunks of data that will not be evicted from the cache lines. The chunks of data can be prioritized based on how often the data will be accessed and/or the type of data. The cache controller can write the chunks of data from the memory device to the cache prior to receiving a request for the chunks of data (e.g., pre-fetch). Chunks of data from a portion of a memory device can be pre-fetched and stored in the cache to at least partially fill a cache line that corresponds to the portion of the memory device.
[0035] In a number of embodiments, the cache controller can write dirty chunks of data to the memory device when not servicing commands. Also, the cache controller can select chunks of data to remain in the cache based on a command from the host. The host can identify portions of data that it would like to have in the cache and the cache controller can pin those portions of data in the cache so that they are never evicted from the cache.
[0036] Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or variations of various embodiments of the present disclosure. It is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of equivalents to which such claims are entitled.
[0037] In the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure.
This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodiments of the present disclosure require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
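As a rough consolidation of the servicing flow described above (a hit returns only the requested chunks; a miss selects the victim line with the fewest dirty chunks, writes back only its dirty chunks, and fetches only the requested chunks), the sketch below builds on the hypothetical Chunk/CacheLine classes sketched after the claims. A plain dict stands in for the controller's SRAM buffer and another for the backing store; none of these names come from the disclosure.

```python
class CacheController:
    def __init__(self, lines, backing_store):
        self.lines = lines            # list of CacheLine objects
        self.buffer = {}              # tag -> line index; stands in for the SRAM buffer
        self.backing = backing_store  # tag -> list of CHUNKS_PER_LINE bytearrays

    def _write_back_dirty(self, line):
        """Only dirty chunks are written to the memory device on eviction."""
        if line.tag is None:
            return
        stored = self.backing.setdefault(
            line.tag, [bytearray(CHUNK_SIZE) for _ in range(CHUNKS_PER_LINE)])
        for i, chunk in enumerate(line.chunks):
            if chunk.dirty:
                stored[i] = bytearray(chunk.data)
                chunk.dirty = False

    def service(self, tag, chunk_idxs):
        """Return only the requested chunks (a portion of the cache line)."""
        if tag not in self.buffer:  # miss
            # Victim selection: the line with the fewest dirty chunks.
            victim = min(range(len(self.lines)),
                         key=lambda i: self.lines[i].dirty_count())
            line = self.lines[victim]
            self._write_back_dirty(line)
            if line.tag in self.buffer:
                del self.buffer[line.tag]
            src = self.backing.get(
                tag, [bytearray(CHUNK_SIZE) for _ in range(CHUNKS_PER_LINE)])
            for i in chunk_idxs:  # fetch only the requested chunks
                line.chunks[i].data = bytearray(src[i])
                line.chunks[i].valid = True
            line.tag = tag
            self.buffer[tag] = victim
        line = self.lines[self.buffer[tag]]
        return [line.chunks[i].data for i in chunk_idxs]
```

For example, CacheController([CacheLine() for _ in range(4)], {}).service(0x2A, [2, 3]) misses, fills chunks 2 and 3 of a victim line from the (empty) backing store, and returns just those two 128-byte chunks.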
A search pattern is generated based on an input search word comprising a first bit sequence. The search pattern includes a representation of the input search word and an inverted representation of the input search word. The search pattern is provided as input to search lines of a ternary content addressable memory (TCAM) block. A subset of the search lines is set to a logic high state based on a first portion of the input search word being indicated as don't care bits. The search pattern causes at least one string in the TCAM block to conduct and provide a signal in response to a data entry stored on the string matching a second portion of the input search word that excludes the don't care bits. A location of the data entry is determined and output.
1. A system comprising: a memory device comprising a ternary content addressable memory (TCAM) block comprising an array of memory cells organized into a plurality of strings, a string in the plurality of strings storing a data entry, the string comprising a plurality of memory cells connected in series between a precharge match line and a page buffer, each of the memory cells connected to one of a plurality of search lines; and a processing device coupled to the memory device, the processing device to perform operations comprising: receiving an input search word comprising a sequence of bits; receiving an indicator of one or more don't care bits in the sequence of bits, the one or more don't care bits corresponding to a first portion of the input search word; setting a subset of the plurality of search lines to a logic high state, the subset of the plurality of search lines corresponding to the one or more don't care bits; generating a search pattern based on the input search word; providing the search pattern as input to the plurality of search lines, the search pattern causing the string to conduct and provide a signal to the page buffer in response to the data entry stored on the string matching at least a second portion of the input search word excluding the one or more don't care bits, the signal resulting from discharging the precharge match line via the string, the page buffer storing data in response to the signal; and outputting the location of the data entry within the TCAM block based on the data read by the page buffer. 2. The system of claim 1, wherein: the plurality of memory cells are configured as a plurality of complementary memory cell pairs, and bit values of the data entry are mapped to complementary memory cell pairs of the plurality of complementary memory cell pairs. 3. The system of claim 2, wherein: a complementary memory cell pair includes: a first memory cell for storing a bit value of the data entry, and a second memory cell connected in series with the first memory cell, the second memory cell configured to store an inversion of the bit value; a first search line of the plurality of search lines is connected to the first memory cell; and a second search line of the plurality of search lines is connected to the second memory cell. 4. The system of claim 3, wherein: the first search line receives a first signal representing a search bit value from the bit sequence, and the second search line receives a second signal representing an inversion of the search bit value. 5. The system of claim 4, wherein the processing device comprises: an inverter for generating an inversion of the sequence of bits; and a level shifter for generating the first signal based on the bit sequence and generating the second signal based on the inversion of the bit sequence. 6. The system of claim 1, wherein the operations further comprise: determining whether matching data is stored by the TCAM block based on the data read by the page buffer, the matching data including at least the second portion of the input search word; and outputting an indication of whether the matching data is stored by the TCAM block. 7. The system of claim 1, wherein the search pattern comprises a first set of signals representing the input search word and a second set of signals representing an inverse of the input search word. 8.
The system of claim 1, wherein: the one or more don't care bits are a first set of don't care bits; the subset of the plurality of search lines is a first subset; and the operations further include: receiving a second set of don't care bits prior to receiving the first set of don't care bits; setting a second subset of the plurality of search lines to a logic high state, the second subset of the plurality of search lines corresponding to the second set of don't care bits; and determining that matching data is not stored by the TCAM; wherein the first set of don't care bits is provided based on determining that matching data is not stored by the TCAM. 9. The system of claim 1, wherein the location of the data entry comprises a memory address of the string within the TCAM block. 10. The system of claim 1, wherein the memory device comprises a NAND-type flash memory device. 11. A method comprising: receiving an input search word comprising a first sequence of bits; receiving an indicator of one or more don't care bits in the first sequence of bits, the one or more don't care bits corresponding to a first portion of the input search word; setting a subset of a plurality of search lines of a ternary content addressable memory (TCAM) block to a logic high state, the subset of the plurality of search lines corresponding to the one or more don't care bits; generating a search pattern based on the input search word; providing the search pattern to the plurality of search lines of the TCAM, the search pattern making strings in the TCAM conductive and providing a signal to a page buffer in response to the strings storing matching data, the matching data comprising a second portion of said input search word excluding said don't care bits, said signal resulting from discharging a precharged match line via said string, said page buffer storing data in response to said signal; and outputting a location of a data entry within the TCAM block based on the data read by the page buffer. 12. The method of claim 11, wherein: the plurality of memory cells are configured as a plurality of complementary memory cell pairs, and bit values of the data entry are mapped to complementary memory cell pairs of the plurality of complementary memory cell pairs. 13. The method of claim 12, wherein: a complementary memory cell pair includes: a first memory cell for storing a bit value of the data entry, and a second memory cell connected in series with the first memory cell, the second memory cell configured to store an inversion of the bit value; a first search line of the plurality of search lines is connected to the first memory cell; and a second search line of the plurality of search lines is connected to the second memory cell. 14. The method of claim 13, wherein said providing said search pattern as input comprises: providing a first search signal representing a search bit value from a bit sequence to said first search line, and providing a second search signal representing an inversion of the search bit value to the second search line. 15. The method of claim 11, further comprising: determining whether matching data is stored by the TCAM block based on the data read by the page buffer, the matching data including at least the second portion of the input search word; and outputting an indication of whether the input search word is stored by the TCAM block. 16. The method of claim 11, wherein said outputting said location of said data entry comprises reading said data from said page buffer, said data indicating a location of said string. 17.
17. The method of claim 11, wherein the search pattern comprises a first set of signals representing the input search word and a second set of signals representing an inversion of the first sequence of bits.

18. The method of claim 11, wherein the generating of the search pattern comprises: inverting the first sequence of bits to produce a second sequence of bits; generating a first voltage signal representing the first sequence of bits; and generating a second voltage signal representing the second sequence of bits.

19. The method of claim 11, wherein the location of the data entry comprises a memory address of the string within the TCAM block.

20. A non-transitory computer-readable storage medium comprising instructions that, when executed by a memory subsystem controller, configure the memory subsystem controller to perform operations comprising: receiving an input search word comprising a first sequence of bits; receiving an indicator of one or more don't care bits in the first sequence of bits, the one or more don't care bits corresponding to a first portion of the input search word; setting a subset of a plurality of search lines of a ternary content addressable memory (TCAM) block to a logic high state, the subset of the plurality of search lines corresponding to the one or more don't care bits; generating a search pattern based on the input search word; providing the search pattern as input to the plurality of search lines of the TCAM block, the search pattern causing a string to conduct and provide a signal to a page buffer in response to the string storing matching data, the matching data comprising a second portion of the input search word excluding the don't care bits, the signal resulting from discharging a precharged match line through the string, the page buffer storing data in response to the signal; and outputting a location of a data entry within the TCAM block based on the data read by the page buffer.
ARCHITECTURE FOR TERNARY CONTENT ADDRESSABLE MEMORY SEARCH

PRIORITY APPLICATION

This application claims priority to U.S. Application Ser. No. 16/811,574, filed March 6, 2020, which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

Embodiments of the present disclosure relate generally to memory subsystems and, more particularly, to memory component architectures that facilitate ternary content addressable memory (TCAM) searches.

BACKGROUND

A memory subsystem can include one or more memory devices that store data. The memory devices can be, for example, non-volatile memory devices and volatile memory devices. In general, a host system can utilize a memory subsystem to store data at the memory devices and to retrieve data from the memory devices.

BRIEF DESCRIPTION OF THE DRAWINGS

The disclosure will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the disclosure.

FIG. 1 illustrates an example computing system that includes a ternary content addressable memory (TCAM) architecture implemented within a memory subsystem, in accordance with some embodiments of the present disclosure.

FIG. 2 is a block diagram illustrating additional details of a TCAM architecture implemented within a memory subsystem, in accordance with some embodiments of the present disclosure.

FIG. 3 illustrates components of a TCAM block implemented within a memory device in the example form of a NAND-type flash memory component, in accordance with some embodiments of the present disclosure.

FIG. 4 illustrates a single TCAM cell of a TCAM block implemented within a NAND-type flash memory device, in accordance with some embodiments of the present disclosure.

FIG. 5 is a block diagram illustrating a shift register that may be included as part of a TCAM architecture, in accordance with some embodiments of the present disclosure.

FIG. 6 is a flowchart illustrating example operation of a memory subsystem in performing inexact search matching on a TCAM, in accordance with some embodiments of the present disclosure.

FIG. 7 is a block diagram of an example computer system in which embodiments of the present disclosure may operate.

DETAILED DESCRIPTION

Aspects of the present disclosure relate to ternary content addressable memory (TCAM) architectures for memory subsystems. A memory subsystem can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices and memory modules are described below in conjunction with FIG. 1. In general, a host system can utilize a memory subsystem that includes one or more components, such as memory devices that store data. The host system can provide data to be stored at the memory subsystem and can request data to be retrieved from the memory subsystem. A memory subsystem controller receives commands or operations from the host system and converts the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices of the memory subsystem.

Content-addressable memory (CAM) is a special type of memory component used in certain very-high-speed search applications, such as identifier (ID) and pattern matching. In general, a CAM is searched by comparing input search data against a table of stored data entries and returning the memory address of matching data in the table. CAM is frequently implemented in dynamic random access memory (DRAM) or static random access memory (SRAM).
However, both DRAM and SRAM have limited memory capacity, which limits the amount of data that can be stored and searched with conventional CAM implementations.

Some artificial intelligence applications also require "inexact" matching, in which certain bits designated as "don't care" bits are ignored during the matching process, thereby allowing data entries to be searched for partial matches. That is, the search process may ignore a first portion of the search word (i.e., the portion corresponding to the don't care bits) and identify matching data corresponding to a second portion of the search word (i.e., the remaining bits). Traditionally, such inexact matching has been facilitated using a ternary CAM (TCAM) implemented in SRAM. Such implementations typically require a large number of transistors per data bit (e.g., up to 16 transistors per data bit) and are therefore very limited in capacity and expensive in die size.

A conventional NAND-type flash memory device may include one or more blocks. A NAND block comprises a two-dimensional (2D) array that includes pages (rows) and strings (columns). Three-dimensional (3D) NAND-type flash memory devices include multiple planes, each of which includes one or more blocks. A string comprises multiple single cells (hereinafter also referred to simply as "memory cells"), such as NAND flash cells, connected in series. A single NAND flash cell comprises a transistor that stores charge on a memory layer insulated by upper and lower oxide insulating layers. In general, a memory cell is programmed, and recognized by the memory subsystem as a binary value of zero, when charge is present on the memory layer of the memory cell. When a memory cell has no charge on its memory layer, it is erased and identified as a binary value of one.

The string is the unit used for reading in NAND-type flash memory devices. Strings in NAND-type flash devices typically have 32, 64, or more memory cells. Conventionally, each memory cell is used to represent a single bit value (0 or 1). Thus, in a conventional implementation, a string with 32 memory cells can represent 32 bits of data, and a string with 64 memory cells can represent 64 bits of data.

In a NAND-type flash memory block, the individual strings are connected to allow storage and retrieval of data from selected cells. Typically, the strings in a block are connected at one end to a common source line and at the other end to a bit line. Each string also contains two control mechanisms in series with the memory cells: string and ground selection transistors, which are connected to a string selection line and a ground selection line, respectively. Memory cells in NAND-type flash devices are connected horizontally at their control gates to a word line to form a page. A page is a set of connected memory cells that share the same word line and is the smallest unit that can be programmed. NAND-type flash memory devices can have page sizes of 64K or 128K cells. Although conventional NAND-type flash memory has a larger capacity than DRAM and SRAM, it is generally too slow for serial data search and access.

Aspects of the present disclosure address the foregoing and other deficiencies by using a TCAM architecture implemented in a NAND-type flash memory device to provide fast and high-capacity inexact search capabilities. According to this architecture, data entries are stored on the strings of a NAND-type flash memory array. In contrast to conventional NAND implementations, each bit of a data entry is mapped to a pair of memory cells configured to be complementary: the first memory cell of the pair stores the bit value, and the second memory cell of the pair stores the inverse of the bit value.
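To make the complementary mapping concrete, the following minimal Python sketch models how a data entry's bits could be expanded into cell pairs. The function name and data representation are illustrative assumptions, not the device's programming interface.

    # Behavioral sketch only: model the complementary-pair encoding described
    # above. For each data bit b, the first cell of the pair holds b and the
    # second cell holds its inverse.
    def encode_entry(bits: str) -> list[tuple[int, int]]:
        """Map each bit of a data entry to a (cell, complement) pair."""
        return [(int(b), int(b) ^ 1) for b in bits]

    # A 4-bit entry "1011" occupies eight series-connected cells (four pairs).
    assert encode_entry("1011") == [(1, 0), (0, 1), (1, 0), (1, 0)]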
A search pattern representing an input search word is input vertically on the word lines crossing each string in the array block. A single read operation compares the input search word against all strings in the array block and identifies the memory addresses of matching data.

To allow inexact matches, an indication of one or more don't care bits is provided as input to the TCAM architecture. A search component configured to search the TCAM block forces the search lines of the TCAM block corresponding to the don't care bits to a logic high state. In this way, the memory cells corresponding to the don't care bits are forced to return a positive match regardless of the underlying stored values. Essentially, this allows the underlying values to be ignored, thereby allowing the search component to identify stored data that matches the remainder of the search word.

As described herein, the NAND-based TCAM architecture enables new applications where high-speed and high-density inexact pattern matching is required, such as applications related to artificial intelligence, machine vision, and large genetic databases. The TCAM architecture also improves existing database inexact-search systems and algorithms, such as cloud networking and index storage in servers.

FIG. 1 illustrates an example computing system 100 that includes a memory subsystem 110, in accordance with some embodiments of the present disclosure. The memory subsystem 110 can include media, such as one or more volatile memory devices (e.g., memory device 140), one or more non-volatile memory devices (e.g., memory device 130), or a combination of such devices.

The memory subsystem 110 can be a storage device, a memory module, or a hybrid of a storage device and memory module. Examples of storage devices include SSDs, flash drives, universal serial bus (USB) flash drives, embedded multimedia controller (eMMC) drives, universal flash storage (UFS) drives, secure digital (SD) cards, and hard disk drives (HDDs). Examples of memory modules include dual in-line memory modules (DIMMs), small-outline DIMMs (SO-DIMMs), and various types of non-volatile dual in-line memory modules (NVDIMMs).

The computing system 100 can be a computing device such as a desktop computer, a laptop computer, a web server, a mobile device, a vehicle (e.g., an airplane, drone, train, automobile, or other conveyance), an Internet of Things (IoT)-enabled device, an embedded computer (e.g., one included in a vehicle, industrial equipment, or a networked commercial device), or such a computing device that includes memory and a processing device (e.g., a processor).

The computing system 100 can include a host system 120 that is coupled to one or more memory subsystems 110. In some embodiments, the host system 120 is coupled to different types of memory subsystems 110. FIG. 1 shows one example of a host system 120 coupled to one memory subsystem 110. As used herein, "coupled to" or "coupled with" generally refers to a connection between components, which can be an indirect communicative connection or a direct communicative connection (e.g., without intervening components), whether wired or wireless, including connections such as electrical, optical, magnetic, and the like.

The host system 120 can include a processor chipset and a software stack executed by the processor chipset.
The processor chipset can include one or more cores, one or more caches, a memory controller (e.g., an NVDIMM controller), and a storage protocol controller (e.g., a PCIe controller, SATA controller). The host system 120 uses the memory subsystem 110, for example, to write data to the memory subsystem 110 and to read data from the memory subsystem 110.

The host system 120 can be coupled to the memory subsystem 110 via a physical host interface. Examples of a physical host interface include, but are not limited to, a Serial Advanced Technology Attachment (SATA) interface, a Peripheral Component Interconnect Express (PCIe) interface, a Universal Serial Bus (USB) interface, a Fibre Channel interface, a Serial Attached SCSI (SAS) interface, a double data rate (DDR) memory bus, a Small Computer System Interface (SCSI), a dual in-line memory module (DIMM) interface (e.g., a DIMM socket interface that supports double data rate (DDR)), Open NAND Flash Interface (ONFI), double data rate (DDR), Low Power Double Data Rate (LPDDR), and so forth. The physical host interface can be used to transmit data between the host system 120 and the memory subsystem 110. When the memory subsystem 110 is coupled with the host system 120 by the PCIe interface, the host system 120 can further utilize an NVM Express (NVMe) interface to access components (e.g., the memory device 130). The physical host interface can provide an interface for passing control, address, data, and other signals between the memory subsystem 110 and the host system 120. FIG. 1 illustrates one memory subsystem 110 as an example. In general, the host system 120 can access multiple memory subsystems via a same communication connection, multiple separate communication connections, and/or a combination of communication connections.

The memory devices 130, 140 can include any combination of the different types of non-volatile memory devices and/or volatile memory devices. The volatile memory devices (e.g., memory device 140) can be, but are not limited to, random access memory (RAM), such as dynamic random access memory (DRAM) and synchronous dynamic random access memory (SDRAM).

Some examples of non-volatile memory devices (e.g., memory device 130) include NAND-type flash memory and write-in-place memory, such as a three-dimensional cross-point ("3D cross-point") memory device, which is a cross-point array of non-volatile memory cells. A cross-point array of non-volatile memory can perform bit storage based on a change of bulk resistance, in conjunction with a stackable cross-gridded data access array. Additionally, in contrast to many flash-based memories, cross-point non-volatile memory can perform a write-in-place operation, where a non-volatile memory cell can be programmed without the memory cell being previously erased. NAND-type flash memory includes, for example, two-dimensional NAND (2D NAND) and three-dimensional NAND (3D NAND).

Each of the memory devices 130 can include one or more arrays of memory cells. One type of memory cell, for example, single-level cells (SLCs), can store one bit per cell. Other types of memory cells, such as multi-level cells (MLCs), triple-level cells (TLCs), and quad-level cells (QLCs), can store multiple bits per cell. In some embodiments, each of the memory devices 130 can include one or more arrays of memory cells such as SLCs, MLCs, TLCs, QLCs, or any combination of such.
In some embodiments, a particular memory device can include an SLC portion, and an MLC portion, a TLC portion, or a QLC portion of memory cells. The memory cells of the memory devices 130 can be grouped as pages, which can refer to a logical unit of the memory device used to store data. With some types of memory (e.g., NAND), pages can be grouped to form blocks.

Although non-volatile memory devices such as NAND-type flash memory (e.g., 2D NAND, 3D NAND) and 3D cross-point arrays of non-volatile memory cells are described, the memory device 130 can be based on any other type of non-volatile memory, such as read-only memory (ROM), phase-change memory (PCM), self-selecting memory, other chalcogenide-based memories, ferroelectric transistor random access memory (FeTRAM), ferroelectric random access memory (FeRAM), magnetoresistive random access memory (MRAM), spin transfer torque (STT)-MRAM, conductive bridging RAM (CBRAM), resistive random access memory (RRAM), oxide-based RRAM (OxRAM), NOR flash memory, and electrically erasable programmable read-only memory (EEPROM).

As shown, any one or more of the memory devices 130 can be configured to include one or more ternary content addressable memory (TCAM) blocks 112. A TCAM block 112 comprises one or more arrays of memory cells organized into strings. Each string stores a data entry and comprises memory cells connected in series between a match line and a page buffer. That is, the TCAM block 112 includes a plurality of match lines, and each match line is connected to one of the plurality of strings in the array. The match lines of the TCAM block 112 correspond to the bit lines of the NAND block on which the TCAM block 112 is implemented. Within a given string, the memory cells are organized into complementary memory cell pairs. Each bit value of the data entry stored by a string is mapped to one of the complementary memory cell pairs in the string.

The TCAM block 112 can be searched by providing a search pattern as input to the search lines of the TCAM block 112. The search lines of the TCAM block 112 correspond to the word lines of the NAND block on which the TCAM block 112 is implemented. The match lines of the TCAM block 112 are precharged to facilitate searching; that is, a voltage signal is applied to the match lines of the TCAM block 112 before the search input. During a search operation, if any matching data is stored by the TCAM block 112, one or more strings (e.g., the strings storing the matching data) become conductive in response to the search pattern input at the search lines, and the corresponding match lines discharge a signal. If no matching data is stored, all strings remain non-conductive. Each match line is further connected to a page buffer (e.g., comprising one or more latches) that receives the discharge signal and stores data indicating that matching data is stored along the connected match line.

The memory subsystem controller 115 (or controller 115, for simplicity) can communicate with the memory devices 130 to perform operations such as reading data, writing data, or erasing data at the memory devices 130 and other such operations. The controller 115 can include hardware such as one or more integrated circuits and/or discrete components, a buffer memory, or a combination thereof. The hardware can include digital circuitry with dedicated (i.e., hard-coded) logic to perform the operations described herein.
The controller 115 can be a microcontroller, special-purpose logic circuitry (e.g., a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.), or another suitable processor.

The controller 115 can include a processor 117 (e.g., a processing device) configured to execute instructions stored in a local memory 119. In the illustrated example, the local memory 119 of the controller 115 includes embedded memory configured to store instructions for performing various processes, operations, logic flows, and routines that control operation of the memory subsystem 110, including handling communications between the memory subsystem 110 and the host system 120.

In some embodiments, the local memory 119 can include memory registers storing memory pointers, fetched data, and so forth. The local memory 119 can also include ROM for storing microcode. While the example memory subsystem 110 in FIG. 1 has been illustrated as including the controller 115, in another embodiment of the present disclosure, a memory subsystem 110 does not include a controller 115 and can instead rely upon external control (e.g., provided by an external host, or by a processor or controller separate from the memory subsystem).

In general, the controller 115 can receive commands or operations from the host system 120 and can convert the commands or operations into instructions or appropriate commands to achieve the desired access to the memory devices 130. The controller 115 can be responsible for other operations such as wear-leveling operations, garbage-collection operations, error detection and error-correcting code (ECC) operations, encryption operations, caching operations, and address translations between a logical address (e.g., logical block address (LBA), namespace) and a physical address (e.g., physical block address) associated with the memory devices 130. The controller 115 can further include host interface circuitry to communicate with the host system 120 via the physical host interface. The host interface circuitry can convert the commands received from the host system 120 into command instructions to access the memory devices 130, as well as convert responses associated with the memory devices 130 into information for the host system 120.

The memory subsystem 110 can also include additional circuitry or components that are not illustrated. In some embodiments, the memory subsystem 110 can include a cache or buffer (e.g., DRAM) and address circuitry (e.g., a row decoder and a column decoder) that can receive an address from the controller 115 and decode the address to access the memory devices 130.

In some embodiments, the memory devices 130 include a local media controller 135 that operates in conjunction with the memory subsystem controller 115 to execute operations on one or more memory cells of the memory devices 130. An external controller (e.g., memory subsystem controller 115) can externally manage the memory device 130 (e.g., perform media management operations on the memory device 130). In some embodiments, a memory device 130 is a managed memory device, which is a raw memory device combined with a local controller (e.g., local controller 135) for media management within the same memory device package. An example of a managed memory device is a managed NAND (MNAND) device.

The memory subsystem 110 also includes a search component 113 that facilitates searches of the one or more TCAM blocks 112. Although illustrated as part of the memory device 130, in some embodiments, the search component 113 can be included in the controller 115 or the memory device 140. In some embodiments, the controller 115 includes at least a portion of the search component 113.
For example, the controller 115 can include the processor 117 (processing device) configured to execute instructions stored in the local memory 119 for performing the operations of the search component 113 described herein. In some embodiments, the search component 113 is part of the host system 120, an application, or an operating system. In some embodiments, the local media controller 135 includes the search component 113.

The search component 113 generates a search pattern based on a received input search word and inputs the search pattern vertically along the search lines of the TCAM block 112. As noted above, if matching data is stored by the TCAM block 112, the search pattern causes the string storing the data entry to become conductive; because the connected match line (also referred to as the "matched line") is precharged, the matched line provides a signal to the connected page buffer indicating that the search word is stored on the connected string. The location (e.g., memory address) of any matching data entry can be identified based on the signal provided via the matched line as a result of the string being conductive. More specifically, the page buffer connected to any matched line stores data in response to detecting the discharge signal indicating that matching data is stored along the matched line. A component of the search component 113 (e.g., read-out circuitry) can read the data from the page buffer. Based on the data read from the page buffer, the search component 113 outputs an indication of whether the search word is stored by the TCAM block 112 and an indicator of the location of the matched line.

To facilitate inexact matches, the search component 113 forces a portion of the search lines of the TCAM block 112 to a logic high state during a search operation. The search lines forced to the high state correspond to one or more bits of the search word that are designated as don't care bits. The one or more don't care bits can be specified in input received from the host system 120. The host system 120 can select the one or more don't care bits programmatically or based on user input. Forcing a search line to the logic high state causes the connected memory cells to be ignored for purposes of identifying matching data, because those cells become conductive regardless of the underlying value stored by the memory cell or the corresponding value of the search word indicated by the search pattern. In this manner, the matching data stored by the TCAM block 112 and identified by the search component 113 corresponds to the remainder of the input search word; that is, the matching data corresponds to a portion of the input search word that excludes the don't care bits.

FIG. 2 is a block diagram illustrating additional details of the TCAM architecture implemented within the memory subsystem 110, in accordance with some embodiments of the present disclosure. As shown in FIG. 2, a memory device 200 can be organized into multiple planes 201-1 through 201-4. The memory device 200 is an example of the memory device 130. Although FIG. 2 illustrates the memory device 200 as comprising four planes, it shall be appreciated that the memory device 200 is not limited to four planes and can include more or fewer planes in other embodiments. Each of the planes 201-1 through 201-4 is configured to include one or more TCAM blocks 112. The number of TCAM blocks 112 per plane can be configured via software or hardware.

As shown, the search component 113 receives an input search word 206 and generates a search pattern 208 based on the input search word 206.
The input search word 206 comprises a first sequence of bits (e.g., "1011"). The search pattern 208 generated by the search component 113 comprises a first set 209A of voltage signals (SL0-M) representing the input search word and a second set of voltage signals representing a second sequence of bits comprising an inversion of the first sequence (e.g., "0100"). The search component 113 includes an inverter 210 to generate the inversion of the input search word and a level shifter 211 to generate the first and second sets of voltage signals. In generating the first and second sets of voltage signals, the level shifter 211 can use a voltage Vhigh to represent a binary value of "1" and a voltage Vlow to represent a binary value of "0", where Vhigh is above a threshold voltage (Vt) and Vlow is below the threshold voltage.

As shown, the search component 113 also includes a mask register 212 to facilitate inexact matches. The search component 113 receives an indicator of don't care bits 213 (e.g., from the host system 120) and writes the don't care bits 213 to the mask register 212. In the example illustrated in FIG. 2, the mask register 212 is M bits long, and an entry has been written to the mask register 212 at D2, which indicates that the third bit of the data entries stored in the TCAM block 112 is a don't care bit.

The search component 113 uses the mask register 212 during a search operation to select the search lines of the TCAM block 112 that are placed in a logic high state to force the connected memory cells to return a positive match, regardless of whether the underlying stored data value matches the corresponding bit value of the search word 206. The search lines of the TCAM block 112 set to the logic high state by the search component 113 correspond to the don't care bits 213. The search component 113 sets a search line to the logic high state by connecting the search line to a voltage source that supplies a voltage signal corresponding to the logic high state. Following the notation above, the search component 113 can set a search line to the logic high state by connecting the search line to Vhigh.

To search one of the TCAM blocks 112, the search component 113 inputs the search pattern 208 vertically along the search lines of the TCAM block 112 being searched. Input of the search pattern 208 causes any complementary memory cell pair representing a matching stored bit value to become conductive. If a string is storing matching data, the entire string becomes conductive. The match lines in the TCAM block 112 are precharged (e.g., connected to Vhigh), and because the match lines are precharged, input of the search pattern 208 on the search lines causes any match line whose string stores matching data (e.g., a data entry matching at least a portion of the search word 206) to output a discharge signal, because the corresponding string is conductive. The discharge signal provides an indication that matching data is stored on the string connected to the match line.

Placing the don't-care-bit search lines in the logic high state causes the connected memory cells to be ignored for purposes of identifying matching data, because those cells become conductive regardless of the underlying value stored by the memory cell or the corresponding value of the search word represented by the search pattern. In this manner, the matching data in the case of an inexact match corresponds to the remainder of the input search word 206; that is, the matching data corresponds to a portion of the input search word 206 that excludes the don't care bits 213.
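The pattern-generation path just described (inverter 210, level shifter 211, mask register 212) can be sketched behaviorally as follows. This is a minimal sketch: the voltage values and function name are assumptions for illustration, not the device's actual interface.

    # Illustrative sketch of search-pattern generation: invert the search word,
    # map bits to voltage levels, and force the lines selected by the mask
    # register to Vhigh so the corresponding cell pairs always conduct.
    V_HIGH, V_LOW = 3.0, 0.0  # placeholder levels above/below the cell Vt

    def search_pattern(word: str, dont_care: set[int]) -> list[tuple[float, float]]:
        """Return one (SL, inverted-SL) voltage pair per bit position."""
        levels = []
        for i, ch in enumerate(word):
            if i in dont_care:
                levels.append((V_HIGH, V_HIGH))  # mask-register hit: both lines high
            else:
                bit = int(ch)
                levels.append((V_HIGH if bit else V_LOW,   # SL carries the bit
                               V_LOW if bit else V_HIGH))  # inverted line carries its inverse
        return levels

    # Search word "1011" with the third bit (index 2) marked don't care, as in
    # the mask-register example above (entry at D2).
    print(search_pattern("1011", dont_care={2}))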
Each string is connected between a match line and a page buffer (e.g., comprising one or more latch circuits), and the page buffer stores data indicating that matching data is stored along the match line in response to a signal provided as a result of the match line discharging through the string. As shown, the plane 201-4 includes a page buffer 214. The page buffer 214 can include one or more latch circuits. Physically, the page buffer 214 can reside underneath or adjacent to the array of memory cells in which the TCAM block 112 is implemented.

The page buffer 214 latches data based on the signal provided by the match line when matching data is stored by the connected string that conducts the signal to the page buffer 214. The search component 113 reads the data from the page buffer 214 and provides, as output, an indicator of whether the input search word 206 is stored by the TCAM block 112 being searched along with the location of the matching data (e.g., the memory address of the string within the array).

In some embodiments, the search component 113 can search the TCAM blocks 112 of the planes 201-1 through 201-4 for matching data sequentially. That is, the search component 113 can initially search the TCAM blocks 112 of the plane 201-1, thereafter search the TCAM blocks 112 of the plane 201-2, thereafter search the TCAM blocks 112 of the plane 201-3, and finally search the TCAM blocks 112 of the plane 201-4.

In some embodiments, the search component 113 can search the TCAM blocks 112 of the planes 201-1 through 201-4 for matching data in parallel. That is, the search component 113 can search all TCAM blocks 112 of the planes 201-1 through 201-4 for matching data simultaneously. Searching the planes 201-1 through 201-4 in parallel allows all data entries stored by all TCAM blocks 112 of the planes 201-1 through 201-4 to be searched in a single search operation, rather than searching all data entries in four separate search operations. Accordingly, parallel searching can allow the search component 113 to achieve an increase in search speed relative to embodiments in which sequential searching is utilized.

In some embodiments, data entries can be stored across two or more of the planes 201-1 through 201-4. In these instances, the search component 113 can search for portions of matching data across the two or more of the planes 201-1 through 201-4. Distributing data entries across planes allows for larger word sizes compared to embodiments in which data entries are stored within a single plane. For example, if each of the TCAM blocks 112 supports 64-bit words, distributing data entries among all four planes allows the memory device 200 to support 256-bit words (4×64=256).
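As a worked example of this word-size arithmetic, the sketch below splits a 256-bit word into four 64-bit segments, one per plane. The slicing order is an assumption for illustration, since the disclosure does not fix a particular layout.

    # Worked example: distribute a wide search word across four planes whose
    # TCAM blocks each support 64-bit entries (4 x 64 = 256 bits total).
    PLANES, BITS_PER_PLANE = 4, 64

    def split_across_planes(word: str) -> list[str]:
        assert len(word) == PLANES * BITS_PER_PLANE
        return [word[p * BITS_PER_PLANE:(p + 1) * BITS_PER_PLANE]
                for p in range(PLANES)]

    segments = split_across_planes("10" * 128)  # a 256-bit example word
    assert len(segments) == 4 and all(len(s) == 64 for s in segments)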
To avoid obscuring the inventive subject matter with unnecessary detail, various functional components that are not germane to conveying an understanding of the inventive subject matter have been omitted from FIG. 2. However, a skilled artisan will readily recognize that various additional functional components may be included as part of the memory subsystem 110 to facilitate additional functionality that is not specifically described herein. For example, the memory subsystem 110 can include additional circuitry (e.g., one or more multiplexers) that allows conventional read and write operations to be performed on one or more of the memory devices 130 and 140.

FIG. 3 illustrates components of a TCAM block 300 implemented within the memory device 130, which in this example is a NAND-type flash memory device, in accordance with some embodiments of the present disclosure. The TCAM block 300 is an example of the TCAM block 112.

As shown, the TCAM block 300 comprises match lines 302-0 through 302-N, search lines 304-0 through 304-M, and inverted search lines 306-0 through 306-M. In this implementation, the match lines 302-0 through 302-N of the TCAM block 300 correspond to bit lines of the NAND-type flash memory device, and the search lines 304-0 through 304-M and inverted search lines 306-0 through 306-M of the TCAM block 300 correspond to word lines of the NAND-type flash memory device.

Each of the match lines 302-0 through 302-N is connected to a string comprising multiple memory cells connected in series. For example, the match line 302-0 is connected to a string that includes memory cells 308-0 through 308-X (where X=2M). The memory cells in each string of the TCAM block 300 are configured as complementary pairs. For example, in the string connected to the match line 302-0, the memory cells 308-0 through 308-X are organized into complementary memory cell pairs 310-0 through 310-M.

A pair of memory cells is configured to be complementary in that one memory cell of the pair stores a data value (e.g., "0") and the other memory cell of the pair stores the inverse of the data value (e.g., "1"). For example, as shown in FIG. 4, the memory cell pair 310-0 includes the memory cells 308-0 and 308-1. The memory cell 308-0 stores a data bit value DATA, and the memory cell 308-1 stores the inverse of the data bit value, denoted DATA' hereinafter. Further, as shown in FIG. 3, the search line 304-0 is connected to the control gate of the memory cell 308-0, and the inverted search line 306-0 is connected to the control gate of the memory cell 308-1.

The search line 304-0 receives a first signal SL representing a search bit value from the input search word, and the inverted search line 306-0 receives a second signal SL' representing the inverse of the search bit value. If SL matches DATA and SL' matches DATA', the memory cell pair 310-0 will be conductive. For example, Table 1, below, is a truth table defining the behavior of any given one of the memory cell pairs 310-0 through 310-M.

Table 1

    SL   SL'   DATA   DATA'   Complementary pair
    0    1     0      1       Conductive (match)
    1    0     1      0       Conductive (match)
    0    1     1      0       Non-conductive (mismatch)
    1    0     0      1       Non-conductive (mismatch)
    1    1     X      X       Conductive (search don't care)
    X    X     0      0       Conductive (stored don't care)

In Table 1, a DATA value of "0" indicates an erased state and a DATA value of "1" indicates a programmed state of the memory cell. SL is the search bit value, SL' is the inverse of the search bit value, DATA is the stored bit value, and DATA' is the inverse of the stored bit value. As shown, a complementary cell pair is conductive when the search data value matches the stored data value and the inverse of the search data value matches the inverse of the stored data value. To facilitate an inexact match, if SL corresponds to a don't care bit, then SL and SL' are both set to "1" (the logic high value). As shown, in these cases the complementary cell pair conducts independently of the actual search bit or data bit.

The TCAM block 300 is also capable of storing data comprising one or more don't care (X) bits. For example, as shown in Table 1, when DATA indicates a don't care, a binary value of zero (logic low state) is stored at both memory cells of the complementary cell pair to which DATA is mapped (e.g., the memory cells 308-0 and 308-1 of the complementary memory cell pair 310-0). In these cases, the complementary memory cell pair will be conductive.
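The behavior summarized in Table 1 can be modeled in a few lines of Python. Per the table's conventions, a stored value of 0 corresponds to the erased (always conductive) state and a stored value of 1 to the programmed state, which conducts only when its search line is driven high; the helper names are illustrative assumptions.

    # Behavioral model of Table 1: whether one complementary cell pair conducts.
    def cell_conducts(line_high: int, programmed: int) -> bool:
        # An erased cell always conducts; a programmed cell conducts only
        # when its search line is driven to the logic high state.
        return (not programmed) or bool(line_high)

    def pair_conducts(sl: int, sl_inv: int, data: int, data_inv: int) -> bool:
        # The two cells of the pair are in series, so both must conduct.
        return cell_conducts(sl, data) and cell_conducts(sl_inv, data_inv)

    assert pair_conducts(1, 0, 1, 0)      # search 1 vs stored 1: match
    assert not pair_conducts(0, 1, 1, 0)  # search 0 vs stored 1: mismatch
    assert pair_conducts(1, 1, 1, 0)      # search don't care: both lines high
    assert pair_conducts(0, 1, 0, 0)      # stored don't care: both cells erased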
Returning to FIG. 3, each string in the TCAM block 300 stores a data entry, and each data bit value of the data entry is mapped to one of the memory cell pairs 310-0 through 310-M in the string. In this manner, within each of the complementary memory cell pairs in the string, the first memory cell stores the bit value from the data entry and the second memory cell stores the inverse of the bit value from the data entry, unless the bit value is indicated as a don't care bit, in which case both the first memory cell and the second memory cell store a binary value of zero.

In an example in which the NAND-type flash memory device supports 128 memory cells per string, the match line 302-0 is connected to a string whose memory cell pairs store the bit values D0,0 through D0,63 of a 64-bit data entry. In this example, the bit value D0,0 is mapped to the memory cell pair 310-0, which includes the memory cells 308-0 and 308-1. More specifically, the memory cell 308-0 stores the bit value D0,0, and the complementary memory cell 308-1 stores the inverse of the bit value D0,0.

A search pattern 312 can be input vertically along the search lines 304-0 through 304-M and the inverted search lines 306-0 through 306-M. More specifically, the search lines 304-0 through 304-M receive a first set of voltage signals SL0-M representing the search word, and the inverted search lines 306-0 through 306-M receive a second set of voltage signals representing the inverse of the search word. Input of the search pattern 312 along the search lines makes any string storing matching data conductive because, as discussed above, each individual memory cell pair in such a string will be conductive. Because the match lines are precharged, a conductive string allows its match line to discharge. The page buffer connected to the conductive string latches data indicating the location of the matching data (i.e., the search word) in the TCAM block 300.

As noted above, to facilitate an inexact match, one or more of the search lines 304-0 through 304-M and the corresponding one or more of the inverted search lines 306-0 through 306-M can be set to a logic high state prior to, or in parallel with, input of the search pattern. The search component 113 can set any search line to the logic high state by connecting the search line to a voltage source that provides a voltage signal corresponding to the logic high state (e.g., Vhigh). As an example, assume the second bit of the search word is a don't care bit 213. The search line 304-1 and the inverted search line 306-1 correspond to the second bit SL1 of the search word and the second bit D1 of the stored data entries. Accordingly, as part of performing this example search, the search component 113 sets the search line 304-1 and the inverted search line 306-1 to the logic high state by connecting the search line 304-1 and the inverted search line 306-1 to Vhigh. As a result, each of the complementary memory cell pairs connected to the search line 304-1 and the inverted search line 306-1 becomes conductive, thereby indicating a positive match regardless of the actual corresponding values in the underlying stored data or the search pattern.

The search component 113 outputs an indication of whether matching data is stored by the TCAM block 300 along with an indicator of the location (e.g., memory address) of the matching data.
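Combining the pair model above with the series connection of a string gives a compact sketch of a whole-string comparison; pair_conducts is the illustrative helper from the previous sketch.

    # A string conducts (discharging its precharged match line) only if every
    # complementary pair along it conducts; one blocking pair stops the string.
    def string_matches(search: str, stored: str, dont_care: set[int]) -> bool:
        for i, (s, d) in enumerate(zip(search, stored)):
            sl, sl_inv = (1, 1) if i in dont_care else (int(s), int(s) ^ 1)
            if not pair_conducts(sl, sl_inv, int(d), int(d) ^ 1):
                return False
        return True

    # With the second bit (index 1) marked don't care, "1011" matches a stored
    # "1111"; with no don't care bits it does not.
    assert string_matches("1011", "1111", dont_care={1})
    assert not string_matches("1011", "1111", dont_care=set())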
In some embodiments, the search component 113 includes read-out circuitry that reads data from the page buffers of the TCAM block 300 to identify the location of matching data.

In some embodiments, two or more page buffers of the TCAM block 300 can be coupled together to form a serial shift register. Consistent with these embodiments, the search component 113 shifts data from a first page buffer connected to a match line to a second page buffer, and the search component 113 includes an output compare-and-count component that tracks the number of shifts from one page buffer to the next to identify the location of matching data stored by the TCAM block 300.

Two page buffers can be coupled together using a single transistor to form the shift register. For example, as shown in FIG. 5, a shift register 500 comprises a page buffer 502 and a page buffer 504 connected by a transistor 506.
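One way to read the compare-and-count idea is sketched below: latched match flags are shifted serially toward an output stage while a counter records how many shifts preceded each flag, which yields the matching string's position. This is an illustrative interpretation of the scheme, not circuit-level detail.

    # Sketch of shift-register addressing: shift the latched match flags out
    # one position per step and record the shift count at which each latched
    # match reaches the output stage.
    def locate_matches(latched_flags: list[int]) -> list[int]:
        chain = list(latched_flags)
        locations = []
        for shift in range(len(chain)):
            if chain[0]:                 # compare stage sees a latched match
                locations.append(shift)
            chain = chain[1:] + [0]      # advance every latch toward the output
        return locations

    # Strings 2 and 5 latched a match; their addresses fall out of the count.
    assert locate_matches([0, 0, 1, 0, 0, 1, 0, 0]) == [2, 5]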
FIG. 6 is a flowchart illustrating an example method 600 for performing an inexact search of a TCAM component of a memory subsystem, in accordance with some embodiments of the present disclosure. The method 600 can be performed by processing logic that can include hardware (e.g., a processing device, circuitry, dedicated logic, programmable logic, microcode, hardware of a device, an integrated circuit, etc.), software (e.g., instructions run or executed on a processing device), or a combination thereof. In some embodiments, the method 600 is performed by the search component 113 of FIG. 1. Although shown in a particular sequence or order, unless otherwise specified, the order of the processes can be modified. Thus, the illustrated embodiments should be understood only as examples, and the illustrated processes can be performed in a different order, and some processes can be performed in parallel. Additionally, one or more processes can be omitted in various embodiments; thus, not all processes are required in every embodiment. Other process flows are possible.

At operation 605, the processing device receives an input search word. The input search word can be received from a host system (e.g., the host system 120) in communication with the processing device. The input search word comprises a first sequence of M bits (e.g., "100110101011").

At operation 610, the processing device receives an indicator of one or more bits in the first sequence of M bits that are designated as don't care bits. The don't care bits of the search word can be ignored for purposes of identifying matching data. The indicator of the don't care bits can be received from the host system. The host system can select the one or more don't care bits programmatically or based on user input. The one or more don't care bits correspond to a first portion of the input search word, and the remaining portion of the search word (i.e., the portion not designated as don't care) is referred to hereinafter as the "second portion."

At operation 615, the processing device sets a subset of the search lines of a TCAM block to a logic high state based on the don't care bits. The logic high state represents a binary value of one. The processing device sets the search lines to the logic high state by connecting the search lines to a voltage signal representing the logic high state. Accordingly, the voltage signal has a corresponding voltage that is greater than a threshold voltage, where a voltage below the threshold represents a logic low state and a voltage above the threshold represents the logic high state.

The TCAM block comprises an array of memory cells (e.g., a NAND-type flash memory array). The memory cells of the array are organized into strings, and each of the strings stores a data entry. A string comprises multiple memory cells connected in series between a precharged match line and a page buffer. The match line is precharged in that it is connected to a voltage signal (e.g., representing a logic high state). The TCAM block further comprises a plurality of search lines, and each of the memory cells in each string is connected to one of the plurality of search lines.

The subset of the search lines of the TCAM block set to the logic high state corresponds to the don't care bits of the input search word. That is, the subset of the search lines of the TCAM block set to the logic high state is connected to memory cells that store data values at the positions corresponding to the don't care bits in the input search word. For example, as noted above, the memory cells in each string are organized into complementary memory cell pairs. Each bit value of the data entry stored by a string is mapped to a complementary memory cell pair in the string. Specifically, a first memory cell stores the bit value and a second memory cell stores the inverse of the bit value; more specifically, the first memory cell stores a first charge representing the bit value, and the second memory cell stores a second charge representing the inverse of the bit value. Thus, for each don't care bit, the search lines corresponding to the bit position of the don't care bit are set to the logic high state. That is, for each complementary memory cell pair in the array representing a bit value at the position of a don't care bit, the first search line connected to the first memory cell storing the data value is set to the logic high state, and the second search line connected to the second memory cell storing the inverted data value is also set to the logic high state.
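Operation 615 thus amounts to translating mask entries into a set of physical lines to drive high; each don't-care position selects both lines of its complementary pair. The line-naming convention below is an assumption for illustration.

    # Sketch of operation 615: each don't care bit position selects both the
    # search line and the inverted search line of its complementary pair.
    def lines_to_force_high(dont_care_bits: set[int]) -> set[str]:
        forced = set()
        for i in dont_care_bits:
            forced.add(f"SL{i}")    # line to the cell storing the bit value
            forced.add(f"SLB{i}")   # line to the cell storing its inverse
        return forced

    # A mask entry at position 2 forces that pair of lines high.
    assert lines_to_force_high({2}) == {"SL2", "SLB2"}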
At operation 620, the processing device generates a search pattern based on the input search word. The search pattern comprises a first set of voltage signals representing the search word; that is, the first set of voltage signals represents the first sequence of M bits. The search pattern further comprises a second set of voltage signals representing a second sequence of M bits comprising an inversion of the first sequence of bits (e.g., "011001010100"). Accordingly, in generating the search pattern, the processing device generates the second sequence of bits by inverting the input search word and converts the first and second sequences of bits into the first and second sets of signals, respectively. The processing device may instead generate the first set of signals based on the first sequence of bits and generate the second set of signals by inverting the first set of signals. In generating the first and second sets of voltage signals, the processing device can use a voltage Vhigh to represent a binary value of "1" and a voltage Vlow to represent a binary value of "0", where Vhigh is above a threshold voltage (Vt) and Vlow is below Vt.

At operation 625, the processing device provides the search pattern to the search lines of the TCAM block. The subset of the search lines of the TCAM block set to the logic high state remains in the logic high state regardless of the values represented by the search pattern. In providing the search pattern to the search lines of the TCAM block, the processing device provides a first search signal representing a search bit value from the first sequence of bits to the first search line connected to the first memory cell of a complementary memory cell pair, and provides a second search signal representing the inverse of the search bit value to the second search line connected to the second memory cell of the complementary memory cell pair.

If the second portion of the input search word (i.e., the portion excluding the don't care bits) is stored by the TCAM block, input of the search pattern causes the string on which the second portion of the input search word is stored to become conductive. Because the match line is precharged, the conductive string allows the match line to discharge. That is, the string conducts the signal resulting from the match line discharging based on the data entry stored on the string connected to the match line matching the second portion of the input search word. The conductive string provides the signal to the page buffer connected at the other end of the string. The page buffer latches data in response to the signal provided as a result of the match line discharging. The latched data indicates that the match line connected to the page buffer stores a data entry matching at least the second portion of the input search word.

At operation 630, the processing device determines whether any matching data is stored by the TCAM block. For purposes of inexact matching, the matching data includes the second portion of the input search word but may exclude the first portion. The processing device can determine whether any matching data is stored by the TCAM block by reading data from the page buffers of the TCAM block.

If, at operation 630, the processing device determines that no matching data is stored by the TCAM block, the processing device can return to operation 610 and repeat the process described above using at least one new don't care bit. An indicator of the at least one new don't care bit can again be provided by the host system at operation 610. That is, a first set of don't care bits can be provided, and if no matching data is identified based on the first set of don't care bits, a second set of don't care bits can be provided, and the processing device searches the TCAM block for matching data based on the second set of don't care bits.

If the processing device determines that matching data is stored by the TCAM block, the processing device determines, at operation 635, the location of any matching data stored by the TCAM block. That is, the processing device determines the location of a stored data entry that matches the second portion of the input search word. The processing device can determine the location of the matching data based on the data read from the page buffers. The location of the matching data can comprise one or more memory addresses corresponding to one or more strings within the array.

At operation 640, the processing device outputs an indication of whether matching data is stored by the TCAM block along with the location of the matching data. The location of the matching data can be used, for example, to retrieve additional data associated with the input search word that is stored by the memory subsystem. The associated data may be stored in a different portion of the memory device on which the TCAM block is implemented or on another memory device of the memory subsystem.
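Putting operations 605 through 640 together, the following end-to-end sketch drives the behavioral string model from the earlier sketches, including the retry with a second set of don't care bits when no match is found. The TCAM contents and helper names are illustrative assumptions, not the device's actual interface.

    # End-to-end sketch of method 600 against the behavioral model above.
    def tcam_search(entries: list[str], word: str, dont_care: set[int]) -> list[int]:
        """Return the string addresses whose data entries match."""
        return [addr for addr, stored in enumerate(entries)
                if string_matches(word, stored, dont_care)]

    block = ["1111", "1001", "0011"]            # one data entry per string
    matches = tcam_search(block, "1011", {1})   # first set of don't care bits
    if not matches:                             # operation 630: no match, retry
        matches = tcam_search(block, "1011", {1, 2})
    print("matching string addresses:", matches)  # -> [0] (stored "1111")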
Example 1 is a system comprising: a memory device comprising a ternary content addressable memory (TCAM) block comprising an array of memory cells organized into a plurality of strings, a string in the plurality of strings storing a data entry, the string comprising a plurality of memory cells connected in series between a precharged match line and a page buffer, each of the memory cells connected to one of a plurality of search lines; and a processing device, coupled to the memory device, to perform operations comprising: receiving an input search word comprising a sequence of bits; receiving an indicator of one or more don't care bits in the sequence of bits, the one or more don't care bits corresponding to a first portion of the input search word; setting a subset of the plurality of search lines to a logic high state, the subset of the plurality of search lines corresponding to the one or more don't care bits; generating a search pattern based on the input search word; providing the search pattern as input to the plurality of search lines, the search pattern causing the string to conduct and provide a signal to the page buffer in response to the data entry stored on the string matching at least a second portion of the input search word excluding the one or more don't care bits, the signal generated by discharging the precharged match line via the string, the page buffer storing data in response to the signal; and outputting a location of the data entry within the TCAM block based on the data read by the page buffer.

In Example 2, the plurality of memory cells of Example 1 is optionally configured as a plurality of complementary memory cell pairs, and bit values of the data entry are mapped to complementary memory cell pairs of the plurality of complementary memory cell pairs.

In Example 3, the subject matter of any one of Examples 1 and 2 optionally includes a complementary memory cell pair comprising: a first memory cell to store a bit value of the data entry; and a second memory cell, connected in series with the first memory cell, to store an inversion of the bit value, wherein a first search line of the plurality of search lines is connected to the first memory cell, and wherein a second search line of the plurality of search lines is connected to the second memory cell.

In Example 4, the subject matter of any one of Examples 1 to 3 optionally includes the first search line receiving a first search signal representing a search bit value from the input search word, and the second search line receiving a second search signal representing an inversion of the search bit value.

In Example 5, the search pattern of any one of Examples 1 to 4 optionally comprises a first set of signals representing the input search word and a second set of signals representing an inversion of the input search word.

In Example 6, the subject matter of any one of Examples 1 to 5 optionally includes an inverter to generate an inversion of the sequence of bits, and a level shifter to generate the first set of signals based on the sequence of bits and to generate the second set of signals based on the inversion of the sequence of bits.

In Example 7, the subject matter of any one of Examples 1 to 6 optionally includes determining whether matching data is stored by the TCAM block based on the data read by the page buffer, the matching data comprising at least the second portion of the input search word, and outputting an indication of whether matching data is stored by the TCAM block.
Example 8 includes the subject matter of any one of Examples 1 to 7, wherein the one or more don't care bits are a first set of don't care bits, the subset of the plurality of search lines is a first subset, and the operations optionally further comprise: receiving a second set of don't care bits prior to receiving the first set of don't care bits; setting a second subset of the plurality of search lines to a logic high state, the second subset of the plurality of search lines corresponding to the second set of don't care bits; and determining that matching data is not stored by the TCAM block, wherein the first set of don't care bits is provided based on determining that matching data is not stored by the TCAM block.

In Example 9, the location of the data entry of any one of Examples 1 to 8 optionally comprises a memory address of the string within the TCAM block.

In Example 10, the memory device of any one of Examples 1 to 9 optionally comprises a NAND-type flash memory device.

Example 11 is a method comprising: receiving an input search word comprising a first sequence of bits; receiving an indicator of one or more don't care bits in the first sequence of bits, the one or more don't care bits corresponding to a first portion of the input search word; setting a subset of a plurality of search lines of a ternary content addressable memory (TCAM) block to a logic high state, the subset of the plurality of search lines corresponding to the one or more don't care bits; generating a search pattern based on the input search word; providing the search pattern to the plurality of search lines of the TCAM block, the search pattern causing a string in the TCAM block to conduct and provide a signal to a page buffer in response to the string storing matching data, the matching data comprising a second portion of the input search word excluding the don't care bits, the signal generated by discharging a precharged match line via the string, the page buffer storing data in response to the signal; and outputting a location of a data entry within the TCAM block based on the data read by the page buffer.

In Example 12, the plurality of memory cells of Example 11 is optionally configured as a plurality of complementary memory cell pairs, and bit values of the data entry are mapped to complementary memory cell pairs of the plurality of complementary memory cell pairs.

In Example 13, the subject matter of any one of Examples 11 and 12 optionally includes a complementary memory cell pair comprising: a first memory cell to store a bit value of the data entry; and a second memory cell, connected in series with the first memory cell, to store an inversion of the bit value, wherein a first search line of the plurality of search lines is connected to the first memory cell, and wherein a second search line of the plurality of search lines is connected to the second memory cell.

In Example 14, the providing of the search pattern as input of any one of Examples 11 to 13 optionally includes providing a first search signal representing a search bit value from the input search word to the first search line, and providing a second search signal representing an inversion of the search bit value to the second search line.

In Example 15, the subject matter of any one of Examples 11 to 14 optionally includes determining whether matching data is stored by the TCAM block based on the data read by the page buffer, the matching data comprising at least the second portion of the input search word, and outputting an indication of whether matching data is stored by the TCAM block.
whether matching data is stored by the TCAM block based on data read by the page buffer, the matching data including the input searching for at least the second portion of a word; and outputting an indication of whether matching data is stored by the TCAM block.In Example 16, the subject matter of any one of Examples 11 to 15 optionally includes determining said location of said data entry by reading said data from said page buffer, said data indicating said string of the described location.In Example 17, the search pattern of any of Examples 11-16 optionally includes a first set of signals representing the input search word and a second set of signals representing an inversion of the bit sequence.In Example 18, the sequence of bits is a first sequence of bits, and said generating a search pattern in any of Examples 11 to 17 optionally comprises: inverting said first sequence of bits to generate said second bit sequence; generating a first voltage signal representing the first bit sequence; and generating a second voltage signal representing the second bit sequence.In Example 19, the location of the data entry of any of Examples 11-18 optionally includes a memory address of the string within the TCAM block.Instance 20 is a non-transitory computer-readable storage medium comprising instructions that, when executed by a processing device, configure the processing device to perform operations comprising: receiving an input search word comprising a first bit sequence; receiving the an indicator of one or more don't care bits in the first sequence of bits, the one or more don't care bits corresponding to the first part of the input search word; A subset of search lines is set to a logic high state, the subset of the plurality of search lines corresponding to the one or more don't care bits; generating a search pattern based on the input search word; setting the search pattern Provided as input to the plurality of search lines of the TCAM, the search mode makes the strings conductive and provides a signal to a page buffer in response to the strings storing matching data, the matching data including the input a second portion of the search word excluding the don't care bits, the signal resulting from the discharge of a precharged match line through the string, the page buffer storing data in response to the signal; and based on the The page buffer reads the data to output the location of the data entry within the TCAM block.7 illustrates an example machine in the form of a computer system 700 within which a set of instructions may be executed for causing the machine to perform any one or more of the methods discussed herein. In some embodiments, computer system 700 may correspond to a host system (e.g., computer system 120 of FIG. 1 ) that includes, is coupled to, or utilizes a memory subsystem (e.g., memory subsystem 110 of FIG. 1 ), Or it can be used to execute the operations of the controller (eg, execute the operating system to execute the operations corresponding to the search component 113 of FIG. 1 ). In alternative embodiments, the machine may be connected (eg, networked) to other machines in a local area network (LAN), intranet, extranet, and/or the Internet. 
The machine may operate in the capacity of a server or a client machine in a client-server network environment, as a peer machine in a peer-to-peer (or distributed) network environment, or as a server or a client machine in a cloud computing infrastructure or environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a network appliance, a server, a network router, a switch, or a bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

The example computer system 700 includes a processing device 702, a main memory 704 (e.g., ROM, flash memory, DRAM such as SDRAM or Rambus DRAM (RDRAM), etc.), a static memory 707 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage system 718, which communicate with each other via a bus 730.

Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More specifically, the processing device 702 may be a Complex Instruction Set Computing (CISC) microprocessor, a Reduced Instruction Set Computing (RISC) microprocessor, a Very Long Instruction Word (VLIW) microprocessor, a processor implementing other instruction sets, or a processor implementing a combination of instruction sets. The processing device 702 may also be one or more special-purpose processing devices such as ASICs, FPGAs, digital signal processors (DSPs), network processors, or the like. The processing device 702 is configured to execute instructions 727 for performing the operations and steps discussed herein. The computer system 700 may further include a network interface device 708 to communicate over a network 720.

The data storage system 718 may include a machine-readable storage medium 724 (also referred to as a computer-readable medium) on which is stored one or more sets of instructions 727 or software embodying any one or more of the methodologies or functions described herein. The instructions 727 may also reside, completely or at least partially, within the main memory 704 and/or within the processing device 702 during execution thereof by the computer system 700, the main memory 704 and the processing device 702 also constituting machine-readable storage media. The machine-readable storage medium 724, the data storage system 718, and/or the main memory 704 may correspond to the memory subsystem 110 of FIG. 1.

In one embodiment, the instructions 727 include instructions to implement functionality corresponding to a search component (e.g., the search component 113 of FIG. 1). While the machine-readable storage medium 724 is shown in an example embodiment to be a single medium, the term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions 727. The term "machine-readable storage medium" shall also be taken to include any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure.
The term "machine-readable storage medium" should therefore be taken to include, but not be limited to, solid-state memory, optical media, and magnetic media.Some portions of the foregoing detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations producing a desired result. The operations described are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.It should be borne in mind, however, that all of these and similar terms are to be to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The present disclosure may relate to manipulating and transforming data represented as physical (electronic) quantities within computer system registers and memory into other data similarly represented as physical quantities within computer system memory or registers or other such information storage systems The actions and processes of a computer system or similar electronic computing device.The present disclosure also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the intended purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. The computer program may be stored on a computer readable storage medium such as, but not limited to, any type of disk, including floppy disk, compact disk, CD-ROM, and magneto-optical disk; ROM; RAM; erasable programmable read-only memory (EPROM); ); EEPROM; magnetic or optical card; or any type of medium suitable for storing electronic instructions, each coupled to a computer system bus.The algorithms and displays presented herein are not inherently related to any particular computer or other device. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the methods. The structure for a variety of these systems will be as set forth in the description above. Additionally, the present disclosure has not been described with reference to any particular programming language. It should be appreciated that various programming languages can be used to implement the teachings of the present disclosure as described herein.The present disclosure may be provided as a computer program product or software, which may include a machine-readable medium storing instructions that can be used to program a computer system (or other electronic device) to perform processes in accordance with the present disclosure. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (eg, a computer). 
In some embodiments, a machine-readable (e.g., computer-readable) medium includes a machine-readable (e.g., computer-readable) storage medium, such as ROM, RAM, magnetic disk storage media, optical storage media, flash memory components, and the like.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific example embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader scope of embodiments of the present disclosure as set forth in the appended claims. Accordingly, the specification and drawings should be interpreted in an illustrative sense rather than a restrictive sense.
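The search flow recited in Examples 11 to 20 can be tied together with a short software model. The following Python sketch is purely illustrative and forms no part of the disclosure: the class name, method names, the NAND-style conduction rule, and the search-line polarity convention are all assumptions made for this example.

class TcamBlock:
    # Software stand-in for the TCAM block: one NAND string per data
    # entry, one complementary memory cell pair per stored bit.
    def __init__(self, entries):
        # Bit b of an entry maps to a complementary cell pair (b, 1 - b).
        self.strings = [[(b, 1 - b) for b in entry] for entry in entries]

    @staticmethod
    def _cell_conducts(stored_bit, line_high):
        # Assumed NAND-style rule: an erased cell (1) conducts at either
        # search-line level; a programmed cell (0) conducts only when its
        # search line is driven to the high (pass) level.
        return stored_bit == 1 or line_high == 1

    def search(self, word, dont_care):
        # Search pattern: each bit drives its pair's two search lines with
        # the inverted bit and the bit (the polarity is a modeling choice);
        # a don't-care position sets BOTH lines of its pair to logic high.
        pattern = [(1, 1) if dc else (1 - s, s)
                   for s, dc in zip(word, dont_care)]
        locations = []
        for addr, string in enumerate(self.strings):
            # A string "conducts" (discharging the precharged match line so
            # the page buffer latches a hit) only if every series cell does.
            if all(self._cell_conducts(c0, l0) and self._cell_conducts(c1, l1)
                   for (c0, c1), (l0, l1) in zip(string, pattern)):
                locations.append(addr)  # location within the block
        return locations

# Entry 0 matches search word 1 0 x x (the last two bits are don't-cares).
block = TcamBlock([[1, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0]])
print(block.search(word=[1, 0, 0, 0], dont_care=[0, 0, 1, 1]))  # -> [0]

Driving both lines of a pair high is what makes a position a wildcard: both complementary cells conduct regardless of the stored bit, which mirrors the examples' step of setting the search-line subset for the don't care bits to a logic high state.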
A method of manufacturing a metal gate structure includes providing a substrate (110) having formed thereon a gate dielectric (120), a work function metal (130) adjacent to the gate dielectric, and a gate metal (140) adjacent to the work function metal; selectively forming a sacrificial capping layer (310) centered over the gate metal; forming an electrically insulating layer (161) over the sacrificial capping layer such that the electrically insulating layer at least partially surrounds the sacrificial capping layer; selectively removing the sacrificial capping layer in order to form a trench (410) aligned to the gate metal in the electrically insulating layer; and filling the trench with an electrically insulating material in order to form an electrically insulating cap (150) centered on the gate metal.
CLAIMS What is claimed is: 1. A metal gate structure comprising: a substrate; a gate dielectric over the substrate; a work function metal adjacent to the gate dielectric; a gate metal adjacent to the work function metal; an electrically insulating cap centered on the gate metal; an electrically insulating layer over and at least partially surrounding the electrically insulating cap; spacers adjacent to the gate dielectric; and a dielectric material at least partially surrounding the spacers. 2. The metal gate structure of claim 1 wherein: the gate metal is a substance taken from the group consisting of aluminum, tungsten, and titanium-nitride. 3. The metal gate structure of claim 1 wherein: the electrically insulating cap is a substance taken from the group consisting of silicon nitride and silicon carbide. 4. The metal gate structure of claim 1 wherein: the gate dielectric is a high-k dielectric material. 5. The metal gate structure of claim 1 wherein: the work function metal and the gate metal are the same material. 6. A method of manufacturing a metal gate structure, the method comprising: providing a substrate having formed thereon a gate dielectric, a work function metal adjacent to the gate dielectric, and a gate metal adjacent to the work function metal; selectively forming a sacrificial capping layer centered over the gate metal; forming an electrically insulating layer over the sacrificial capping layer such that the electrically insulating layer at least partially surrounds the sacrificial capping layer; selectively removing the sacrificial capping layer in order to form a trench aligned to the gate metal in the electrically insulating layer; and filling the trench with an electrically insulating material in order to form an electrically insulating cap centered on the gate metal. 7. The method of claim 6 wherein: selectively forming the sacrificial capping layer comprises forming a tungsten capping layer. 8. The method of claim 7 wherein: selectively forming the sacrificial capping layer is performed at a temperature between approximately 200 degrees Celsius and approximately 275 degrees Celsius. 9. The method of claim 7 further comprising: exposing the gate metal to a buffered hydrofluoric acid solution prior to selectively forming the sacrificial capping layer. 10. The method of claim 9 wherein: the buffered hydrofluoric acid solution comprises a buffering agent; and the buffering agent comprises ammonium fluoride. 11. The method of claim 10 wherein: the gate metal is exposed to the buffered hydrofluoric acid solution for between approximately ten and approximately sixty seconds. 12. The method of claim 7 further comprising: exposing the gate metal to a dilute hydrochloric acid solution prior to selectively forming the sacrificial capping layer. 13. The method of claim 12 wherein: the dilute hydrochloric acid solution comprises one part per volume hydrochloric acid and 10 parts per volume de-ionized water. 14. The method of claim 7 wherein: selectively removing the sacrificial capping layer comprises etching away the sacrificial capping layer using an etchant comprising a base and an oxidizer. 15. The method of claim 14 wherein: the base comprises ammonium hydroxide and the oxidizer comprises one of hydrogen peroxide and ozone. 16. The method of claim 15 wherein: the etchant has a pH between 4 and 10. 17.
A method of manufacturing a metal gate structure, the method comprising: providing a substrate having formed thereon a high-k gate dielectric, a work function metal adjacent to the high-k gate dielectric, an aluminum gate electrode adjacent to the work function metal, spacers adjacent to the high-k gate dielectric, and an inter-layer dielectric adjacent to the spacers; selectively forming a sacrificial tungsten capping layer centered over the aluminum gate electrode; forming a silicon oxide film over the sacrificial tungsten capping layer such that the silicon oxide film at least partially surrounds the sacrificial tungsten capping layer; selectively removing the sacrificial tungsten capping layer in order to form a trench aligned to the aluminum gate electrode in the silicon oxide film; and filling the trench with a silicon nitride cap centered on the aluminum gate electrode. 18. The method of claim 17 wherein: selectively forming the sacrificial tungsten capping layer is accomplished using a chemical vapor deposition process. 19. The method of claim 18 wherein: the chemical vapor deposition process uses a molecular hydrogen precursor and a tungsten hexafluoride precursor. 20. The method of claim 19 wherein: the molecular hydrogen precursor is introduced at a first flow rate into a chemical vapor deposition chamber; the tungsten hexafluoride precursor is introduced at a second flow rate into the chemical vapor deposition chamber; and the first flow rate is higher than the second flow rate. 21. The method of claim 20 further comprising: exposing the aluminum gate electrode to a buffered hydrofluoric acid solution for approximately ten seconds prior to selectively forming the sacrificial tungsten capping layer. 22. The method of claim 21 wherein: selectively removing the sacrificial tungsten capping layer comprises etching away the sacrificial tungsten capping layer using an etchant comprising a base and an oxidizer; the base comprises ammonium hydroxide; the oxidizer comprises one of hydrogen peroxide and dissolved ozone; and the etchant has a pH between 6 and 8.
METAL GATE STRUCTURE AND METHOD OF MANUFACTURING SAME FIELD OF THE INVENTION The disclosed embodiments of the invention relate generally to metal gate structures for microelectronic devices, and relate more particularly to protective etch stop layers for such gate structures. BACKGROUND OF THE INVENTION Field-effect transistors (FETs) include source, drain, and gate terminals that are associated with a body terminal. In order to provide the necessary electrical connections within the transistor, contact structures must be formed that connect various terminals to other structures within and outside of the transistor. As pitch scaling continues to increase the packing density of transistors on computer chips, the space available for forming such electrical contacts is rapidly decreasing. In one FET configuration, source and drain terminals are located within the body and the gate is located above the body such that, in order to form an electrical connection with the source/drain terminal, the source/drain contacts must pass alongside the gate. Given existing pitch scaling trends, the creation of unwanted electrical connections (shorts) between source/drain terminals and the gate will quickly become unavoidable given the limitations of registration and critical dimension (CD) control under existing transistor manufacturing techniques. BRIEF DESCRIPTION OF THE DRAWINGS The disclosed embodiments will be better understood from a reading of the following detailed description, taken in conjunction with the accompanying figures in the drawings in which: FIG. 1 is a cross-sectional view of a metal gate structure according to an embodiment of the invention; FIG. 2 is a flowchart illustrating a method of manufacturing a metal gate structure according to an embodiment of the invention; and FIGs. 3-5 are cross-sectional views of the metal gate structure of FIG. 1 at various particular points in its manufacturing process according to embodiments of the invention. For simplicity and clarity of illustration, the drawing figures illustrate the general manner of construction, and descriptions and details of well-known features and techniques may be omitted to avoid unnecessarily obscuring the discussion of the described embodiments of the invention. Additionally, elements in the drawing figures are not necessarily drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help improve understanding of embodiments of the present invention. The same reference numerals in different figures denote the same elements, while similar reference numerals may, but do not necessarily, denote similar elements. The terms "first," "second," "third," "fourth," and the like in the description and in the claims, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Similarly, if a method is described herein as comprising a series of steps, the order of such steps as presented herein is not necessarily the only order in which such steps may be performed, and certain of the stated steps may possibly be omitted and/or certain other steps not described herein may possibly be added to the method.
Furthermore, the terms "comprise," "include," "have," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to those elements, but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. The terms "left," "right," "front," "back," "top," "bottom," "over," "under," and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein. The term "coupled," as used herein, is defined as directly or indirectly connected in an electrical or nonelectrical manner. Objects described herein as being "adjacent to" each other may be in physical contact with each other, in close proximity to each other, or in the same general region or area as each other, as appropriate for the context in which the phrase is used. Occurrences of the phrase "in one embodiment" herein do not necessarily all refer to the same embodiment.DETAILED DESCRIPTION OF THE DRAWINGS In one embodiment of the invention, a method of manufacturing a metal gate structure comprises: providing a substrate having formed thereon a gate dielectric, a work function metal adjacent to the gate dielectric, and a gate metal adjacent to the work function metal; selectively forming a sacrificial capping layer centered over the gate metal; forming an electrically insulating layer over the sacrificial capping layer such that the electrically insulating layer at least partially surrounds the sacrificial capping layer; selectively removing the sacrificial capping layer in order to form a trench aligned to the gate metal in the electrically insulating layer; and filling the trench with an electrically insulating material in order to form an electrically insulating cap centered on the gate metal. As mentioned above, source/drain to gate contact shorts are projected to become increasingly difficult to avoid in light of the aggressive pitch scaling necessary in order to achieve the high transistor densities that will accompany future process technologies. SeIf- aligned capping structures on copper gate electrodes have been demonstrated and could offer a partial solution to this problem but are not expected to be useful at gate dimensions below 35 nanometers (nm), as copper fill processes become very marginal at those dimensions. Embodiments of the invention provide a method to form a self-aligned protective cap on aluminum and other metal gate transistors even at gate dimensions less than 35 nm because the gate formation is not limited by gate electrode fill. Such protective caps may provide robust margins for contact registration and may also allow contact CD to be larger, thereby lowering contact resistance. Referring now to the drawings, FIG. 1 is a cross-sectional view of a metal gate structure 100 according to an embodiment of the invention. As illustrated in FIG. 1, metal gate structure 100 comprises a substrate 110, a gate dielectric 120 over substrate 110, a work function metal 130 adjacent to gate dielectric 120, and a gate metal 140 adjacent to work function metal 130. 
Metal gate structure 100 further comprises an electrically insulating cap 150, which, because it only grows on the metal gate, is centered on and self-aligned to gate metal 140, an electrically insulating layer 160 over and at least partially surrounding electrically insulating cap 150, spacers 170 adjacent to the gate dielectric 120, and a dielectric material 180, e.g., an inter-layer dielectric (ILD) such as a first-level ILD (ILD0), at least partially surrounding spacers 170. Electrically insulating layer 160 comprises a lower section 161 and an upper section 162. As an example, gate metal 140 can be a metal or a metal alloy such as aluminum, tungsten, titanium-nitride, or the like, or any metal or alloy (in addition to those already listed) that lends itself to atomic layer deposition (ALD). It should be noted here that work function metal 130 can be the same material as that making up gate metal 140. As another example, electrically insulating cap 150 can comprise silicon nitride (Si3N4), silicon carbide (SiC), or the like, or any non-electrically conducting (dielectric) material that can function as an etch stop layer for a particular etch chemistry used during the manufacture of metal gate structure 100, as will be further discussed below. As another example, gate dielectric 120 can be a material having a relatively high dielectric constant. (As is traditional, such a material is referred to herein as a "high-k material," a "high-k dielectric," or something similar.) Silicon dioxide (SiO2), which was widely used in the past as a gate dielectric, has a dielectric constant K (often written as "k") of approximately 3.9. References in this document to high-k materials mean materials having dielectric constants that are significantly greater than the dielectric constant of SiO2. In practice, such materials typically have dielectric constants of approximately 8-10 or higher (although materials having dielectric constants lower than that may still qualify as high-k materials). Similarly, references herein to a "low-k" material mean materials having a dielectric constant that is low relative to that of SiO2, e.g., materials having dielectric constants less than approximately 3.5. As an example, gate dielectric 120 may be a hafnium-based, a zirconium-based, or a titanium-based dielectric material having a dielectric constant of at least approximately 20. In a particular embodiment the high-k material can be hafnium oxide or zirconium oxide, both of which have a dielectric constant between approximately 20 and approximately 40. As yet another example, lower section 161 of electrically insulating layer 160 can comprise silicon oxide or another dielectric material. In certain embodiments, lower section 161 is a low-k dielectric material. In certain embodiments, upper section 162 of electrically insulating layer 160 comprises dielectric material identical to that in lower section 161 such that any boundary between lower section 161 and upper section 162 is not readily discernible, or disappears entirely. In other embodiments, upper section 162 and lower section 161 may comprise electrically insulating materials of different types. FIG. 2 is a flowchart illustrating a method 200 of manufacturing a metal gate structure according to an embodiment of the invention. As an example, method 200 may result in a transistor having a self-aligned protective cap on an aluminum or other gate metal that provides advantages such as those discussed herein.
A step 210 of method 200 is to provide a substrate having formed thereon a gate dielectric, a work function metal adjacent to the gate dielectric, and a gate metal adjacent to the work function metal. As an example, the substrate, the gate dielectric, the work function metal, and the gate metal can be similar to, respectively, substrate 110, gate dielectric 120, work function metal 130, and gate metal 140, all of which are shown in FIG. 1. Also as part of step 210, or in another step, spacers may be formed adjacent to the high-k gate dielectric and an ILD may be formed adjacent to the spacers. As an example, the spacers can be similar to spacers 170 and the ILD can be similar to dielectric material 180, both of which are first shown in FIG. 1. In one embodiment, step 210 or a subsequent step may comprise exposing the gate metal to a buffered hydrofluoric acid solution or a dilute hydrochloric acid solution. As an example, the buffered hydrofluoric acid solution may comprise hydrofluoric acid, de-ionized water, and a buffering agent such as ammonium fluoride or the like. The buffering agent maintains the hydrofluoric acid solution at an appropriate pH level, which in at least one embodiment is a pH between 4 and 6. As another example, the dilute hydrochloric acid solution may comprise one part per volume hydrochloric acid (29% aqueous solution) and 10 parts per volume de-ionized water. In one embodiment, the gate metal is exposed to the buffered hydrofluoric acid solution for a period of time lasting between approximately ten and approximately sixty seconds. (Longer exposure times may begin to etch or otherwise negatively affect other parts of metal gate structure 100, such as ILD0.) A step 220 of method 200 is to selectively form a sacrificial capping layer centered over the gate metal. (As further discussed below, the phrase "selective formation" and similar phrases herein refer to processes that allow a first material to be formed on a second material or material type but not on a third material or material type.) As an example, the sacrificial capping layer can be similar to a sacrificial capping layer 310 that is first shown in FIG. 3, which is a cross-sectional view of metal gate structure 100 at a particular point in its manufacturing process according to an embodiment of the invention. It should be noted that FIG. 3 depicts metal gate structure 100 at an earlier point in its manufacturing process than does FIG. 1. As an example, sacrificial capping layer 310 can comprise tungsten or another material that can form a self-aligned structure on top of gate metal 140. Described below is an embodiment in which sacrificial capping layer 310 comprises tungsten and gate metal 140 comprises aluminum. Chemical vapor deposition of tungsten (CVD-W) is an important metallization technique for various applications. In ultra-large-scale integrated circuit (ULSI) applications CVD-W is often used to fill contact vias due to its ability to fill high aspect ratio structures with no voiding. Another aspect of CVD-W deposition is its ability, under certain deposition conditions, to selectively deposit onto silicon or other metals but not onto SiO2 or other insulators. Embodiments of the invention exploit this selective deposition capability to form sacrificial capping layer 310 self-aligned to (i.e., centered on) the aluminum of gate metal 140.
In one embodiment, for example, tungsten is selectively deposited using a CVD technique in which high flow (e.g., approximately 1 Torr) hydrogen (H2) and low flow (e.g., approximately 30 mTorr) tungsten hexafluoride (WF6) precursors are introduced into a CVD chamber at a temperature between approximately 200 degrees Celsius (° C) and approximately 300° C with approximately 5-10 CVD cycles. A sequence of chemical reactions for this embodiment is shown below, where Al is aluminum, AlF3 is aluminum trifluoride, AlF2 is aluminum difluoride, and HF is hydrofluoric acid.

WF6 + 2Al → W + 2AlF3
2AlF3 → 3AlF2 (heated above 300° C)
WF6 + 3H2 → W + 6HF

In a particular embodiment, the reaction of step 220 is performed at a temperature between approximately 200° C and approximately 275° C, with lower temperatures in this range preferred. If the temperature is too high (e.g., above approximately 300° C) then the tungsten may begin to alloy with the aluminum, compromising the gate structure. On the other hand, if the temperature is too low (e.g., below approximately 200° C) then the desired selectivity begins to be lost. A step 230 of method 200 is to form an electrically insulating layer over the sacrificial capping layer such that the electrically insulating layer at least partially surrounds the sacrificial capping layer. As an example, the electrically insulating layer can be similar to lower section 161 of electrically insulating layer 160 that is shown in FIG. 1. After its deposition the electrically insulating layer is planarized and polished back so as to expose the tungsten (or other) sacrificial capping layer. A step 240 of method 200 is to selectively remove the sacrificial capping layer in order to form a trench aligned to the gate metal in the electrically insulating layer. It should be understood that the word "trench" as used in this context herein is used broadly such that it can indicate any type of an opening, a gap, a cavity, a hole, an empty space, or the like that can later be filled with a material, as discussed below. As an example, the trench can be similar to a trench 410 that is first shown in FIG. 4, which is a cross-sectional view of metal gate structure 100 at a particular point in its manufacturing process according to an embodiment of the invention. It should be noted that FIG. 4, like FIG. 3, depicts metal gate structure 100 at an earlier point in its manufacturing process than does FIG. 1. As illustrated in FIG. 4, trench 410 is located above, and aligned to, gate metal 140. In one embodiment, step 240 comprises etching away the sacrificial capping layer using an etchant comprising a base and an oxidizer. As an example, the base can comprise ammonium hydroxide (NH4OH), tetra methyl ammonium hydroxide (TMAH), or the like. As another example, the oxidizer can comprise hydrogen peroxide (H2O2), dissolved ozone (O3), or the like. As yet another example, the etchant can have a pH between 4 and 10. In a particular embodiment, the pH of the etchant is between 6 and 8. With conditions and compositions such as those given above, an etchant used in connection with embodiments of the invention selectively dissolves tungsten, i.e., dissolves tungsten and not aluminum or the work function metal, thereby allowing the formation of a self-aligned protective cap over an aluminum gate (or gates made of other materials), as further discussed below.
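The numeric process windows quoted above (deposition temperature, buffered-HF pre-clean time, etchant pH) can be gathered in one place. The following Python sketch merely restates those ranges as checks; the constant and function names are invented for this illustration and do not come from the disclosure.

# Stated ranges from the embodiments above; everything else is hypothetical.
CVD_W_TEMP_C = (200, 275)  # selective W deposition window; lower end preferred
ALLOY_RISK_C = 300         # above ~300° C tungsten may begin to alloy with aluminum
BHF_DIP_S = (10, 60)       # buffered-HF exposure window, in seconds
ETCHANT_PH = (4, 10)       # base-plus-oxidizer etchant window; 6-8 in a particular embodiment

def within(value, bounds):
    low, high = bounds
    return low <= value <= high

def check_recipe(temp_c, bhf_dip_s, etchant_ph):
    # Returns a list of stated-range violations for a candidate recipe.
    problems = []
    if not within(temp_c, CVD_W_TEMP_C):
        problems.append("CVD-W temperature outside the ~200-275° C window")
    if temp_c >= ALLOY_RISK_C:
        problems.append("risk of tungsten alloying with aluminum above ~300° C")
    if not within(bhf_dip_s, BHF_DIP_S):
        problems.append("buffered-HF dip outside the ~10-60 second window")
    if not within(etchant_ph, ETCHANT_PH):
        problems.append("etchant pH outside the 4-10 window")
    return problems

print(check_recipe(temp_c=250, bhf_dip_s=10, etchant_ph=7))  # -> []
print(check_recipe(temp_c=320, bhf_dip_s=10, etchant_ph=7))  # -> two temperature warnings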
A step 250 of method 200 is to fill the trench with an electrically insulating material in order to form an electrically insulating cap centered on the gate metal. As an example, the electrically insulating cap can be similar to electrically insulating cap 150 that is shown in FIG. 1. This electrically insulating cap completely covers and protects the underlying gate electrode by, for example, acting as an etch stop layer during source/drain contact etch. As an example, the composition of the electrically insulating cap and/or the etch chemistry of the source/drain contact etch may be chosen such that the electrically insulating cap is substantially impervious to the contact etch chemistry in order that the contact etch may proceed without raising gate metal damage issues. This in turn leads to increased contact registration margins and other advantages as discussed above. Electrically insulating cap 150 is also shown in FIG. 5, which is a cross-sectional view of metal gate structure 100 at a particular point in its manufacturing process according to an embodiment of the invention. It should be noted that FIG. 5, like FIGs. 3 and 4, depicts metal gate structure 100 at an earlier point in its manufacturing process than does FIG. 1. FIG. 5 illustrates electrically insulating cap 150 immediately after its deposition, at which time it has a rounded top; the substantially flat top it is subsequently given (see FIG. 1) is produced by polishing back electrically insulating cap 150 so that it is flush with a surface 565 of lower section 161 of electrically insulating layer 160. A dotted line 555 in FIG. 5 indicates a level to which electrically insulating cap 150 may, in at least one embodiment, be polished back. Following the polishing back of electrically insulating cap 150, upper section 162 of electrically insulating layer 160 may be deposited over lower section 161. As an example, metal gate structure 100 may then take the form in which it is depicted in FIG. 1, and electrically insulating cap 150 will entirely protect gate metal 140 during source/drain contact etch, as has been described herein. Although the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes may be made without departing from the spirit or scope of the invention. Accordingly, the disclosure of embodiments of the invention is intended to be illustrative of the scope of the invention and is not intended to be limiting. It is intended that the scope of the invention shall be limited only to the extent required by the appended claims. For example, to one of ordinary skill in the art, it will be readily apparent that the metal gate structures and related methods discussed herein may be implemented in a variety of embodiments, and that the foregoing discussion of certain of these embodiments does not necessarily represent a complete description of all possible embodiments. Additionally, benefits, other advantages, and solutions to problems have been described with regard to specific embodiments.
The benefits, advantages, solutions to problems, and any element or elements that may cause any benefit, advantage, or solution to occur or become more pronounced, however, are not to be construed as critical, required, or essential features or elements of any or all of the claims. Moreover, embodiments and limitations disclosed herein are not dedicated to the public under the doctrine of dedication if the embodiments and/or limitations: (1) are not expressly claimed in the claims; and (2) are or are potentially equivalents of express elements and/or limitations in the claims under the doctrine of equivalents.
Instructions and logic provide for a Single Instruction Multiple Data (SIMD) SM4 round slice operation. Embodiments of an instruction specify a first and a second source data operand set, and substitution function indicators, e.g. in an immediate operand. Embodiments of a processor may include encryption units, responsive to the first instruction, to: perform a slice of SM4-round exchanges on a portion of the first source data operand set with corresponding keys from the second source data operand set in response to a substitution function indicator that indicates a first substitution function, perform a slice of SM4 key generations using another portion of the first source data operand set with corresponding constants from the second source data operand set in response to a substitution function indicator that indicates a second substitution function, and store a set of result elements of the first instruction in a SIMD destination register.
THE CLAIMS What is claimed is: 1. A processor comprising: a decode stage to decode a first instruction for a Single Instruction Multiple Data (SIMD) SM4 operation, the first instruction specifying a first source data operand set, a second source data operand set, and one or more substitution function indicators; and one or more execution units, responsive to the decoded first instruction, to: perform one or more SM4-round exchanges of a portion of the first source data operand set with a corresponding first one or more keys from the second source data operand set if a first indicator of said one or more substitution function indicators indicates a first substitution function; perform one or more SM4 key generations using said portion of the first source data operand set with a corresponding first one or more constants from the second source data operand set if a second indicator of said one or more substitution function indicators indicates a second substitution function; and store a result of the first instruction in a SIMD destination register. 2. The processor of claim 1, wherein said first substitution function is the SM4 mixer-substitution function, T. 3. The processor of claim 2, wherein said second substitution function is the SM4 key expansion substitution function, T'. 4. The processor of claim 1, wherein the first instruction specifies said SIMD destination register as a destination operand. 5. The processor of claim 1, wherein the first instruction specifies a SIMD register to hold four 32-bit elements as the first source data operand set. 6. The processor of claim 1, wherein the first instruction specifies a SIMD register to hold eight 32-bit elements as the first source data operand set. 7. The processor of claim 1, wherein the first instruction specifies a SIMD register to hold sixteen 32-bit elements as the first source data operand set. 8. The processor of claim 1, wherein the first instruction specifies said one or more substitution function indicators as an immediate byte operand. 9. The processor of claim 8, wherein the first instruction specifies said one or more substitution function indicators by setting one bit in the immediate byte operand for each corresponding lane of four 32-bit elements in the first source data operand set. 10. The processor of claim 1, wherein the first instruction specifies said one or more substitution function indicators in the first instruction mnemonic. 11. The processor of claim 1, wherein said one or more execution units, responsive to the decoded first instruction, performs four SM4-round exchanges of, or four SM4 key generations using, said portion of the first source data operand set. 12.
A method comprising: decoding a first instruction for a Single Instruction Multiple Data (SIMD) SM4 round slice operation, the first instruction specifying a first source data operand set, a second source data operand set, and one or more substitution function indicators; and responsive to the first instruction, accessing the first source data operand set, accessing the second source data operand set, performing a first plurality of SM4-round exchanges on a first portion of the first source data operand set with a corresponding first one or more keys from the second source data operand set in response to a first indicator of said one or more substitution function indicators that indicates a first substitution function, performing the first plurality of SM4 key generations using a second portion of the first source data operand set with a corresponding first one or more constants from the second source data operand set in response to a second indicator of said one or more substitution function indicators that indicates a second substitution function, and storing a set of result elements of the first instruction in a SIMD destination register. 13. The method of claim 12 further comprising: generating a plurality of micro-instructions equal to said first plurality responsive to the first instruction. 14. The method of claim 12, wherein the first plurality is equal to four. 15. The method of claim 12, wherein the first plurality is equal to two. 16. The method of claim 12, wherein said first and second portions comprise 128-bit wide SIMD data lanes. 17. The method of claim 12, wherein said first substitution function is the SM4 mixer-substitution function, T. 18. The method of claim 17, wherein said second substitution function is the SM4 key expansion substitution function, T'. 19. A processing system comprising: a memory to store a first instruction for a Single Instruction Multiple Data (SIMD) SM4 round slice operation; and a processor comprising: an instruction fetch stage to fetch the first instruction; a decode stage to decode the first instruction, the first instruction specifying a first source data operand set, a second source data operand set, and one or more substitution function indicators; and one or more execution units, responsive to the decoded first instruction, to: access the first source data operand set, access the second source data operand set, perform a first plurality of SM4-round exchanges on a first portion of the first source data operand set with a corresponding first one or more keys from the second source data operand set in response to a first indicator of said one or more substitution function indicators that indicates a first substitution function, perform the first plurality of SM4 key generations using a second portion of the first source data operand set with a corresponding first one or more constants from the second source data operand set in response to a second indicator of said one or more substitution function indicators that indicates a second substitution function, and store a set of result elements of the first instruction in a SIMD destination register. 20. The processing system of claim 19, wherein said first substitution function is the SM4 mixer-substitution function, T. 21. The processing system of claim 20, wherein said second substitution function is the SM4 key expansion substitution function, T'. 22.
The processing system of claim 19, wherein said decode stage, responsive to decoding the first instruction, is further to: generate a plurality of micro-instructions equal to said first plurality responsive to the first instruction. 23. The processing system of claim 19, wherein the first plurality is equal to four. 24. The processing system of claim 19, wherein the first plurality is equal to two. 25. The processing system of claim 19, wherein said first and second portions comprise 128-bit wide SIMD data lanes. 26. An apparatus in a processor comprising: a memory to store a first instruction for a Single Instruction Multiple Data (SIMD) SM4 round slice operation, the first instruction specifying a first source data operand set, a second source data operand set, and one or more substitution function indicators; and a processor comprising: one or more encryption units, responsive to the first instruction, to: perform a first plurality of SM4-round exchanges on a first portion of the first source data operand set with a corresponding first one or more keys from the second source data operand set in response to a first indicator of said one or more substitution function indicators that indicates a first substitution function, perform the first plurality of SM4 key generations using a second portion of the first source data operand set with a corresponding first one or more constants from the second source data operand set in response to a second indicator of said one or more substitution function indicators that indicates a second substitution function, and store a set of result elements of the first instruction in a SIMD destination register. 27. The apparatus of claim 26, wherein the first instruction specifies said SIMD destination register as a destination operand. 28. The apparatus of claim 26, wherein the first instruction specifies said SIMD destination register as the first source data operand. 29. The apparatus of claim 26, wherein the first instruction specifies said one or more substitution function indicators as an immediate data operand.
INSTRUCTIONS AND LOGIC TO PROVIDE SIMD SM4 CRYPTOGRAPHIC BLOCK CIPHER FUNCTIONALITY FIELD OF THE DISCLOSURE [0001] The present disclosure pertains to the field of processing logic, microprocessors, and associated instruction set architecture that, when executed by the processor or other processing logic, perform logical, mathematical, or other functional operations. In particular, the disclosure relates to instructions and logic to provide SIMD SM4 cryptographic block cipher functionality. BACKGROUND OF THE DISCLOSURE [0002] Cryptology is a tool that relies on an algorithm and a key to protect information. The algorithm is a complex mathematical algorithm and the key is a string of bits. There are two basic types of cryptology systems: secret key systems and public key systems. A secret key system, also referred to as a symmetric system, has a single key ("secret key") that is shared by two or more parties. The single key is used to both encrypt and decrypt information. [0003] For example, the Advanced Encryption Standard (AES), also known as Rijndael, is a block cipher developed by two Belgian cryptographers and adopted as an encryption standard by the United States government. AES was announced on November 26, 2001 by the National Institute of Standards and Technology (NIST) as U.S. FIPS PUB 197 (FIPS 197). Other encryption algorithms are also of interest. [0004] Another example is SM4 (also formerly known as SMS4), a block cipher used in the Chinese National Standard for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure). It processes the plaintext data in rounds (i.e. 32 rounds) as 128-bit blocks in the Galois field 2^8, also denoted GF(256), modulo an irreducible polynomial. The SM4 algorithm was invented by Professor LU Shu-wang, and was declassified by the Chinese government and issued in January 2006. [0005] The input, output and key of SM4 are each 128 bits. Each round modifies one of four 32-bit words that make up the 128-bit block by XORing it with a keyed function of the other three words. Encryption and decryption have the same structure except that the round key schedule for decryption is the reverse of the round key schedule for encryption. A software implementation of SM4 (in ANSI C) was published online by the Free Software Foundation in December of 2009. One drawback to a software implementation is performance. Software runs orders of magnitude slower than devoted hardware so it is desirable to have the added performance of a hardware/firmware implementation. [0006] Typical straightforward hardware implementations using lookup memories, truth tables, binary decision diagrams or 256 input multiplexers are costly in terms of circuit area. Alternative approaches using finite fields isomorphic to GF(256) may be efficient in area but may also be slower than the straightforward hardware implementations. [0007] Modern processors often include instructions to provide operations that are computationally intensive, but offer a high level of data parallelism that can be exploited through an efficient implementation using various data storage devices, such as, for example, single instruction multiple data (SIMD) vector registers. The central processing unit (CPU) may then provide parallel hardware to support processing vectors. A vector is a data structure that holds a number of consecutive data elements. A vector register of size M (where M is 2^k, e.g. 512, 256, 128, 64, 32, ... 4 or 2) may contain N vector elements of size O, where N=M/O.
For instance, a 64-byte vector register may be partitioned into (a) 64 vector elements, with each element holding a data item that occupies 1 byte, (b) 32 vector elements to hold data items that occupy 2 bytes (or one "word") each, (c) 16 vector elements to hold data items that occupy 4 bytes (or one "doubleword") each, or (d) 8 vector elements to hold data items that occupy 8 bytes (or one "quadword") each. The nature of the parallelism in SIMD vector registers could be well suited for the handling of block cipher algorithms. [0008] To date, options that provide efficient space-time design tradeoffs and potential solutions to such complexities, performance limiting issues, and other bottlenecks have not been fully explored. BRIEF DESCRIPTION OF THE DRAWINGS [0009] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings. [0010] Figure 1A is a block diagram of one embodiment of a system that executes instructions to provide SIMD SM4 cryptographic block cipher functionality. [0011] Figure 1B is a block diagram of another embodiment of a system that executes instructions to provide SIMD SM4 cryptographic block cipher functionality. [0012] Figure 1C is a block diagram of another embodiment of a system that executes instructions to provide SIMD SM4 cryptographic block cipher functionality. [0013] Figure 2 is a block diagram of one embodiment of a processor that executes instructions to provide SIMD SM4 cryptographic block cipher functionality. [0014] Figure 3A illustrates packed data types according to one embodiment. [0015] Figure 3B illustrates packed data types according to one embodiment. [0016] Figure 3C illustrates packed data types according to one embodiment. [0017] Figure 3D illustrates an instruction encoding to provide SIMD SM4 cryptographic block cipher functionality according to one embodiment. [0018] Figure 3E illustrates an instruction encoding to provide SIMD SM4 cryptographic block cipher functionality according to another embodiment. [0019] Figure 3F illustrates an instruction encoding to provide SIMD SM4 cryptographic block cipher functionality according to another embodiment. [0020] Figure 3G illustrates an instruction encoding to provide SIMD SM4 cryptographic block cipher functionality according to another embodiment. [0021] Figure 3H illustrates an instruction encoding to provide SIMD SM4 cryptographic block cipher functionality according to another embodiment. [0022] Figure 4A illustrates elements of one embodiment of a processor micro-architecture to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0023] Figure 4B illustrates elements of another embodiment of a processor micro-architecture to execute instructions that provide SIMD SM4 cryptographic block cipher functionality.
[0024] Figure 5 is a block diagram of one embodiment of a processor to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0025] Figure 6 is a block diagram of one embodiment of a computer system to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0026] Figure 7 is a block diagram of another embodiment of a computer system to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0027] Figure 8 is a block diagram of another embodiment of a computer system to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0028] Figure 9 is a block diagram of one embodiment of a system-on-a-chip to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0029] Figure 10 is a block diagram of an embodiment of a processor to execute instructions that provide SIMD SM4 cryptographic block cipher functionality. [0030] Figure 11 is a block diagram of one embodiment of an IP core development system that provides SIMD SM4 cryptographic block cipher functionality. [0031] Figure 12 illustrates one embodiment of an architecture emulation system that provides SIMD SM4 cryptographic block cipher functionality. [0032] Figure 13 illustrates one embodiment of a system to translate instructions that provide SIMD SM4 cryptographic block cipher functionality. [0033] Figure 14A illustrates a diagram for one embodiment of an apparatus for execution of an instruction to provide SIMD SM4 cryptographic block cipher functionality. [0034] Figure 14B illustrates a diagram for an alternative embodiment of an apparatus for execution of an instruction to provide SIMD SM4 cryptographic block cipher functionality. [0035] Figure 14C illustrates a diagram for another alternative embodiment of an apparatus for execution of an instruction to provide SIMD SM4 cryptographic block cipher functionality. [0036] Figure 15A illustrates a flow diagram for one embodiment of a process for execution of an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. [0037] Figure 15B illustrates a flow diagram for an alternative embodiment of a process for execution of an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. [0038] Figure 15C illustrates a flow diagram for another alternative embodiment of a process for execution of an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. [0039] Figure 16A illustrates a flow diagram for one embodiment of a process for efficiently implementing the SM4 cryptographic block cipher using an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. [0040] Figure 16B illustrates a flow diagram for an alternative embodiment of a process for efficiently implementing the SM4 cryptographic block cipher using an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. DETAILED DESCRIPTION [0041] The following description discloses instructions and logic that provide for a Single Instruction Multiple Data (SIMD) SM4 round slice operation. Embodiments of an instruction specify a first and a second source data operand set, and substitution function indicators, e.g. in an immediate operand.
Embodiments of a processor may include encryption units, responsive to the first instruction, to: perform a slice of SM4-round exchanges on a portion of the first source data operand set with corresponding keys from the second source data operand set in response to a substitution function indicator that indicates a first substitution function, perform a slice of SM4 key generations using another portion of the first source data operand set with corresponding constants from the second source data operand set in response to a substitution function indicator that indicates a second substitution function, and store a set of result elements of the first instruction in a SIMD destination register. [0042] It will be appreciated that by performing both a slice of SM4-round exchanges and a slice of SM4 key generations with the same SIMD instruction, encryption or decryption may be processed concurrently with key expansion in a small buffer (e.g. 256 bits). In some embodiments a slice may comprise four rounds of SM4-round exchanges and four rounds of SM4 key generations. For such embodiments, thirty-two rounds of SM4-round exchanges and SM4 key generations may be performed using eight (or nine) SM4 round slice operations. In some embodiments each 128-bit lane of a 256-bit data path or of a 512-bit data path may be selected for processing a slice of SM4-round exchanges or for processing a slice of SM4 key generations based on a corresponding value in an immediate operand of the instruction that indicates a particular substitution function (e.g. T or T', or alternatively L or L'). In some alternative embodiments the lanes of a data path for processing a slice of SM4-round exchanges and for processing a slice of SM4 key generations may be predetermined and/or fixed. In some embodiments a slice may be implemented by micro-instructions (or micro-ops or u-ops) and results may be bypassed from one micro-instruction to the next micro-instruction. In some alternative embodiments a slice may be implemented by multiple layers (e.g. two, or four, or eight, etc.) of logic in hardware, or alternatively by some combination of micro-instructions and multiple layers of logic in hardware. In some embodiments a slice may comprise a number of rounds (e.g. one, two, four, eight, sixteen, or thirty-two) of SM4-round exchanges and SM4 key generations indicated by a value in an immediate operand of the instruction. In some alternative embodiments the number of rounds in a slice may be indicated by the instruction mnemonic and/or by an operation encoding (or opcode). [0043] In the following description, numerous specific details such as processing logic, processor types, micro-architectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of embodiments of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. Additionally, some well-known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring embodiments of the present invention. [0044] Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. Similar techniques and teachings of embodiments of the present invention can be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance.
The teachings of embodiments of the present invention are applicable to any processor or machine that performs data manipulations. However, the present invention is not limited to processors or machines that perform 512 bit, 256 bit, 128 bit, 64 bit, 32 bit, 16 bit or 8 bit data operations and can be applied to any processor or machine in which manipulation or management of data is performed. In addition, the following description provides examples, and the accompanying drawings show various examples for the purposes of illustration. However, these examples should not be construed in a limiting sense as they are merely intended to provide examples of embodiments of the present invention rather than to provide an exhaustive list of all possible implementations of embodiments of the present invention.

[0045] Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present invention can be accomplished by way of data and/or instructions stored on a machine-readable, tangible medium, which when performed by a machine cause the machine to perform functions consistent with at least one embodiment of the invention. In one embodiment, functions associated with embodiments of the present invention are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present invention. Embodiments of the present invention may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform one or more operations according to embodiments of the present invention. Alternatively, steps of embodiments of the present invention might be performed by specific hardware components that contain fixed-function logic for performing the steps, or by any combination of programmed computer components and fixed-function hardware components.

[0046] Instructions used to program logic to perform embodiments of the invention can be stored within a memory in the system, such as DRAM, cache, flash memory, or other storage. Furthermore, the instructions can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, or a tangible, machine-readable storage used in the transmission of information over the Internet via electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Accordingly, the computer-readable medium includes any type of tangible machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).

[0047] A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners.
First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine-readable medium. A memory or a magnetic or optical storage such as a disc may be the machine-readable medium to store information transmitted via optical or electrical wave modulated or otherwise generated to transmit such information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may store on a tangible, machine-readable medium, at least temporarily, an article, such as information encoded into a carrier wave, embodying techniques of embodiments of the present invention.

[0048] In modern processors, a number of different execution units are used to process and execute a variety of code and instructions. Not all instructions are created equal, as some are quicker to complete while others can take a number of clock cycles to complete. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there are certain instructions that have greater complexity and require more in terms of execution time and processor resources. For example, there are floating point instructions, load/store operations, data moves, etc.

[0049] As more computer systems are used in internet, text, and multimedia applications, additional processor support has been introduced over time. In one embodiment, an instruction set may be associated with one or more computer architectures, including data types, instructions, register architecture, addressing modes, memory architecture, interrupt and exception handling, and external input and output (I/O).

[0050] In one embodiment, the instruction set architecture (ISA) may be implemented by one or more micro-architectures, which include processor logic and circuits used to implement one or more instruction sets. Accordingly, processors with different micro-architectures can share at least a portion of a common instruction set. For example, Intel® Pentium 4 processors, Intel® Core™ processors, and processors from Advanced Micro Devices, Inc. of Sunnyvale, CA implement nearly identical versions of the x86 instruction set (with some extensions that have been added with newer versions), but have different internal designs. Similarly, processors designed by other processor development companies, such as ARM Holdings, Ltd., MIPS, or their licensees or adopters, may share at least a portion of a common instruction set, but may include different processor designs.
For example, the same register architecture of the ISA may be implemented in different ways in different micro-architectures using new or well-known techniques, including dedicated physical registers, or one or more dynamically allocated physical registers using a register renaming mechanism (e.g., the use of a Register Alias Table (RAT), a Reorder Buffer (ROB) and a retirement register file). In one embodiment, registers may include one or more registers, register architectures, register files, or other register sets that may or may not be addressable by a software programmer.

[0051] In one embodiment, an instruction may include one or more instruction formats. In one embodiment, an instruction format may indicate various fields (number of bits, location of bits, etc.) to specify, among other things, the operation to be performed and the operand(s) on which that operation is to be performed. Some instruction formats may be further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields and/or defined to have a given field interpreted differently. In one embodiment, an instruction is expressed using an instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and specifies or indicates the operation and the operands upon which the operation will operate.

[0052] Scientific, financial, auto-vectorized general purpose, RMS (recognition, mining, and synthesis), and visual and multimedia applications (e.g., 2D/3D graphics, image processing, video compression/decompression, voice recognition algorithms and audio manipulation) may require the same operation to be performed on a large number of data items. In one embodiment, Single Instruction Multiple Data (SIMD) refers to a type of instruction that causes a processor to perform an operation on multiple data elements. SIMD technology may be used in processors that can logically divide the bits in a register into a number of fixed-sized or variable-sized data elements, each of which represents a separate value. For example, in one embodiment, the bits in a 64-bit register may be organized as a source operand containing four separate 16-bit data elements, each of which represents a separate 16-bit value. This type of data may be referred to as 'packed' data type or 'vector' data type, and operands of this data type are referred to as packed data operands or vector operands. In one embodiment, a packed data item or vector may be a sequence of packed data elements stored within a single register, and a packed data operand or a vector operand may be a source or destination operand of a SIMD instruction (or 'packed data instruction' or a 'vector instruction').
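As an illustrative aside only (not any particular claimed embodiment), the packed-data concept can be modeled in scalar C. The sketch below treats a 64-bit value as four independent 16-bit elements, as in the example above, and adds two such operands element-wise; a SIMD packed-add instruction achieves the same effect in a single operation, with all four element additions performed in parallel. The function name packed_add16x4 is chosen here purely for illustration.

    #include <stdint.h>

    /* Model of a 64-bit register holding four packed 16-bit elements.
     * A packed add operates on each element independently; carries do
     * not propagate across element boundaries as they would in an
     * ordinary 64-bit scalar add. */
    uint64_t packed_add16x4(uint64_t a, uint64_t b)
    {
        uint64_t result = 0;
        for (int i = 0; i < 4; i++) {
            uint16_t ea = (uint16_t)(a >> (16 * i));  /* extract element i of a */
            uint16_t eb = (uint16_t)(b >> (16 * i));  /* extract element i of b */
            uint16_t sum = (uint16_t)(ea + eb);       /* wraps within 16 bits   */
            result |= (uint64_t)sum << (16 * i);      /* repack element i       */
        }
        return result;
    }

Writing the result back over one of the two source operands models the case, noted below, in which a source register also serves as the destination register.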
In one embodiment, a SIMD instruction specifies a single vector operation to be performed on two source vector operands to generate a destination vector operand (also referred to as a result vector operand) of the same or different size, with the same or different number of data elements, and in the same or different data element order.

[0053] SIMD technology, such as that employed by the Intel® Core™ processors having an instruction set including x86, MMX™, Streaming SIMD Extensions (SSE), SSE2, SSE3, SSE4.1, and SSE4.2 instructions, ARM processors, such as the ARM Cortex® family of processors having an instruction set including the Vector Floating Point (VFP) and/or NEON instructions, and MIPS processors, such as the Loongson family of processors developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences, has enabled a significant improvement in application performance (Core™ and MMX™ are registered trademarks or trademarks of Intel Corporation of Santa Clara, Calif.).

[0054] In one embodiment, destination and source registers/data are generic terms to represent the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, "DEST1" may be a temporary storage register or other storage area, whereas "SRC1" and "SRC2" may be a first and second source storage register or other storage area, and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register). In one embodiment, one of the source registers may also act as a destination register by, for example, writing back the result of an operation performed on the first and second source data to one of the two source registers serving as a destination register.

[0055] Figure 1A is a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction in accordance with one embodiment of the present invention. System 100 includes a component, such as a processor 102, to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiment described herein. System 100 is representative of processing systems based on the PENTIUM III, PENTIUM 4, Xeon®, Itanium®, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software.

[0056] Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs.
Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that can perform one or more instructions in accordance with at least one embodiment.

[0057] Figure 1A is a block diagram of a computer system 100 formed with a processor 102 that includes one or more execution units 108 to perform an algorithm to perform at least one instruction in accordance with one embodiment of the present invention. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments can be included in a multiprocessor system. System 100 is an example of a 'hub' system architecture. The computer system 100 includes a processor 102 to process data signals. The processor 102 can be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 102 is coupled to a processor bus 110 that can transmit data signals between the processor 102 and other components in the system 100. The elements of system 100 perform their conventional functions that are well known to those familiar with the art.

[0058] In one embodiment, the processor 102 includes a Level 1 (L1) internal cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. Alternatively, in another embodiment, the cache memory can reside external to the processor 102. Other embodiments can also include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 can store different types of data in various registers including integer registers, floating point registers, status registers, and an instruction pointer register.

[0059] Execution unit 108, including logic to perform integer and floating point operations, also resides in the processor 102. The processor 102 also includes a microcode (ucode) ROM that stores microcode for certain macroinstructions. For one embodiment, execution unit 108 includes logic to handle a packed instruction set 109. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications can be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This can eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time.

[0060] Alternate embodiments of an execution unit 108 can also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 includes a memory 120. Memory 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device.
Memory 120 can store instructions and/or data represented by data signals that can be executed by the processor 102.

[0061] A system logic chip 116 is coupled to the processor bus 110 and memory 120. The system logic chip 116 in the illustrated embodiment is a memory controller hub (MCH). The processor 102 can communicate with the MCH 116 via a processor bus 110. The MCH 116 provides a high bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. The MCH 116 is to direct data signals between the processor 102, memory 120, and other components in the system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 can provide a graphics port for coupling to a graphics controller 112. The MCH 116 is coupled to memory 120 through a memory interface 118. The graphics card 112 is coupled to the MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114.

[0062] System 100 uses a proprietary hub interface bus 122 to couple the MCH 116 to the I/O controller hub (ICH) 130. The ICH 130 provides direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 120, chipset, and processor 102. Some examples are the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. The data storage device 124 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.

[0063] For another embodiment of a system, an instruction in accordance with one embodiment can be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system is a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks such as a memory controller or graphics controller can also be located on a system on a chip.

[0064] Figure 1B illustrates a data processing system 140 which implements the principles of one embodiment of the present invention. It will be readily appreciated by one of skill in the art that the embodiments described herein can be used with alternative processing systems without departing from the scope of embodiments of the invention.

[0065] Computer system 140 comprises a processing core 159 capable of performing at least one instruction in accordance with one embodiment. For one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and, by being represented on a machine readable media in sufficient detail, may be suitable to facilitate said manufacture.

[0066] Processing core 159 comprises an execution unit 142, a set of register file(s) 145, and a decoder 144. Processing core 159 also includes additional circuitry (not shown) which is not necessary to the understanding of embodiments of the present invention. Execution unit 142 is used for executing instructions received by processing core 159.
In addition to performing typical processor instructions, execution unit 142 can perform instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 includes instructions for performing embodiments of the invention and other packed instructions. Execution unit 142 is coupled to register file 145 by an internal bus. Register file 145 represents a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the storage area used for storing the packed data is not critical. Execution unit 142 is coupled to decoder 144. Decoder 144 is used for decoding instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. In one embodiment, the decoder is used to interpret the opcode of the instruction, which will indicate what operation should be performed on the corresponding data indicated within the instruction.

[0067] Processing core 159 is coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158.

[0068] One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 capable of performing SIMD operations including a text string comparison operation. Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM).

[0069] Figure 1C illustrates another alternative embodiment of a data processing system capable of executing instructions to provide SIMD SM4 cryptographic block cipher functionality. In accordance with one alternative embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. The input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 is capable of performing operations including instructions in accordance with one embodiment.
Processing core 170 may be suitable for manufacture in one or more process technologies and, by being represented on a machine readable media in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170.

[0070] For one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register file(s) 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment for execution by execution unit 162. For alternative embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165B to decode instructions of instruction set 163. Processing core 170 also includes additional circuitry (not shown) which is not necessary to the understanding of embodiments of the present invention.

[0071] In operation, the main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with the cache memory 167 and the input/output system 168. Embedded within the stream of data processing instructions are SIMD coprocessor instructions. The decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, the main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 171, from which they are received by any attached SIMD coprocessors. In this case, the SIMD coprocessor 161 will accept and execute any received SIMD coprocessor instructions intended for it.

[0072] Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames. For one embodiment of processing core 170, main processor 166 and SIMD coprocessor 161 are integrated into a single processing core 170 comprising an execution unit 162, a set of register file(s) 164, and a decoder 165 to recognize instructions of instruction set 163 including instructions in accordance with one embodiment.

[0073] Figure 2 is a block diagram of the micro-architecture for a processor 200 that includes logic circuits to perform instructions in accordance with one embodiment of the present invention. In some embodiments, an instruction in accordance with one embodiment can be implemented to operate on data elements having sizes of byte, word, doubleword, quadword, etc., as well as datatypes, such as single and double precision integer and floating point datatypes. In one embodiment the in-order front end 201 is the part of the processor 200 that fetches instructions to be executed and prepares them to be used later in the processor pipeline. The front end 201 may include several units. In one embodiment, the instruction prefetcher 226 fetches instructions from memory and feeds them to an instruction decoder 228 which in turn decodes or interprets them.
For example, in one embodiment, the decoder decodes a received instruction into one or more operations called "micro-instructions" or "micro-operations" (also called micro ops or uops) that the machine can execute. In other embodiments, the decoder parses the instruction into an opcode and corresponding data and control fields that are used by the micro-architecture to perform operations in accordance with one embodiment. In one embodiment, the trace cache 230 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 234 for execution. When the trace cache 230 encounters a complex instruction, the microcode ROM 232 provides the uops needed to complete the operation.

[0074] Some instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete an instruction, the decoder 228 accesses the microcode ROM 232 to perform the instruction. For one embodiment, an instruction can be decoded into a small number of micro ops for processing at the instruction decoder 228. In another embodiment, an instruction can be stored within the microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. The trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the micro-code sequences to complete one or more instructions in accordance with one embodiment from the micro-code ROM 232. After the microcode ROM 232 finishes sequencing micro-ops for an instruction, the front end 201 of the machine resumes fetching micro-ops from the trace cache 230.

[0075] The out-of-order execution engine 203 is where the instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. The uop schedulers 202, 204, 206 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 202 of one embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

[0076] Register files 208, 210 sit between the schedulers 202, 204, 206 and the execution units 212, 214, 216, 218, 220, 222, 224 in the execution block 211. There is a separate register file 208, 210 for integer and floating point operations, respectively. Each register file 208, 210 of one embodiment also includes a bypass network that can bypass or forward just completed results that have not yet been written into the register file to new dependent uops.
The integer register file 208 and the floating point register file 210 are also capable of communicating data with each other. For one embodiment, the integer register file 208 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 210 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.

[0077] The execution block 211 contains the execution units 212, 214, 216, 218, 220, 222, 224, where the instructions are actually executed. This section includes the register files 208, 210 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 200 of one embodiment comprises a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, and floating point move unit 224. For one embodiment, the floating point execution blocks 222, 224 execute floating point, MMX, SIMD, SSE, or other operations. The floating point ALU 222 of one embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present invention, instructions involving a floating point value may be handled with the floating point hardware. In one embodiment, the ALU operations go to the high-speed ALU execution units 216, 218. The fast ALUs 216, 218 of one embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 220 as the slow ALU 220 includes integer execution hardware for long-latency operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 212, 214. For one embodiment, the integer ALUs 216, 218, 220 are described in the context of performing integer operations on 64 bit data operands. In alternative embodiments, the ALUs 216, 218, 220 can be implemented to support a variety of data bits including 16, 32, 128, 256, etc. Similarly, the floating point units 222, 224 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 222, 224 can operate on 128 bit wide packed data operands in conjunction with SIMD and multimedia instructions.

[0078] In one embodiment, the uop schedulers 202, 204, 206 dispatch dependent operations before the parent load has finished executing. As uops are speculatively scheduled and executed in processor 200, the processor 200 also includes logic to handle memory misses. If a data load misses in the data cache, there can be dependent operations in flight in the pipeline that have left the scheduler with temporarily incorrect data. A replay mechanism tracks and re-executes instructions that use incorrect data. Only the dependent operations need to be replayed and the independent ones are allowed to complete. The schedulers and replay mechanism of one embodiment of a processor are also designed to catch instructions that provide SIMD SM4 cryptographic block cipher functionality.

[0079] The term "registers" may refer to the on-board processor storage locations that are used as part of instructions to identify operands.
In other words, registers may be those that are usable from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment is capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data. A register file of one embodiment also contains eight multimedia SIMD registers for packed data. For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64-bit wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In one embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, integer and floating point data are either contained in the same register file or different register files. Furthermore, in one embodiment, floating point and integer data may be stored in different registers or the same registers.

[0080] In the examples of the following figures, a number of data operands are described. Figure 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention. Fig. 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128-bit wide operands. The packed byte format 310 of this example is 128 bits long and contains sixteen packed byte data elements. A byte is defined here as 8 bits of data. Information for each byte data element is stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 120 through bit 127 for byte 15. Thus, all available bits are used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in parallel.

[0081] Generally, a data element is an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in an XMM register is 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register is 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in Fig. 3A are 128 bits long, embodiments of the present invention can also operate with 64-bit wide, 256-bit wide, 512-bit wide, or other sized operands.
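As a minimal illustration of these in-register layouts (a sketch, not any claimed structure), the byte, word, and doubleword views of a single 128-bit operand can be modeled in C with a union; the element counts follow directly from dividing the 128-bit register width by the element width in bits:

    #include <stdint.h>

    /* One 128-bit operand viewed as packed bytes, words, or doublewords,
     * corresponding to formats 310, 320, and 330 of Fig. 3A (the latter
     * two are described just below). Element counts are 128 divided by
     * the element width: 16 bytes, 8 words, or 4 doublewords. */
    typedef union {
        uint8_t  byte[16];   /* packed byte format       */
        uint16_t word[8];    /* packed word format       */
        uint32_t dword[4];   /* packed doubleword format */
    } xmm128_t;

    /* On a little-endian host such as x86, byte[0] occupies bits 7:0,
     * byte[1] bits 15:8, and so on up to byte[15] in bits 127:120, so
     * all available bits of the register are used. */

The same union extends naturally to the 64-bit, 256-bit, and 512-bit operand widths mentioned above by scaling the array lengths accordingly.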
The packed word format 320 of this example is 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. The packed doubleword format 330 of Fig. 3A is 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty-two bits of information. A packed quadword is 128 bits long and contains two packed quadword data elements.

[0082] Figure 3B illustrates alternative in-register data storage formats. Each packed data can include more than one independent data element. Three packed data formats are illustrated: packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contains fixed-point data elements. For an alternative embodiment, one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements. One alternative embodiment of packed half 341 is one hundred twenty-eight bits long containing eight 16-bit data elements. One embodiment of packed single 342 is one hundred twenty-eight bits long and contains four 32-bit data elements. One embodiment of packed double 343 is one hundred twenty-eight bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96-bits, 160-bits, 192-bits, 224-bits, 256-bits, 512-bits or more.

[0083] Figure 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element is stored in bit seven through bit zero for byte zero, bit fifteen through bit eight for byte one, bit twenty-three through bit sixteen for byte two, etc., and finally bit one hundred twenty through bit one hundred twenty-seven for byte fifteen. Thus, all available bits are used in the register. This storage arrangement can increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element is the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero are stored in a SIMD register. Signed packed word representation 347 is similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element is the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 is similar to unsigned packed doubleword in-register representation 348.
Note that the necessary sign bit is the thirty-second bit of each doubleword data element.

[0084] Figure 3D is a depiction of one embodiment of an operation encoding (opcode) format 360, having thirty-two or more bits, and register/memory operand addressing modes corresponding with a type of opcode format described in the "Intel® 64 and IA-32 Intel Architecture Software Developer's Manual Combined Volumes 2A and 2B: Instruction Set Reference A-Z," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/products/processor/manuals/. In one embodiment, an instruction may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. For one embodiment, destination operand identifier 366 is the same as source operand identifier 364, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 366 is the same as source operand identifier 365, whereas in other embodiments they are different. In one embodiment, one of the source operands identified by source operand identifiers 364 and 365 is overwritten by the results of the instruction, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element. For one embodiment, operand identifiers 364 and 365 may be used to identify 32-bit or 64-bit source and destination operands.

[0085] Figure 3E is a depiction of another alternative operation encoding (opcode) format 370, having forty or more bits. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. An instruction according to one embodiment may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. For one embodiment, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. For one embodiment, destination operand identifier 376 is the same as source operand identifier 374, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 376 is the same as source operand identifier 375, whereas in other embodiments they are different. In one embodiment, an instruction operates on one or more of the operands identified by operand identifiers 374 and 375 and one or more operands identified by the operand identifiers 374 and 375 is overwritten by the results of the instruction, whereas in other embodiments, operands identified by identifiers 374 and 375 are written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.

[0086] Turning next to Figure 3F, in some alternative embodiments, 64-bit (or 128-bit, or 256-bit, or 512-bit or more) single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. For alternative embodiments, the type of CDP instruction operation may be encoded by one or more of fields 383, 384, 387, and 388.
Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor can operate on 8, 16, 32, and 64 bit values. For one embodiment, an instruction is performed on integer data elements. In some embodiments, an instruction may be executed conditionally, using condition field 381. For some embodiments, source data sizes may be encoded by field 383. In some embodiments, Zero (Z), negative (N), carry (C), and overflow (V) detection can be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.

[0087] Turning next to Figure 3G, depicted is another alternative operation encoding (opcode) format 397, to provide SIMD SM4 cryptographic block cipher functionality according to another embodiment, corresponding with a type of opcode format described in the "Intel® Advanced Vector Extensions Programming Reference," which is available from Intel Corp., Santa Clara, CA on the world-wide-web (www) at intel.com/products/processor/manuals/.

[0088] The original x86 instruction set provided for a 1-byte opcode with various formats of address syllable and immediate operand contained in additional bytes whose presence was known from the first "opcode" byte. Additionally, there were certain byte values that were reserved as modifiers to the opcode (called prefixes, as they had to be placed before the instruction). When the original palette of 256 opcode bytes (including these special prefix values) was exhausted, a single byte was dedicated as an escape to a new set of 256 opcodes. As vector instructions (e.g., SIMD) were added, a need for more opcodes was generated, and the "two byte" opcode map also was insufficient, even when expanded through the use of prefixes. To this end, new instructions were added in additional maps which use 2 bytes plus an optional prefix as an identifier.

[0089] Additionally, in order to facilitate additional registers in 64-bit mode, an additional prefix may be used (called "REX") in between the prefixes and the opcode (and any escape bytes necessary to determine the opcode). In one embodiment, the REX may have 4 "payload" bits to indicate use of additional registers in 64-bit mode. In other embodiments it may have fewer or more than 4 bits. The general format of at least one instruction set (which corresponds generally with format 360 and/or format 370) is illustrated generically by the following:

[prefixes] [rex] escape [escape2] opcode modrm (etc.)

[0090] Opcode format 397 corresponds with opcode format 370 and comprises optional VEX prefix bytes 391 (beginning with C4 hex in one embodiment) to replace most other commonly used legacy instruction prefix bytes and escape codes. For example, the following illustrates an embodiment using two fields to encode an instruction, which may be used when a second escape code is present in the original instruction, or when extra bits (e.g., the XB and W fields) in the REX field need to be used.
In the embodiment illustrated below, legacy escape is represented by a new escape value, legacy prefixes are fully compressed as part of the "payload" bytes, legacy prefixes are reclaimed and available for future expansion, the second escape code is compressed in a "map" field, with future map or feature space available, and new features are added (e.g., increased vector length and an additional source register specifier):

[prefixes] [rex] escape [escape2] opcode modrm [sib] [disp] [imm]

vex1 RXBmmmmm WvvvLpp opcode modrm [sib] [disp] [imm]

[0091] An instruction according to one embodiment may be encoded by one or more of fields 391 and 392. Up to four operand locations per instruction may be identified by field 391 in combination with source operand identifiers 374 and 375 and in combination with an optional scale-index-base (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395. For one embodiment, VEX prefix bytes 391 may be used to identify 32-bit or 64-bit source and destination operands and/or 128-bit or 256-bit SIMD register or memory operands. For one embodiment, the functionality provided by opcode format 397 may be redundant with opcode format 370, whereas in other embodiments they are different. Opcode formats 370 and 397 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD field 373 and by optional (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395.

[0092] Turning next to Figure 3H, depicted is another alternative operation encoding (opcode) format 398, to provide SIMD SM4 cryptographic block cipher functionality according to another embodiment. Opcode format 398 corresponds with opcode formats 370 and 397 and comprises optional EVEX prefix bytes 396 (beginning with 62 hex in one embodiment) to replace most other commonly used legacy instruction prefix bytes and escape codes and provide additional functionality. An instruction according to one embodiment may be encoded by one or more of fields 396 and 392. Up to four operand locations per instruction and a mask may be identified by field 396 in combination with source operand identifiers 374 and 375 and in combination with an optional scale-index-base (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395. For one embodiment, EVEX prefix bytes 396 may be used to identify 32-bit or 64-bit source and destination operands and/or 128-bit, 256-bit or 512-bit SIMD register or memory operands. For one embodiment, the functionality provided by opcode format 398 may be redundant with opcode formats 370 or 397, whereas in other embodiments they are different. Opcode format 398 allows register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing, with masks, specified in part by MOD field 373 and by optional (SIB) identifier 393, an optional displacement identifier 394, and an optional immediate byte 395.
The general format of at least one instruction set (which corresponds generally with format 360 and/or format 370) is illustrated generically by the following:

evex1 RXBmmmmm WvvvLpp evex4 opcode modrm [sib] [disp] [imm]

[0093] For one embodiment an instruction encoded according to the EVEX format 398 may have additional "payload" bits that may be used to provide SIMD SM4 cryptographic block cipher functionality with additional new features such as, for example, a user configurable mask register, or an additional operand, or selections from among 128-bit, 256-bit or 512-bit vector registers, or more registers from which to select, etc.

[0094] For example, where VEX format 397 may be used to provide SIMD SM4 cryptographic block cipher functionality with an implicit mask, the EVEX format 398 may be used to provide SIMD SM4 cryptographic block cipher functionality with an explicit user configurable mask. Additionally, where VEX format 397 may be used to provide SIMD SM4 cryptographic block cipher functionality on 128-bit or 256-bit vector registers, EVEX format 398 may be used to provide SIMD SM4 cryptographic block cipher functionality on 128-bit, 256-bit, 512-bit or larger (or smaller) vector registers.

[0095] Example instructions to provide SIMD SM4 cryptographic block cipher functionality are illustrated by the following examples:

SM4 rnd block/key — source1/destination: Vmm1; source2: Vmm2/Mem-V; source3: Imm8. Perform n SM4 round block encryptions or key generations, as indicated by Imm8, on the input block or 4-key schedule in each 128-bit lane of Vmm1 using round keys or constants in each 128-bit lane of Vmm2 or Mem-V, respectively. Store the resulting output blocks and/or 4-key schedules in each respective 128-bit lane of Vmm1.

SM4 2rnd block/key — source1/destination: Vmm1; source2: Vmm2; source3: Vmm3/Mem-V; source4: Imm8. Perform two SM4 round block encryptions or key generations, as indicated by Imm8, on the input block or 4-key schedule in each 128-bit lane of Vmm2 using round keys or constants in each 128-bit lane of Vmm3 or Mem-V, respectively. Store the resulting output blocks and/or 4-key schedules in respective 128-bit lanes of Vmm1.

SM4 4rnd block/key — source1/destination: Vmm1; source2: Vmm2/Mem-V; source3: Imm8. Perform four SM4 round block encryptions or key generations, as indicated by Imm8, on the input block or 4-key schedule in each 128-bit lane of Vmm1 using round keys or constants in each 128-bit lane of Vmm2 or Mem-V, respectively. Store the resulting output blocks and/or 4-key schedules in respective 128-bit lanes of Vmm1.

SM4 round block — source1/destination: Vmm1; source2: Vmm2/Mem-V; source3: Imm8. Perform n SM4 round block encryptions, as indicated by Imm8, on input blocks in each 128-bit lane of Vmm1 using round keys in each respective 128-bit lane of Vmm2 or Mem-V. Store the resulting output blocks in respective 128-bit lanes of Vmm1.

SM4 round key — source1/destination: Vmm1; source2: Vmm2/Mem-V; source3: Imm8. Perform n SM4 round key generations, as indicated by Imm8, on the 4-key schedule in each 128-bit lane of Vmm1 using constants in each respective 128-bit lane of Vmm2 or Mem-V. Store the resulting 4-key schedules in respective 128-bit lanes of Vmm1.
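Purely as a scalar reference model, and not as a description of any claimed hardware, the per-round operations underlying the example instructions above may be sketched in C. The rotations and mixing functions below follow the published SM4 specification (substitution function T = L composed with the byte-wise S-box substitution tau for round exchanges, and T' = L' composed with tau for key generations); sm4_sbox denotes the standard 256-entry SM4 S-box, which is assumed rather than reproduced, and all function names are illustrative only:

    #include <stdint.h>

    extern const uint8_t sm4_sbox[256];  /* standard SM4 S-box, assumed */

    static uint32_t rotl32(uint32_t x, int r)
    {
        return (x << r) | (x >> (32 - r));
    }

    /* Non-linear substitution tau: apply the S-box to each byte of the word. */
    static uint32_t sm4_tau(uint32_t x)
    {
        return ((uint32_t)sm4_sbox[(x >> 24) & 0xff] << 24) |
               ((uint32_t)sm4_sbox[(x >> 16) & 0xff] << 16) |
               ((uint32_t)sm4_sbox[(x >>  8) & 0xff] <<  8) |
               ((uint32_t)sm4_sbox[x & 0xff]);
    }

    /* T = L o tau: mixing used by the round-exchange (encryption) path. */
    static uint32_t sm4_T(uint32_t x)
    {
        uint32_t b = sm4_tau(x);
        return b ^ rotl32(b, 2) ^ rotl32(b, 10) ^ rotl32(b, 18) ^ rotl32(b, 24);
    }

    /* T' = L' o tau: mixing used by the key-generation path. */
    static uint32_t sm4_Tprime(uint32_t x)
    {
        uint32_t b = sm4_tau(x);
        return b ^ rotl32(b, 13) ^ rotl32(b, 23);
    }

    /* One slice of n rounds over a 128-bit state x[0..3] (four 32-bit words).
     * With key_gen = 0 and rk[] holding round keys, this performs n SM4-round
     * exchanges; with key_gen = 1 and rk[] holding the constants CK, it
     * performs n key generations, leaving the next round keys in x[]. */
    static void sm4_slice(uint32_t x[4], const uint32_t rk[], int n, int key_gen)
    {
        for (int i = 0; i < n; i++) {
            uint32_t t   = x[1] ^ x[2] ^ x[3] ^ rk[i];
            uint32_t out = x[0] ^ (key_gen ? sm4_Tprime(t) : sm4_T(t));
            x[0] = x[1]; x[1] = x[2]; x[2] = x[3]; x[3] = out; /* bypass to next round */
        }
    }

Under this model, a four-round slice corresponds to sm4_slice(x, rk, 4, key_gen), and the thirty-two rounds of the cipher correspond to eight such slices (32/4 = 8).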
[0096] Example instructions illustrated above may specify a first and a second source data operand set (e.g. as Vmm1 and Vmm2/Mem-V, which may be 256 bits or 512 bits, etc.), and substitution function indicators (e.g. in an immediate operand, Imm8). Embodiments of a processor for executing the example instructions illustrated above may include encryption units, responsive to the instruction, to: perform a slice of SM4-round exchanges on a portion of the first source data operand set with corresponding keys from the second source data operand set in response to a substitution function indicator that indicates a first substitution function (e.g. T or L, based on a respective value of one in Imm8), perform a slice of SM4 key generations using another portion of the first source data operand set with corresponding constants from the second source data operand set in response to a substitution function indicator that indicates a second substitution function (e.g. T' or L', based on a respective value of zero in Imm8), and store a set of result elements of the first instruction in a SIMD destination register.

[0097] It will be appreciated that by performing both SM4-round exchanges and SM4 key generations with the same SIMD instruction, encryption or decryption may be processed concurrently with key expansion in a small buffer (e.g. 256 bits). Since 128 bits (e.g. four 32-bit word elements) are required for each new round exchange, or key generation, the newest 128 bits from each round may be pipelined or bypassed to the next consecutive round. In some embodiments a slice may comprise four rounds of SM4-round exchanges and four rounds of SM4 key generations. For such embodiments, thirty-two rounds of SM4-round exchanges and SM4 key generations may be performed using eight (or nine) SM4 round slice operations. In some embodiments each 128-bit lane of a 256-bit data path or of a 512-bit data path may be selected for processing a slice of SM4-round exchanges or for processing a slice of SM4 key generations based on a corresponding value in an immediate operand of the instruction that indicates a particular substitution function (e.g. T or T', or alternatively L or L'). In some alternative embodiments the lanes of a data path for processing a slice of SM4-round exchanges and for processing a slice of SM4 key generations may be predetermined and/or fixed. In some embodiments a slice may be implemented by micro-instructions (or micro-ops or u-ops) and results may be bypassed from one micro-instruction to the next micro-instruction. In some alternative embodiments a slice may be implemented by multiple layers (e.g. two, or four, or eight, etc.) of logic in hardware, or alternatively by some combination of micro-instructions and multiple layers of logic in hardware. In some embodiments a slice may comprise a number of rounds (e.g. one, two, four, eight, sixteen, or thirty-two) of SM4-round exchanges and SM4 key generations indicated by a value in an immediate operand of the instruction. In some alternative embodiments the number of rounds in a slice may be indicated by the instruction mnemonic and/or by an operation encoding (or opcode). In some embodiments wherein a slice may comprise a plurality of rounds (e.g. four, eight, sixteen, thirty-two, etc.), key information in a source operand may be updated in each round and supplied to block processing logic for the next round, and constants may be read (e.g. from a memory operand of 128-bits, 256-bits, 512-bits, 1024-bits, etc.) to be supplied to key processing logic for each successive round.
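Continuing the scalar sketch introduced above (illustrative only, not the claimed logic), the following shows thirty-two rounds composed from eight four-round slices, with each slice's key generations producing exactly the four round keys consumed by its round exchanges. A hardware embodiment that runs the two slices concurrently in separate 128-bit lanes of one instruction would instead consume the keys produced by the previous slice operation, which is why eight or nine slice operations may be needed:

    /* Thirty-two SM4 rounds built from eight 4-round slices, modeling the
     * concurrent block processing and key expansion described above.
     * block[] holds the four 32-bit state words, keys[] the current 4-key
     * schedule, ck[] the 32 SM4 key-schedule constants, and rk[] receives
     * the 32 round keys. The FK whitening of the initial key and the
     * standard reversal of the final state words are omitted for brevity. */
    void sm4_encrypt_by_slices(uint32_t block[4], uint32_t keys[4],
                               const uint32_t ck[32], uint32_t rk[32])
    {
        for (int s = 0; s < 8; s++) {
            /* Key-generation slice: derive the next four round keys. */
            sm4_slice(keys, &ck[4 * s], 4, 1);
            for (int j = 0; j < 4; j++)
                rk[4 * s + j] = keys[j];
            /* Round-exchange slice: consume the four keys just produced. */
            sm4_slice(block, &rk[4 * s], 4, 0);
        }
    }

The 256-bit working set here (128 bits of block state plus 128 bits of key schedule) matches the small buffer noted in paragraph [0097].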
[0098] Figure 4A is a block diagram illustrating an in-order pipeline and a register renaming stage, out-of-order issue/execution pipeline according to at least one embodiment of the invention. Figure 4B is a block diagram illustrating an in-order architecture core and register renaming logic, out-of-order issue/execution logic to be included in a processor according to at least one embodiment of the invention. The solid lined boxes in Figure 4A illustrate the in-order pipeline, while the dashed lined boxes illustrate the register renaming, out-of-order issue/execution pipeline. Similarly, the solid lined boxes in Figure 4B illustrate the in-order architecture logic, while the dashed lined boxes illustrate the register renaming logic and out-of-order issue/execution logic.

[0099] In Figure 4A, a processor pipeline 400 includes a fetch stage 402, a length decode stage 404, a decode stage 406, an allocation stage 408, a renaming stage 410, a scheduling (also known as a dispatch or issue) stage 412, a register read/memory read stage 414, an execute stage 416, a write back/memory write stage 418, an exception handling stage 422, and a commit stage 424.

[00100] In Figure 4B, arrows denote a coupling between two or more units and the direction of the arrow indicates a direction of data flow between those units. Figure 4B shows processor core 490 including a front end unit 430 coupled to an execution engine unit 450, and both are coupled to a memory unit 470.

[00101] The core 490 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 490 may be a special-purpose core, such as, for example, a network or communication core, compression engine, graphics core, or the like.

[00102] The front end unit 430 includes a branch prediction unit 432 coupled to an instruction cache unit 434, which is coupled to an instruction translation lookaside buffer (TLB) 436, which is coupled to an instruction fetch unit 438, which is coupled to a decode unit 440. The decode unit or decoder may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decoder may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. The instruction cache unit 434 is further coupled to a level 2 (L2) cache unit 476 in the memory unit 470. The decode unit 440 is coupled to a rename/allocator unit 452 in the execution engine unit 450.

[00103] The execution engine unit 450 includes the rename/allocator unit 452 coupled to a retirement unit 454 and a set of one or more scheduler unit(s) 456. The scheduler unit(s) 456 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 456 is coupled to the physical register file(s) unit(s) 458. Each of the physical register file(s) units 458 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, etc., status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc.
The physical register file(s) unit(s) 458 is overlapped by the retirement unit 454 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). Generally, the architectural registers are visible from the outside of the processor or from a programmer's perspective. The registers are not limited to any known particular type of circuit. Various different types of registers are suitable as long as they are capable of storing and providing data as described herein. Examples of suitable registers include, but are not limited to, dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. The retirement unit 454 and the physical register file(s) unit(s) 458 are coupled to the execution cluster(s) 460. The execution cluster(s) 460 includes a set of one or more execution units 462 and a set of one or more memory access units 464. The execution units 462 may perform various operations (e.g., shifts, addition, subtraction, multiplication) on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions. The scheduler unit(s) 456, physical register file(s) unit(s) 458, and execution cluster(s) 460 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster, and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 464). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order. [00104] The set of memory access units 464 is coupled to the memory unit 470, which includes a data TLB unit 472 coupled to a data cache unit 474 coupled to a level 2 (L2) cache unit 476. In one exemplary embodiment, the memory access units 464 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 472 in the memory unit 470.
The L2 cache unit 476 is coupled to one or more other levels of cache and eventually to a main memory. [00105] By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 400 as follows: 1) the instruction fetch 438 performs the fetch and length decoding stages 402 and 404; 2) the decode unit 440 performs the decode stage 406; 3) the rename/allocator unit 452 performs the allocation stage 408 and renaming stage 410; 4) the scheduler unit(s) 456 performs the schedule stage 412; 5) the physical register file(s) unit(s) 458 and the memory unit 470 perform the register read/memory read stage 414; 6) the execution cluster 460 performs the execute stage 416; 7) the memory unit 470 and the physical register file(s) unit(s) 458 perform the write back/memory write stage 418; 8) various units may be involved in the exception handling stage 422; and 9) the retirement unit 454 and the physical register file(s) unit(s) 458 perform the commit stage 424. [00106] The core 490 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA). [00107] It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology). [00108] While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 434/474 and a shared L2 cache unit 476, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor. [00109] Figure 5 is a block diagram of a single core processor and a multicore processor 500 with integrated memory controller and graphics according to embodiments of the invention. The solid lined boxes in Figure 5 illustrate a processor 500 with a single core 502A, a system agent 510, and a set of one or more bus controller units 516, while the optional addition of the dashed lined boxes illustrates an alternative processor 500 with multiple cores 502A-N, a set of one or more integrated memory controller unit(s) 514 in the system agent unit 510, and an integrated graphics logic 508. [00110] The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 506, and external memory (not shown) coupled to the set of integrated memory controller units 514.
The set of shared cache units 506 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 512 interconnects the integrated graphics logic 508, the set of shared cache units 506, and the system agent unit 510, alternative embodiments may use any number of well-known techniques for interconnecting such units. [00111] In some embodiments, one or more of the cores 502A-N are capable of multithreading. The system agent 510 includes those components coordinating and operating cores 502A-N. The system agent unit 510 may include for example a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 502A-N and the integrated graphics logic 508. The display unit is for driving one or more externally connected displays. [00112] The cores 502A-N may be homogenous or heterogeneous in terms of architecture and/or instruction set. For example, some of the cores 502A-N may be in order while others are out-of-order. As another example, two or more of the cores 502A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set. [00113] The processor may be a general-purpose processor, such as a Core™ i3, i5, i7, 2 Duo and Quad, Xeon™, Itanium™, XScale™ or StrongARM™ processor, which are available from Intel Corporation, of Santa Clara, Calif. Alternatively, the processor may be from another company, such as ARM Holdings, Ltd, MIPS, etc. The processor may be a special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, co-processor, embedded processor, or the like. The processor may be implemented on one or more chips. The processor 500 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS. [00114] Figures 6-8 are exemplary systems suitable for including the processor 500, while Figure 9 is an exemplary system on a chip (SoC) that may include one or more of the cores 502. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable. [00115] Referring now to Figure 6, shown is a block diagram of a system 600 in accordance with one embodiment of the present invention. The system 600 may include one or more processors 610, 615, which are coupled to graphics memory controller hub (GMCH) 620. The optional nature of additional processors 615 is denoted in Figure 6 with broken lines. [00116] Each processor 610, 615 may be some version of the processor 500. However, it should be noted that it is unlikely that integrated graphics logic and integrated memory control units would exist in the processors 610, 615.
Figure 6 illustrates that the GMCH 620 may be coupled to a memory 640 that may be, for example, a dynamic random access memory (DRAM). The DRAM may, for at least one embodiment, be associated with a non-volatile cache. [00117] The GMCH 620 may be a chipset, or a portion of a chipset. The GMCH 620 may communicate with the processor(s) 610, 615 and control interaction between the processor(s) 610, 615 and memory 640. The GMCH 620 may also act as an accelerated bus interface between the processor(s) 610, 615 and other elements of the system 600. For at least one embodiment, the GMCH 620 communicates with the processor(s) 610, 615 via a multi-drop bus, such as a frontside bus (FSB) 695. [00118] Furthermore, GMCH 620 is coupled to a display 645 (such as a flat panel display). GMCH 620 may include an integrated graphics accelerator. GMCH 620 is further coupled to an input/output (I/O) controller hub (ICH) 650, which may be used to couple various peripheral devices to system 600. Shown for example in the embodiment of Figure 6 is an external graphics device 660, which may be a discrete graphics device coupled to ICH 650, along with another peripheral device 670. [00119] Alternatively, additional or different processors may also be present in the system 600. For example, additional processor(s) 615 may include additional processor(s) that are the same as processor 610, additional processor(s) that are heterogeneous or asymmetric to processor 610, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor. There can be a variety of differences between the physical resources 610, 615 in terms of a spectrum of metrics of merit including architectural, micro-architectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processors 610, 615. For at least one embodiment, the various processors 610, 615 may reside in the same die package. [00120] Referring now to Figure 7, shown is a block diagram of a second system 700 in accordance with an embodiment of the present invention. As shown in Figure 7, multiprocessor system 700 is a point-to-point interconnect system, and includes a first processor 770 and a second processor 780 coupled via a point-to-point interconnect 750. Each of processors 770 and 780 may be some version of the processor 500, as may one or more of the processors 610, 615. [00121] While shown with only two processors 770, 780, it is to be understood that the scope of the present invention is not so limited. In other embodiments, one or more additional processors may be present in a given processor. [00122] Processors 770 and 780 are shown including integrated memory controller units 772 and 782, respectively. Processor 770 also includes as part of its bus controller units point-to-point (P-P) interfaces 776 and 778; similarly, second processor 780 includes P-P interfaces 786 and 788. Processors 770, 780 may exchange information via a point-to-point (P-P) interface 750 using P-P interface circuits 778, 788. As shown in Figure 7, IMCs 772 and 782 couple the processors to respective memories, namely a memory 732 and a memory 734, which may be portions of main memory locally attached to the respective processors. [00123] Processors 770, 780 may each exchange information with a chipset 790 via individual P-P interfaces 752, 754 using point-to-point interface circuits 776, 794, 786, 798.
Chipset 790 may also exchange information with a high-performance graphics circuit 738 via a high-performance graphics interface 739. [00124] A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode. [00125] Chipset 790 may be coupled to a first bus 716 via an interface 796. In one embodiment, first bus 716 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited. [00126] As shown in Figure 7, various I/O devices 714 may be coupled to first bus 716, along with a bus bridge 718 which couples first bus 716 to a second bus 720. In one embodiment, second bus 720 may be a low pin count (LPC) bus. Various devices may be coupled to second bus 720 including, for example, a keyboard and/or mouse 722, communication devices 727 and a storage unit 728 such as a disk drive or other mass storage device which may include instructions/code and data 730, in one embodiment. Further, an audio I/O 724 may be coupled to second bus 720. Note that other architectures are possible. For example, instead of the point-to-point architecture of Figure 7, a system may implement a multi-drop bus or other such architecture. [00127] Referring now to Figure 8, shown is a block diagram of a third system 800 in accordance with an embodiment of the present invention. Like elements in Figure 7 and Figure 8 bear like reference numerals, and certain aspects of Figure 7 have been omitted from Figure 8 in order to avoid obscuring other aspects of Figure 8. [00128] Figure 8 illustrates that the processors 870, 880 may include integrated memory and I/O control logic ("CL") 872 and 882, respectively. For at least one embodiment, the CL 872, 882 may include integrated memory controller units such as that described above in connection with Figures 5 and 7. In addition, CL 872, 882 may also include I/O control logic. Figure 8 illustrates that not only are the memories 832, 834 coupled to the CL 872, 882, but also that I/O devices 814 are also coupled to the control logic 872, 882. Legacy I/O devices 815 are coupled to the chipset 890. [00129] Referring now to Figure 9, shown is a block diagram of a SoC 900 in accordance with an embodiment of the present invention. Similar elements in Figure 5 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs.
In Figure 9, an interconnect unit(s) 902 is coupled to: an application processor 910 which includes a set of one or more cores 502A-N and shared cache unit(s) 506; a system agent unit 510; a bus controller unit(s) 516; an integrated memory controller unit(s) 514; a set of one or more media processors 920 which may include integrated graphics logic 508, an image processor 924 for providing still and/or video camera functionality, an audio processor 926 for providing hardware audio acceleration, and a video processor 928 for providing video encode/decode acceleration; a static random access memory (SRAM) unit 930; a direct memory access (DMA) unit 932; and a display unit 940 for coupling to one or more external displays. [00130] Figure 10 illustrates a processor containing a central processing unit (CPU) and a graphics processing unit (GPU), which may perform at least one instruction according to one embodiment. In one embodiment, an instruction to perform operations according to at least one embodiment could be performed by the CPU. In another embodiment, the instruction could be performed by the GPU. In still another embodiment, the instruction may be performed through a combination of operations performed by the GPU and the CPU. For example, in one embodiment, an instruction in accordance with one embodiment may be received and decoded for execution on the GPU. However, one or more operations within the decoded instruction may be performed by a CPU and the result returned to the GPU for final retirement of the instruction. Conversely, in some embodiments, the CPU may act as the primary processor and the GPU as the co-processor. [00131] In some embodiments, instructions that benefit from highly parallel, throughput processors may be performed by the GPU, while instructions that benefit from the performance of processors that benefit from deeply pipelined architectures may be performed by the CPU. For example, graphics, scientific applications, financial applications and other parallel workloads may benefit from the performance of the GPU and be executed accordingly, whereas more sequential applications, such as operating system kernel or application code, may be better suited for the CPU. [00132] In Figure 10, processor 1000 includes a CPU 1005, GPU 1010, image processor 1015, video processor 1020, USB controller 1025, UART controller 1030, SPI/SDIO controller 1035, display device 1040, High-Definition Multimedia Interface (HDMI) controller 1045, MIPI controller 1050, flash memory controller 1055, dual data rate (DDR) controller 1060, security engine 1065, and I2S/I2C (Integrated Interchip Sound/Inter-Integrated Circuit) interface 1070. Other logic and circuits may be included in the processor of Figure 10, including more CPUs or GPUs and other peripheral interface controllers. [00133] One or more aspects of at least one embodiment may be implemented by representative data stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium ("tape") and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor. For example, IP cores, such as the Cortex™ family of processors developed by ARM Holdings, Ltd.
and Loongson IP cores developed by the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences may be licensed or sold to various customers or licensees, such as Texas Instruments, Qualcomm, Apple, or Samsung and implemented in processors produced by these customers or licensees. [00134] Figure 11 shows a block diagram illustrating the development of IP cores according to one embodiment. Storage 1130 includes simulation software 1120 and/or hardware or software model 1110. In one embodiment, the data representing the IP core design can be provided to the storage 1130 via memory 1140 (e.g., hard disk), wired connection (e.g., internet) 1150 or wireless connection 1160. The IP core information generated by the simulation tool and model can then be transmitted to a fabrication facility where it can be fabricated by a third party to perform at least one instruction in accordance with at least one embodiment. [00135] In some embodiments, one or more instructions may correspond to a first type or architecture (e.g., x86) and be translated or emulated on a processor of a different type or architecture (e.g., ARM). An instruction, according to one embodiment, may therefore be performed on any processor or processor type, including ARM, x86, MIPS, a GPU, or other processor type or architecture. [00136] Figure 12 illustrates how an instruction of a first type is emulated by a processor of a different type, according to one embodiment. In Figure 12, program 1205 contains some instructions that may perform the same or substantially the same function as an instruction according to one embodiment. However, the instructions of program 1205 may be of a type and/or format that is different from or incompatible with processor 1215, meaning the instructions of the type in program 1205 may not be able to be executed natively by the processor 1215. However, with the help of emulation logic 1210, the instructions of program 1205 are translated into instructions that are natively capable of being executed by the processor 1215. In one embodiment, the emulation logic is embodied in hardware. In another embodiment, the emulation logic is embodied in a tangible, machine-readable medium containing software to translate instructions of the type in the program 1205 into the type natively executable by the processor 1215. In other embodiments, emulation logic is a combination of fixed-function or programmable hardware and a program stored on a tangible, machine-readable medium. In one embodiment, the processor contains the emulation logic, whereas in other embodiments, the emulation logic exists outside of the processor and is provided by a third party. In one embodiment, the processor is capable of loading the emulation logic embodied in a tangible, machine-readable medium containing software by executing microcode or firmware contained in or associated with the processor. [00137] Figure 13 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof.
Figure 13 shows a program in a high level language 1302 may be compiled using an x86 compiler 1304 to generate x86 binary code 1306 that may be natively executed by a processor with at least one x86 instruction set core 1316. The processor with at least one x86 instruction set core 1316 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1304 represents a compiler that is operable to generate x86 binary code 1306 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1316. Similarly, Figure 13 shows the program in the high level language 1302 may be compiled using an alternative instruction set compiler 1308 to generate alternative instruction set binary code 1310 that may be natively executed by a processor without at least one x86 instruction set core 1314 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1312 is used to convert the x86 binary code 1306 into code that may be natively executed by the processor without an x86 instruction set core 1314. This converted code is not likely to be the same as the alternative instruction set binary code 1310 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1312 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1306. [00138] Figure 14A illustrates a diagram for one embodiment of an apparatus 1402 for execution of an instruction to provide SIMD SM4 cryptographic block cipher functionality. Apparatus 1402 comprises a first source data operand 1410 set of elements, a second source data operand 1420 set of elements, and one or more substitution function indicators in an 8-bit immediate operand 1430. In apparatus 1401 a first one or more SM4-round exchange of a portion (Xi - Xi+3) of the first source data operand 1410 set with a corresponding first one or more keys (RKi) from the second source data operand 1420 set is performed in response to a first indicator of the one or more substitution function indicators in immediate operand 1430 that indicates a block substitution function, T, at multiplexer 1412. The block substitution function, T, is a reversible mixer-substitution comprising a non-linear substitution, τ (tau), and a linear substitution, L, i.e. T(.)
= L(τ(.)), where L(B) = B ⊕ (B <<< 2) ⊕ (B <<< 10) ⊕ (B <<< 18) ⊕ (B <<< 24), and B = (b0, b1, b2, b3) = τ(a0, a1, a2, a3) = (Sbox(a0), Sbox(a1), Sbox(a2), Sbox(a3)), each of a0-a3 and b0-b3 having 8 bits, where the operation ⊕ represents a bitwise exclusive OR (XOR) and the operation <<< represents a left rotation. Further details of the Sbox function, constant parameters, key expansion and encryption, etc. may be found in "SM4 Encryption Algorithm for Wireless Networks," translated and typeset by Whitfield Diffie of Sun Microsystems and George Ledin of Sonoma State University, 15 May 2008, version 1.03, available on the world-wide-web at eprint.iacr.org/2008/329.pdf. [00139] In apparatus 1401 a first one or more SM4 key generation using said portion (RKi - RKi+3) of the first source data operand 1410 set with a corresponding first one or more constants (CKi) from the second source data operand 1420 set is performed in response to a second indicator of said one or more substitution function indicators in immediate operand 1430 that indicates a key substitution function, T', at multiplexer 1414. The key substitution function, T', is a reversible mixer-substitution comprising the same non-linear substitution, τ (tau), but a different linear substitution, L', i.e. T'(.) = L'(τ(.)), where L'(B) = B ⊕ (B <<< 13) ⊕ (B <<< 23), and B = (b0, b1, b2, b3) = τ(a0, a1, a2, a3) = (Sbox(a0), Sbox(a1), Sbox(a2), Sbox(a3)). [00140] It will be appreciated that the one or more substitution function indicators in immediate operand 1430 could be chosen in an alternative preferred embodiment to indicate block and key substitution functions, L and L', instead of T and T', respectively (e.g. as illustrated in the apparatus of processing block 1403), to provide a further reduction in circuitry without any architecturally visible changes being required to apparatus 1401 or apparatus 1402 or to the particular instruction to provide SIMD SM4 cryptographic block cipher functionality. The input to T and T' in apparatus 1401 is: Xi+1 ⊕ Xi+2 ⊕ Xi+3 ⊕ RKi and RKi+1 ⊕ RKi+2 ⊕ RKi+3 ⊕ CKi, respectively. The output of multiplexers 1412 and 1414 is then XORed with Xi and with RKi, respectively, to produce Xi+4 and RKi+4, respectively. According to one embodiment of apparatus 1401 a set of result elements 1440 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register (e.g. if only a single round is required, or if a micro-instruction is used to produce an intermediate result). [00141] In apparatus 1402 the set of result elements 1440 of the one or more SM4-round exchange and the one or more SM4 key generation is accessed (e.g. in a SIMD register) along with another source data operand 1420 set of elements, and one or more substitution function indicators in immediate operand 1430. In apparatus 1402 a second one or more SM4-round exchange of a portion (Xi+1 - Xi+4) of the set of result elements 1440 with a corresponding second one or more keys (RKi+1) from the source data operand 1420 set is performed in response to a third indicator of said one or more substitution function indicators in immediate operand 1430 that indicates a block substitution function, T, at multiplexer 1432. The input to T and T' in the second one or more SM4-round exchange of apparatus 1402 is: Xi+2 ⊕ Xi+3 ⊕ Xi+4 ⊕ RKi+1.
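For illustration, the T and T' mixer-substitutions defined above may be sketched in C as follows. This is a minimal sketch only: the 256-entry Sbox table is defined by the SM4 standard and is elided here, and the function names (matching the sketch given earlier with paragraph [0097]) are illustrative rather than part of any instruction definition.

    #include <stdint.h>

    /* 256-entry non-linear S-box defined by the SM4 standard; the table
       values are elided here (see the reference cited above). */
    extern const uint8_t SBOX[256];

    static uint32_t rotl32(uint32_t x, int n) { return (x << n) | (x >> (32 - n)); }

    /* tau: apply the S-box to each byte of a 32-bit word. */
    static uint32_t tau(uint32_t a)
    {
        return ((uint32_t)SBOX[(a >> 24) & 0xff] << 24) |
               ((uint32_t)SBOX[(a >> 16) & 0xff] << 16) |
               ((uint32_t)SBOX[(a >>  8) & 0xff] <<  8) |
               ((uint32_t)SBOX[ a        & 0xff]);
    }

    /* L(B) = B ^ (B <<< 2) ^ (B <<< 10) ^ (B <<< 18) ^ (B <<< 24) */
    static uint32_t L(uint32_t b)
    {
        return b ^ rotl32(b, 2) ^ rotl32(b, 10) ^ rotl32(b, 18) ^ rotl32(b, 24);
    }

    /* L'(B) = B ^ (B <<< 13) ^ (B <<< 23) */
    static uint32_t Lprime(uint32_t b)
    {
        return b ^ rotl32(b, 13) ^ rotl32(b, 23);
    }

    uint32_t sm4_T(uint32_t x)      { return L(tau(x));      } /* block substitution */
    uint32_t sm4_Tprime(uint32_t x) { return Lprime(tau(x)); } /* key substitution   */

    /* One SM4-round exchange: Xi+4 = Xi ^ T(Xi+1 ^ Xi+2 ^ Xi+3 ^ RKi). */
    uint32_t sm4_round_exchange(const uint32_t x[4], uint32_t rk)
    {
        return x[0] ^ sm4_T(x[1] ^ x[2] ^ x[3] ^ rk);
    }

    /* One SM4 key generation: RKi+4 = RKi ^ T'(RKi+1 ^ RKi+2 ^ RKi+3 ^ CKi). */
    uint32_t sm4_key_generation(const uint32_t rk[4], uint32_t ck)
    {
        return rk[0] ^ sm4_Tprime(rk[1] ^ rk[2] ^ rk[3] ^ ck);
    }

Note that T and T' share the same non-linear substitution τ and differ only in the linear substitution applied afterwards; a hardware implementation may therefore compute τ once and select only between L and L', which is the circuit reduction exploited by processing block 1403 described below.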
[00142] In processing block 1403 of apparatus 1402 a second one or more SM4 key generation using said portion (RKi+1 - RKi+4) of the set of result elements 1440 with a corresponding second one or more constants (CKi+1) from the source data operand 1420 set is performed in response to a fourth indicator of said one or more substitution function indicators in immediate operand 1430 that indicates the key substitution function, L', at multiplexer 1434. The input to τ 1433 in the apparatus of processing block 1403 is: RKi+2 ⊕ RKi+3 ⊕ RKi+4 ⊕ CKi+1 (e.g. as shown at XOR circuit 1431). The output of τ 1433 in apparatus 1403 is input to L 1435 and L' 1436. The selected output of multiplexers 1432 and 1434 is then XORed with Xi+1 and with RKi+1 (e.g. as shown at XOR circuit 1437 of processing block 1403) to produce Xi+5 and RKi+5, respectively. According to one embodiment of apparatus 1402 another set of result elements 1450 of the second one or more SM4-round exchange and the second one or more SM4 key generation may be stored in a SIMD register (e.g. if only two rounds are required, or if another micro-instruction is used to produce another intermediate result). [00143] It will be appreciated that by performing both SM4-round exchanges and SM4 key generations with the same SIMD instruction, encryption or decryption may be processed concurrently with their respective subsequent key expansion in a small buffer (e.g. 256 bits). Since 128 bits (e.g. four 32-bit word elements) are required for each new round exchange, or key generation, the newest 128-bit results formed in each round may be pipelined or bypassed to the next consecutive round. In some embodiments a slice may comprise two rounds of SM4-round exchanges and two rounds of SM4 key generations. For such embodiments, thirty-two rounds of SM4-round exchanges and SM4 key generations may be performed using sixteen (or seventeen) SM4 round slice operations. In some embodiments each 128-bit lane of a 256-bit data path or of a 512-bit data path may be selected for processing a slice of SM4-round exchanges or for processing a slice of SM4 key generations based on a corresponding value in an immediate operand of the instruction that indicates a particular substitution function (e.g. T or T', or alternatively L or L'). In alternative embodiments 128-bit lanes of a 256-bit data path or of a 512-bit data path may be determined for processing a slice of SM4-round exchanges or for processing a slice of SM4 key generations based on a mnemonic or operation encoding (or opcode) of the instruction. It will also be appreciated that the SM4 algorithm's encryption and decryption methods have the same structure, except that the order in which the round keys are used is reversed. For example, the key ordering for encryption is (RK0, RK1, RK2, ..., RK31) whereas the key ordering for decryption is (RK31, RK30, RK29, ..., RK0).
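As a small illustration of the last point, software that drives such an instruction needs only to reverse the order of the expanded round keys to decrypt; the round logic itself is unchanged. A minimal sketch:

    #include <stdint.h>

    /* Decryption reuses the encryption round structure with the expanded
       round keys consumed in reverse order: rk31, rk30, ..., rk0. */
    void sm4_reverse_key_order(uint32_t dec_rk[32], const uint32_t enc_rk[32])
    {
        for (int r = 0; r < 32; ++r)
            dec_rk[r] = enc_rk[31 - r];
    }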
[00144] Figure 14B illustrates a diagram for an alternative embodiment of an apparatus 1404 for execution of an instruction to provide SIMD SM4 cryptographic block cipher functionality. Apparatus 1404 comprises a first source data operand 1410 set of elements, a second source data operand 1420 set of elements, and one or more substitution function indicators in an 8-bit immediate operand 1430. In this alternative embodiment of apparatus 1401 a first one or more SM4-round exchange of a portion (Xi - Xi+3) of the first source data operand 1410 set with a corresponding first one or more keys (RKi) from the second source data operand 1420 set is performed in response to a first indicator of the one or more substitution function indicators in immediate operand 1430 that indicates a block substitution function as input to a corresponding processing block 1403. In this alternative embodiment of apparatus 1401 a first one or more SM4 key generation using said portion (RKi - RKi+3) of the first source data operand 1410 set with a corresponding first one or more constants (CKi) from the second source data operand 1420 set is performed in response to a second indicator of said one or more substitution function indicators in immediate operand 1430 that indicates a key substitution function as input to a second corresponding processing block 1403. According to one alternative embodiment of apparatus 1401 a set of result elements 1440 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register (e.g. if only a single round is required, or if a micro-instruction is used to produce an intermediate result). In other alternative embodiments of apparatus 1401 a set of result elements 1440 of the one or more SM4-round exchange and the one or more SM4 key generation may be latched for bypass to, or stored in, a temporary intermediate storage for additional processing layers. For example, some embodiments of apparatus 1401 may also comprise an intermediate source data operand 1440 set of elements. In apparatus 1404 a second one or more SM4-round exchange of a portion (Xi+1 - Xi+4) of the intermediate source data operand 1440 set with a corresponding second one or more keys (RKi+1) from the second source data operand 1420 set is performed in response to the first indicator of the one or more substitution function indicators in immediate operand 1430 that indicates a block substitution function as input to another corresponding processing block 1403. In one embodiment of apparatus 1404 a one (1) value in the first indicator indicates that a block substitution function is to be used on the corresponding 128-bit lane. In apparatus 1404 a second one or more SM4 key generation using the portion (RKi+1 - RKi+4) of the intermediate source data operand 1440 set with a corresponding second one or more constants (CKi+1) from the second source data operand 1420 set is performed in response to the second indicator of said one or more substitution function indicators in immediate operand 1430 that indicates a key substitution function as input to another second corresponding processing block 1403. In one embodiment of apparatus 1404 a zero (0) value in the second indicator indicates that a key substitution function is to be used on the corresponding 128-bit lane. According to one embodiment of apparatus 1404 a set of result elements 1450 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register (e.g. if only two rounds are required, or if a micro-instruction is used to produce an intermediate result).
In alternative embodiments of apparatus 1404 the set of result elements 1450 of the one or more SM4-round exchange and the one or more SM4 key generation may be latched for bypass to, or stored in, a temporary intermediate storage for additional processing layers. [00145] For example, embodiments of apparatus 1404 may also comprise a second intermediate source data operand 1450 set of elements. In apparatus 1404 a third one or more SM4-round exchange of a portion (Xi+2 - Xi+5) of the intermediate source data operand 1450 set with a corresponding third one or more keys (RKi+2) from the second source data operand 1420 set is performed in response to the first indicator of the one or more substitution function indicators in immediate operand 1430 that indicates a block substitution function as input to yet another corresponding processing block 1403. Also in apparatus 1404 a third one or more SM4 key generation using a portion (RKi+2 - RKi+5) of the intermediate source data operand 1450 set with a corresponding third one or more constants (CKi+2) from the second source data operand 1420 set is performed in response to the second indicator of said one or more substitution function indicators in immediate operand 1430 that indicates a key substitution function as input to yet another second corresponding processing block 1403. According to one embodiment of apparatus 1404 a set of result elements 1460 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register (e.g. if only three rounds are required, or if a micro-instruction is used to produce an intermediate result). In alternative embodiments of apparatus 1404 the set of result elements 1460 of the one or more SM4-round exchange and the one or more SM4 key generation may again be latched for bypass to, or stored in, a temporary intermediate storage for additional processing layers. Thus embodiments of apparatus 1404 may also comprise a third intermediate source data operand 1460 set of elements. In such embodiments of apparatus 1404 a fourth one or more SM4-round exchange of a portion (Xi+3 - Xi+6) of the intermediate source data operand 1460 set with a corresponding fourth one or more keys (RKi+3) from the second source data operand 1420 set is performed in response to the first indicator of the one or more substitution function indicators in immediate operand 1430 that indicates a block substitution function as input to yet another corresponding processing block 1403. Also in apparatus 1404 a fourth one or more SM4 key generation using a portion (RKi+3 - RKi+6) of the intermediate source data operand 1460 set with a corresponding fourth one or more constants (CKi+3) from the second source data operand 1420 set is performed in response to the second indicator of said one or more substitution function indicators in immediate operand 1430 that indicates a key substitution function as input to yet another second corresponding processing block 1403. According to one embodiment of apparatus 1404 a set of result elements 1470 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register (e.g. if only four rounds are required, or if a micro-instruction is used to produce an intermediate result).
In alternative embodiments of apparatus 1404 the set of result elements 1470 of the one or more SM4-round exchange and the one or more SM4 key generation may be latched for bypass to, or stored in, a temporary intermediate storage for additional processing layers. [00146] It will be appreciated that in some embodiments a slice may comprise four rounds of SM4-round exchanges and four rounds of SM4 key generations. For such embodiments, thirty-two rounds of SM4-round exchanges and SM4 key generations may be performed using eight (or nine) SM4 round slice operations. In some embodiments each 128-bit lane of a 256-bit data path or of a 512-bit data path may be selected for processing a slice of SM4-round exchanges or for processing a slice of SM4 key generations based on a corresponding value in an immediate operand of the instruction. In some alternative embodiments the lanes of a data path for processing a slice of SM4-round exchanges and for processing a slice of SM4 key generations may be predetermined and/or fixed according to the operation code (or opcode). [00147] It will also be appreciated that in some embodiments a slice may be implemented by micro-instructions (or micro-ops or u-ops) and results may be bypassed from one micro-instruction to the next micro-instruction. In some alternative embodiments a slice may be implemented by multiple layers (e.g. two, or four, or eight, etc.) of logic in hardware, or alternatively by some combination of micro-instructions and multiple layers of logic in hardware. In some embodiments a slice may comprise a number of rounds (e.g. one, two, four, eight, sixteen, or thirty-two) of SM4-round exchanges and SM4 key generations indicated by a value in an immediate operand of the instruction. In some alternative embodiments the number of rounds in a slice may be indicated by the instruction mnemonic and/or by an operation encoding (or opcode). [00148] Figure 14C illustrates a diagram for another alternative embodiment of an apparatus 1406 for execution of an instruction to provide SIMD SM4 cryptographic block cipher functionality. Apparatus 1406 comprises a first source data operand 1410 set of elements, a second source data operand 1420 set of elements, and one or more substitution function indicators (e.g. optionally in an 8-bit immediate operand 1430). In one embodiment of an apparatus 1405 a portion (Xi - Xi+3) is first selected according to an operand selection control 1451 from the first source data operand 1410 set with a corresponding first one or more keys (RKi) selected according to an element selection control 1457 from the second source data operand 1420 set for performing a first one or more SM4-round exchange in response to a first indicator of the one or more substitution function indicators 1452 in control block 1455 (and/or optionally also in an optional immediate operand 1430) that indicates a block substitution function as input to a corresponding processing block 1403.
In this embodiment of apparatus 1405 a portion (RKi - RKi+3) may be first selected according to an operand selection control 1451 from the first source data operand 1410 set with a corresponding first one or more constants (CKi) selected according to an element selection control 1457 from the second source data operand 1420 set for performing a first one or more SM4 key generation in response to a second indicator of said one or more substitution function indicators 1452 in control block 1455 (and/or optionally also in an optional immediate operand 1430) that indicates a key substitution function as input to a second corresponding processing block 1403. According to one alternative embodiment of apparatus 1406 a set of result elements 1480 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register 1490 (e.g. if the required number of rounds for a slice has completed, or if a micro-instruction is used to produce an intermediate result). In other alternative embodiments of apparatus 1406 a set of result elements 1480 of the one or more SM4-round exchange and the one or more SM4 key generation may be latched for bypass 1453 to, or stored in, a temporary intermediate storage for additional processing layers. [00149] For example, embodiments of apparatus 1406 may also comprise an intermediate source data operand 1480 set of elements. In apparatus 1405 a subsequent portion (Xj+1 - Xj+4) is selected according to operand selection control 1451 from the intermediate source data operand 1480 set with a corresponding subsequent one or more keys (RKi+j+1) selected according to an element selection control 1457 from the second source data operand 1420 set for performing a subsequent one or more SM4-round exchange in response to the first indicator of the one or more substitution function indicators 1452 in control block 1455 (and/or optionally also in an optional immediate operand 1430) that indicates a block substitution function as input to the corresponding processing block 1403. In one embodiment of apparatus 1405 a one (1) value in the first indicator indicates that a block substitution function is to be used on the corresponding 128-bit lane. In one embodiment of apparatus 1406 a value (which may be different than one) in the first indicator of the one or more substitution function indicators 1452 indicates that a block substitution function is to be used on the corresponding 128-bit lane, optionally in response to a corresponding value (which may be one (1) or may be different than one) in an immediate operand 1430. In apparatus 1405 a subsequent portion (RKj+1 - RKj+4) is selected according to operand selection control 1451 from the intermediate source data operand 1480 set along with a corresponding subsequent one or more constants (CKi+j+1) selected according to an element selection control 1457 from the second source data operand 1420 set for performing a subsequent one or more SM4 key generation in response to the second indicator of said one or more substitution function indicators 1452 in control block 1455 (and/or optionally also in an optional immediate operand 1430) that indicates a key substitution function as input to the second corresponding processing block 1403. In one embodiment of apparatus 1405 a zero (0) value in the second indicator indicates that a key substitution function is to be used on the corresponding 128-bit lane.
In one embodiment of apparatus 1406 a value (which may be different than zero) in the second indicator of the one or more substitution function indicators 1452 indicates that a key substitution function is to be used on the corresponding 128-bit lane, optionally in response to a corresponding value (which may be zero (0) or may be different than zero) in an immediate operand 1430. According to one embodiment of apparatus 1406 a set of result elements 1480 of the one or more SM4-round exchange and the one or more SM4 key generation may be stored in a SIMD register 1490 (e.g. when the required number of rounds for a slice has completed). [00150] Figure 15A illustrates a flow diagram for one embodiment of a process 1501 for execution of an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. Process 1501 and other processes herein disclosed are performed by processing blocks that may comprise dedicated hardware or software or firmware operation codes executable by general purpose machines or by special purpose machines or by a combination of both. [00151] In processing block 1531 an instruction is decoded for a SIMD SM4 round slice of operations, the instruction specifying block and/or key operations. For example, embodiments of the instruction may specify a first source data operand set, a second source data operand set, and one or more substitution function indicators, wherein the substitution function indicators may be chosen to specify either block or key operations on respective portions (e.g. such as 128-bit lanes) of the first and second source data operand sets (e.g. which could be stored in 256-bit or 512-bit SIMD registers). Responsive to the decoded instruction, a plurality of micro-instructions (or micro-ops, or uops) may optionally be generated in processing block 1536 (e.g. to perform individual rounds of the slice, or alternatively to perform either the specified block or the specified key operations). In processing block 1541 the first source data operand set is accessed (e.g. from a 256-bit or 512-bit SIMD register). In processing block 1551 the second source data operand set is accessed (e.g. from a 256-bit or 512-bit SIMD register or memory location). In processing block 1561 a SM4-round exchange is performed on a portion of the first source data operand set associated with the specified block operations and a corresponding one or more keys from the second source data operand set in response to an indicator of the one or more substitution function indicators that indicates (e.g. by a bit in an immediate operand having a first value) a substitution function for block operations. In processing block 1571 a SM4 key generation is performed using a second portion of the first source data operand set associated with the specified key operations and a corresponding one or more constants from the second source data operand set in response to another indicator of the one or more substitution function indicators that indicates (e.g. by a bit in an immediate operand having a second value) a substitution function for key operations. In processing block 1581, a determination is made as to whether or not all SM4 round operations of the slice have finished. If not, processing reiterates beginning in processing block 1541. Otherwise processing proceeds to processing block 1591 where a set of result elements of the instruction are stored in a SIMD destination register.
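For illustration, the per-lane selection in processing blocks 1561 and 1571 may be modeled in C as below, with each 128-bit lane of the first source operand treated as four 32-bit words and one immediate bit per lane choosing between block and key operations. The helper names reuse the sketches given earlier and are illustrative only, not an actual instruction interface.

    #include <stdint.h>

    extern uint32_t sm4_T(uint32_t x);       /* block substitution T */
    extern uint32_t sm4_Tprime(uint32_t x);  /* key substitution T'  */

    /* One round over `lanes` 128-bit lanes (4 words each). Bit j of imm8
     * selects the substitution function for lane j: 1 = SM4-round exchange
     * with a key from src2, 0 = SM4 key generation with a constant from src2. */
    void sm4_round_lanes(uint32_t dst[][4], const uint32_t src1[][4],
                         const uint32_t src2[], int lanes, uint8_t imm8)
    {
        for (int j = 0; j < lanes; ++j) {
            /* copy the lane first so dst may alias src1 (in-place update) */
            uint32_t w0 = src1[j][0], w1 = src1[j][1];
            uint32_t w2 = src1[j][2], w3 = src1[j][3];
            uint32_t mixed = w1 ^ w2 ^ w3 ^ src2[j];
            uint32_t subst = ((imm8 >> j) & 1) ? sm4_T(mixed) : sm4_Tprime(mixed);
            /* shift the lane window and append the newest word */
            dst[j][0] = w1; dst[j][1] = w2; dst[j][2] = w3;
            dst[j][3] = w0 ^ subst;
        }
    }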
[00152] It will be appreciated that while the processing blocks of process 1501 and other processes herein disclosed are illustrated as being executed in an iterative fashion, execution in alternative orders, or concurrently, or in parallel may preferably be performed whenever possible. [00153] Figure 15B illustrates a flow diagram for an alternative embodiment of a process 1502 for execution of an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. In processing block 1532 an instruction is decoded for a SIMD SM4 round slice of operations, the instruction specifying a set of substitution functions (e.g. for block and/or key operations). For example, embodiments of the instruction may specify a first source data operand set, a second source data operand set, and one or more substitution function indicators, wherein the substitution function indicators may be chosen to specify either block or key operations on respective portions (e.g. such as 128-bit lanes) of the first and second source data operand sets (e.g. which could be stored in 256-bit or 512-bit SIMD registers). Responsive to the decoded instruction, a plurality of micro-instructions (or micro-ops, or uops) may optionally be generated in processing block 1537 (e.g. to perform individual rounds of the slice, or alternatively to perform either the specified block or the specified key operations). In processing block 1542 the first source data operand set is accessed (e.g. from a 256-bit or 512-bit SIMD register). In processing block 1552 the second source data operand set is accessed (e.g. from a 256-bit or 512-bit SIMD register or memory location). In processing block 1562 one or more SM4-round exchanges are performed on a portion of the first source data operand set associated with a first substitution function and corresponding one or more keys from the second source data operand set in response to an indicator of the one or more substitution function indicators that indicates (e.g. by a bit in an immediate operand having a first value) that the first substitution function is for block operations. In processing block 1572 one or more SM4 key generations are performed using a portion of the first source data operand set associated with a second substitution function and corresponding one or more constants from the second source data operand set in response to another indicator of the one or more substitution function indicators that indicates (e.g. by a bit in an immediate operand having a second value) that the second substitution function is for key operations. In processing block 1582, a determination is made as to whether or not the SM4 round slice of operations has finished. If not, processing reiterates beginning in processing block 1562. Otherwise processing proceeds to processing block 1592 where a set of result elements of the instruction are stored in a SIMD destination register. [00154] Figure 15C illustrates a flow diagram for another alternative embodiment of a process 1503 for execution of an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. In processing block 1513 a first source data operand set including one or more input block and/or key schedule is stored in a first SIMD register (e.g. a 256-bit or 512-bit SIMD register).
In processing block 1523 a second source data operand set including one or more set of round keys and/or constants is stored in a second SIMD register (e.g. a 256-bit or 512-bit SIMD register). In processing block 1533 an instruction is received for a SIMD SM4 round slice of operations, the instruction specifying a set of substitution functions (e.g. for block and/or key operations). For example, embodiments of the instruction may specify the first source data operand set, the second source data operand set, and one or more substitution function indicators, wherein some embodiments of the substitution function indicators may be chosen to specify either block or key operations on respective portions (e.g. such as 128-bit lanes) of the first and second source data operand sets. In process 1504 responsive to the instruction for a SIMD SM4 round slice of operations, a plurality of micro-instructions (or micro-ops, or uops) may optionally be generated in processing block 1538 (e.g. to perform individual rounds of the slice, or alternatively to perform either the specified block or the specified key operations). In processing block 1563 of process 1504 one or more SM4-round exchanges are performed on respective lanes of the first source data operand set associated with a first substitution function and corresponding one or more keys from the second source data operand set in response to one or more substitution function indicators that indicate (e.g. by bits in an immediate operand having a first value) the first substitution function for block operations. In processing block 1573 of process 1504 one or more SM4 key generations are performed using respective lanes of the first source data operand set associated with a second substitution function and corresponding one or more constants from the second source data operand set in response to one or more substitution function indicators that indicate (e.g. by respective bits in an immediate operand having a second value) the second substitution function for key operations. In processing block 1583 of process 1504, a determination is made as to whether or not the SM4 round slice of operations has finished. If not, processing reiterates beginning in processing block 1563. Otherwise processing proceeds to processing block 1593 of process 1504 where a set of result elements of the instruction are stored in a SIMD destination register. [00155] Figure 16A illustrates a flow diagram for one embodiment of a process 1601 for efficiently implementing the SM4 cryptographic block cipher (e.g. for encryption) using an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. In processing block 1610 a first source operand set (e.g. containing initial key values derived from a 128-bit encryption key) is stored in a first SIMD register. In processing block 1620 a second source operand set (e.g. containing constant parameter values CK0-CK3) is stored in a second SIMD register. It will be appreciated that initial preparation (not shown) of the first source operand set and of the second source operand set are performed according to the definition of the SM4 Encryption Algorithm for Wireless Networks standard (English description available on the world-wide-web at eprint.iacr.org/2008/329.pdf). In processing block 1630 SM4 key schedules (e.g. RK0-RK3) are generated using one or more lanes of the first source operand set associated with a key substitution function and corresponding constants from the second source operand set (e.g.
CK0-CK3 from one or more corresponding 128-bit lanes of the second SIMD register). In processing block 1640 a new first source operand (e.g. 1410) set is stored in a third SIMD register (which may be the same register as the first SIMD register in some embodiments). In processing block 1650 a new second source operand (e.g. 1420) set is stored in a fourth SIMD register (which may be the same register as the second SIMD register in some embodiments). It will be appreciated that processing block 1650 may be accomplished by use of one or more instructions to rearrange SM4 key schedules (e.g. RKi+4 - RKi+7 from operand 1470) and corresponding constants (e.g. CKi+4 - CKi+7 from memory), such as an instruction to permute elements, shuffle elements, blend elements, etc., into the new second source operand (e.g. 1420) set for processing in process 1603 by executing an instruction to perform a SIMD SM4 round slice of the SM4 cryptographic block cipher. [00156] In processing block 1660 of process 1603 SM4-round exchanges are performed on one or more lanes of the first source data operand (e.g. 1410) set associated with a specified block substitution function and corresponding one or more key schedules from the second source data operand (e.g. 1420) set in response to an indicator of one or more substitution function indicators that indicates (e.g. by a bit in immediate operand 1430 having a first value) a substitution function for block operations. In processing block 1670 SM4 key generations are performed using one or more lanes of the first source data operand (e.g. 1410) set associated with a specified key substitution function and corresponding one or more constants from the second source data operand (e.g. 1420) set in response to another indicator of the one or more substitution function indicators that indicates (e.g. by a bit in immediate operand 1430 having a second value) a substitution function for key operations. In processing block 1680 a set of result elements of the instruction are stored in a SIMD destination operand (e.g. 1470) in a SIMD register. [00157] In processing block 1690, a determination is made as to whether or not all of the SM4 round slices have finished. If not, processing reiterates beginning in processing block 1640. Otherwise process 1601 ends in processing block 1699. [00158] Figure 16B illustrates a flow diagram for an alternative embodiment of a process 1602 for efficiently implementing the SM4 cryptographic block cipher using an instruction to provide a SIMD SM4 round slice of cryptographic block cipher functionality. It will be appreciated that initial preparation (not shown) of the first source operand set and of the second source operand set are performed according to the definition of the SM4 Encryption Algorithm. In processing block 1610 a first source operand set (e.g. containing initial key values derived from a 128-bit encryption key) is stored in a first SIMD register. In processing block 1620 a second source operand set (e.g. containing constant parameter values CK0-CK3) is stored in a second SIMD register. In processing block 1630 SM4 key schedules (e.g. RK0-RK3) are generated using one or more lanes of the first source operand set associated with a key substitution function and corresponding constants from the second source operand set (e.g. CK0-CK3 from one or more corresponding 128-bit lanes of the second SIMD register). In processing block 1640 a new first source operand (e.g.
1410) set is stored in a third SIMD register (which may or may not be the same register as the first SIMD register in some embodiments). In processing block 1650 a new second source operand (e.g. 1420) set is stored in a fourth SIMD register (which may or may not be the same register as the second SIMD register in some embodiments). It will be appreciated that processing blocks 1640 and/or 1650 may be accomplished by use of one or more instructions to rearrange SM4 input blocks and/or key schedules (e.g. X0-X3 and/or RKi-RKi+3 from operand 1470) and corresponding constants (e.g. CKi-CKi+3 from memory), such as an instruction to permute elements, shuffle elements, blend elements, etc., into the new second source operand (e.g. 1420) set for processing in process 1604 by executing an instruction to perform a SIMD SM4 round slice of the SM4 cryptographic block cipher.[00159] In processing block 1660 of process 1604 SM4-round exchanges are performed on one or more lanes of the first source data operand (e.g. 1410) set associated with a specified block substitution function and corresponding one or more key schedules from the second source data operand (e.g. 1420) set in response to an indicator of one or more substitution function indicators that indicates (e.g. by a bit, in the opcode or in immediate operand 1430, having a first value) a substitution function for block operations. In processing block 1670 SM4 key generations are performed using one or more lanes of the first source data operand (e.g. 1410) set associated with a specified key substitution function and corresponding one or more constants from the second source data operand (e.g. 1420) set in response to another indicator of the one or more substitution function indicators that indicates (e.g. by a bit, in the opcode or in immediate operand 1430, having a second value) a substitution function for key operations. In processing block 1682 a set of result elements of the instruction are stored in a destination and a new first source operand (e.g. 1470) set in the third SIMD register.[00160] In processing block 1690, a determination is made as to whether or not all of the SM4 round slices have finished. If not, processing reiterates beginning in processing block 1650. Otherwise, process 1602 ends in processing block 1699.[00161] Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.[00162] Program code may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.[00163] The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. 
In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.[00164] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.[00165] Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. [00166] Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.[00167] In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.[00168] Thus, techniques for performing one or more instructions according to at least one embodiment are disclosed. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.
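To make the round-slice flow of Figures 15 and 16 concrete, the following is a minimal Python sketch of one SM4 round slice (four rounds) on a single 128-bit lane, following the round structure defined in the SM4 standard referenced above. It models the technique of processing blocks 1660-1680, not the instruction's actual hardware implementation: the function names are hypothetical, and SBOX is a placeholder identity table that must be replaced with the 256-entry S-box from the SM4 standard for a conforming result.

```python
# Hedged sketch of one SM4 round slice (4 rounds) on one 128-bit lane.
# SBOX is a PLACEHOLDER (identity table); substitute the real 256-entry
# SM4 S-box from the standard for a conforming implementation.
SBOX = list(range(256))

def _rotl32(x, n):
    # 32-bit left rotation.
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def _tau(word):
    # Byte-wise S-box substitution of a 32-bit word.
    return int.from_bytes(bytes(SBOX[b] for b in word.to_bytes(4, "big")), "big")

def _t_block(word):
    # Block-operation substitution function: tau followed by linear map L.
    b = _tau(word)
    return b ^ _rotl32(b, 2) ^ _rotl32(b, 10) ^ _rotl32(b, 18) ^ _rotl32(b, 24)

def _t_key(word):
    # Key-operation substitution function: tau followed by linear map L'.
    b = _tau(word)
    return b ^ _rotl32(b, 13) ^ _rotl32(b, 23)

def sm4_round_slice(state, keys_or_consts, key_op):
    """state: [X0..X3] (or [K0..K3]); keys_or_consts: four round keys
    RKi..RKi+3 (block operation) or four constants CKi..CKi+3 (key
    operation); key_op mirrors the substitution function indicator bit
    in the immediate operand."""
    t = _t_key if key_op else _t_block
    x = list(state)
    for rk in keys_or_consts:
        # One SM4-round exchange: X4 = X0 ^ T(X1 ^ X2 ^ X3 ^ rk).
        x.append(x[-4] ^ t(x[-3] ^ x[-2] ^ x[-1] ^ rk))
        x.pop(0)
    return x  # the new [X4..X7] lane, i.e. the slice result
```

A SIMD implementation as described above would apply this per-lane computation to each 128-bit lane in parallel, with the indicator bits selecting the block or key substitution function lane by lane.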
A method, system, apparatus, and computer program product for optimizing power consumption in special media playback scenarios. The method includes identifying a scenario where decoding of a first portion of a multimedia stream can be interrupted; and interrupting the decoding of the first portion of the multimedia stream while continuing to decode a second portion of the multimedia stream. The first portion may be a video stream and the second portion may be an audio stream, and the scenario may include a playback window for the video stream being hidden. The first portion may be an audio stream and the second portion may be a video stream, and the scenario may include the audio stream being muted. The method may further include determining that the scenario has changed and resuming decoding of the first portion of the multimedia stream.
What is claimed is: 1. A computer-implemented method comprising: identifying a scenario where decoding of a first portion of a multimedia stream can be interrupted; interrupting the decoding of the first portion of the multimedia stream while continuing to decode a second portion of the multimedia stream. 2. The method of claim 1 wherein the first portion is a video stream and the second portion is an audio stream; and the scenario includes a playback application for the video stream being hidden. 3. The method of claim 1 wherein the first portion is an audio stream and the second portion is a video stream; and the scenario includes the audio stream being muted. 4. The method of claim 1 further comprising: determining that the scenario has changed; and resuming decoding of the first portion of the multimedia stream. 5. The method of claim 4 wherein resuming decoding of the first portion of the multimedia stream comprises: identifying a first frame currently being decoded in the second portion of the multimedia stream; identifying a second frame in the first portion of the multimedia stream, the second frame corresponding to the first frame; and resuming rendering of the first portion of the multimedia stream with the second frame. 6. A system comprising: at least one processor; and a memory coupled to the at least one processor, the memory comprising instructions for performing the following: identifying a scenario where decoding of a first portion of a multimedia stream can be interrupted; interrupting the decoding of the first portion of the multimedia stream while continuing to decode a second portion of the multimedia stream. 7. The system of claim 6 wherein the first portion is a video stream and the second portion is an audio stream; and the scenario includes a playback application for the video stream being hidden. 8. The system of claim 6 wherein the first portion is an audio stream and the second portion is a video stream; and the scenario includes the audio stream being muted. 9. The system of claim 6 wherein the instructions further comprise instructions for performing the following: determining that the scenario has changed; and resuming decoding of the first portion of the multimedia stream. 10. The system of claim 9 wherein resuming decoding of the first portion of the multimedia stream comprises: identifying a first frame currently being decoded in the second portion of the multimedia stream; identifying a second frame in the first portion of the multimedia stream, the second frame corresponding to the first frame; and resuming rendering of the first portion of the multimedia stream with the second frame. 11. A computer program product comprising: a computer-readable storage medium; and instructions in the computer-readable storage medium, wherein the instructions, when executed in a processing system, cause the processing system to perform operations comprising: identifying a scenario where decoding of a first portion of a multimedia stream can be interrupted; interrupting the decoding of the first portion of the multimedia stream while continuing to decode a second portion of the multimedia stream. 12. The computer program product of claim 11 wherein the first portion is a video stream and the second portion is an audio stream; and the scenario includes a playback application for the video stream being hidden. 13. 
The computer program product of claim 11 wherein the first portion is an audio stream and the second portion is a video stream; and the scenario includes the audio stream being muted. 14. The computer program product of claim 11 wherein the instructions further cause the processing system to perform operations comprising: determining that the scenario has changed; and resuming decoding of the first portion of the multimedia stream. 15. The computer program product of claim 14 wherein resuming decoding of the first portion of the multimedia stream comprises: identifying a first frame currently being decoded in the second portion of the multimedia stream; identifying a second frame in the first portion of the multimedia stream, the second frame corresponding to the first frame; and resuming rendering of the first portion of the multimedia stream with the second frame.
POWER OPTIMIZATION FOR SPECIAL MEDIA PLAYBACK SCENARIOS Copyright Notice Contained herein is material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction of the patent disclosure by any person as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all rights to the copyright whatsoever. Technical Field The present disclosure relates generally to power optimization in computing devices. Background With the proliferation of mobile devices in today's society, applications running in mobile computing environments are increasing in number and sophistication. Users commonly watch television and/or movies as well as listen to music on their mobile devices, all applications that can require a substantial amount of power. With the limited battery life of many mobile devices and the high power demands of multimedia applications, a substantial amount of the power used by the mobile device is consumed by multimedia applications. Brief Description of the Drawings Fig. 1 is a block diagram of a system configured to enable power optimization for special media playback scenarios in accordance with one embodiment of the invention. Fig. 2 is a media pipeline showing data flows between components of the system of Fig. 1 during a normal playback scenario. Fig. 3 is a media pipeline showing data flows between components of the system of Fig. 1 during a playback scenario where the video playback application is overlaid by another application in accordance with one embodiment of the invention. Fig. 4 is a media pipeline showing data flows between components of the system of Fig. 1 during a playback scenario where the audio output is muted in accordance with one embodiment of the invention. Fig. 5 is a sequence diagram showing interaction between components of the system of Fig. 1 during a normal playback scenario. Fig. 6 is a sequence diagram showing interaction between components of the system of Fig. 1 during a playback scenario where the video playback application is overlaid by another application in accordance with one embodiment of the invention. Fig. 7 is a sequence diagram showing interaction between components of the system of Fig. 1 during a playback scenario where the audio output is muted in accordance with one embodiment of the invention. Fig. 8 is a sequence diagram showing interaction between components of the system of Fig. 1 during a playback scenario where the audio output is muted in accordance with another embodiment of the invention. Detailed Description Embodiments of the present invention may provide a method, apparatus, system, and computer program product for optimizing power consumption during special media playback scenarios. In one embodiment, the method includes identifying a scenario where decoding of a first portion of a multimedia stream can be interrupted; and interrupting the decoding of the first portion of the multimedia stream while continuing to decode a second portion of the multimedia stream. The first portion may be a video stream and the second portion may be an audio stream, and the scenario may include a playback window for the video stream being hidden. The first portion may be an audio stream and the second portion may be a video stream, and the scenario may include the audio stream being muted. The method may further include determining that the scenario has changed and resuming decoding of the first portion of the multimedia stream. 
The method may further include identifying a first frame currently being decoded in the second portion of the multimedia stream; identifying a second frame in the first portion of the multimedia stream, the second frame corresponding to the first frame; and resuming rendering of the first portion of the multimedia stream with the second frame. Reference in the specification to "one embodiment" or "an embodiment" of the present invention means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrases "in one embodiment," "according to one embodiment" or the like appearing in various places throughout the specification are not necessarily all referring to the same embodiment. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will be apparent to one of ordinary skill in the art that embodiments of the present invention may be practiced without the specific details presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the present invention. Various examples may be given throughout this description. These are merely descriptions of specific embodiments of the invention. The scope of the invention is not limited to the examples given. Fig. 1 is a block diagram of a system configured to enable power optimization for special media playback scenarios in accordance with one embodiment of the invention. System 100 includes a software environment having an application layer 110 and an operating system / runtime layer 150 and a hardware environment including a processor 160 and a memory 170. A user 102 of the system uses applications running on processor 160 in application layer 110, such as media application 120 and other applications 130. User 102 may shift focus from one application to another, thereby causing the active application to overlay an inactive application. For example, user 102 may play a video using media application 120, but make a word processing application active, thereby hiding the video application. User 102 may choose to continue to listen to the audio stream while working in the word processing application. In a normal playback scenario, the video stream would continue to be decoded along with the audio stream even though display of the video stream is inactive. In the embodiment shown in Fig. 1, this playback scenario where the video display is overlaid by another application can be detected and used to optimize power consumption in system 100. Operating system / runtime 150 detects scenarios where power consumption can be optimized. Policy data store 140 stores power optimization parameters that are configurable by user 102. One example of a power optimization parameter is an amount of time that a video playback application is overlaid by another application before switching to a power conservation mode that interrupts video decoding. For example, if the video playback application is overlaid by another application for 10 seconds, decoding of the video stream may be interrupted to save power. Another example of a power optimization parameter is an amount of time that audio is muted before switching to a power conservation mode that interrupts audio decoding. 
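As an illustration of how such parameters might be checked, the following is a minimal Python sketch. The PolicyStore class, the function names, and the timestamp handling are hypothetical stand-ins for policy data store 140 and its use; the 10-second defaults merely echo the example above.

```python
import time

# Hedged sketch of the configurable policy check described above.
class PolicyStore:
    """Hypothetical stand-in for policy data store 140."""
    def __init__(self, video_hidden_timeout_s=10.0, audio_muted_timeout_s=10.0):
        self.video_hidden_timeout_s = video_hidden_timeout_s
        self.audio_muted_timeout_s = audio_muted_timeout_s

def should_interrupt_video(policy, hidden_since):
    """True once the playback application has been hidden longer than
    the user-configured threshold, so video decoding may be interrupted."""
    return hidden_since is not None and \
        time.monotonic() - hidden_since >= policy.video_hidden_timeout_s

def should_interrupt_audio(policy, muted_since):
    """Analogous check for the audio-muted power conservation mode."""
    return muted_since is not None and \
        time.monotonic() - muted_since >= policy.audio_muted_timeout_s
```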
When operating system / runtime 150 detects a scenario where power consumption can be optimized, such as a video playback application being overlaid by another application, or muting of an audio stream, operating system / runtime 150 checks the policy data store 140 to determine whether to activate the policy. If the power optimization parameters of a policy are met, operating system / runtime 150 notifies the media application 120 to interrupt decoding of the applicable audio or video stream. In response to the notification by operating system / runtime 150, media application 120 interrupts decoding of the applicable audio or video stream. In one embodiment, interrupting decoding of the applicable audio or video stream includes turning off bitstream parsing and rendering as well. Referring to the hardware environment of system 100, processor 160 provides processing power to system 100 and may be a single-core or multi-core processor, and more than one processor may be included in system 100. Processor 160 may be connected to other components of system 100 via one or more system buses, communication pathways or mediums (not shown). Processor 160 runs host applications such as media application 120 and other applications 130 under the control of operating system / runtime layer 150. System 100 further includes memory devices such as memory 170. These memory devices may include random access memory (RAM) and read-only memory (ROM). For purposes of this disclosure, the term "ROM" may be used in general to refer to non-volatile memory devices such as erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash ROM, flash memory, etc. These memory devices may further include mass storage devices such as integrated drive electronics (IDE) hard drives, and/or other devices or media, such as floppy disks, optical storage, tapes, flash memory, memory sticks, digital video disks, biological storage, etc. Processor 160 may also be communicatively coupled to additional components, such as a display controller, small computer system interface (SCSI) controllers, network controllers, universal serial bus (USB) controllers, input devices such as a keyboard and mouse, etc. System 100 may also include one or more bridges or hubs, such as a memory controller hub, an input/output (I/O) controller hub, a PCI root bridge, etc., for communicatively coupling various system components. As used herein, the term "bus" may be used to refer to shared communication pathways, as well as point-to-point pathways. Some components of system 100 may be implemented as adapter cards with interfaces (e.g., a PCI connector) for communicating with a bus. In one embodiment, one or more devices may be implemented as embedded controllers, using components such as programmable or nonprogrammable logic devices or arrays, application-specific integrated circuits (ASICs), embedded computers, smart cards, and the like. As used herein, the terms "processing system" and "data processing system" are intended to broadly encompass a single machine, or a system of communicatively coupled machines or devices operating together. 
Example processing systems include, without limitation, distributed computing systems, supercomputers, high-performance computing systems, computing clusters, mainframe computers, mini-computers, client-server systems, personal computers, workstations, servers, portable computers, laptop computers, tablets, telephones, personal digital assistants (PDAs), handheld devices, entertainment devices such as audio and/or video devices, and other devices for processing or transmitting information. System 100 may be controlled, at least in part, by input from conventional input devices, such as keyboards, mice, etc., and/or by commands received from another machine, biometric feedback, or other input sources or signals. System 100 may utilize one or more connections to one or more remote data processing systems (not shown), such as through a network controller, a modem, or other communication ports or couplings. System 100 may be interconnected to other processing systems (not shown) by way of a physical and/or logical network, such as a local area network (LAN), a wide area network (WAN), an intranet, the Internet, etc. Communications involving a network may utilize various wired and/or wireless short range or long range carriers and protocols, including radio frequency (RF), satellite, microwave, Institute of Electrical and Electronics Engineers (IEEE) 802.11, Bluetooth, optical, infrared, cable, laser, etc. Fig. 2 is a media pipeline showing data flows between components of the media application of Fig. 1 during a normal playback scenario. Media source file 210 represents an input media stream that is received by a demultiplexor/splitter 220 component of media application 120 of Fig. 1. Demultiplexor/splitter 220 splits the input media stream into a video stream 221 and an audio stream 222. Video stream 221 is provided as input to a video decoder 230, which parses and decodes the bit stream and provides the decoded video bit stream 231 to video renderer 240, which renders the video output. From demultiplexor/splitter 220, audio stream 222 is provided as input to an audio decoder 250. The decoded output audio stream 251 is provided to a sound device 260. Fig. 3 is a media pipeline showing data flows between components of the media application of Fig. 1 during a playback scenario where the video playback application is overlapped by another application in accordance with one embodiment of the invention. Media source file 310 represents an input media stream that is received by a demultiplexor/splitter 320 component of media application 120 of Fig. 1. Demultiplexor/splitter 320 splits the input media stream into a video stream 321 and an audio stream 322. However, because the video playback application is overlaid by another application, demultiplexor/splitter 320 does not provide the video stream 321 to video decoder 330, and thus the video stream does not reach video renderer 340, so no video output is rendered. During the time that no video is being decoded, substantial power savings are possible by eliminating the CPU cycles for decoding and rendering the video. Although the video stream is not decoded, demultiplexor/splitter 320 continues to provide the audio stream 322 to an audio decoder 350. The decoded output audio stream 351 is provided to a sound device 360. 
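The gating performed by the demultiplexor in Figs. 2 through 4 can be sketched as follows. This is an illustrative model, not the media application's actual code; the demultiplexor, decoder, renderer, and packet objects and their method names are assumptions made for the example.

```python
# Hedged sketch of the pipeline gating shown in Figs. 2-4: the
# demultiplexor stops feeding a decoder whose output is not needed,
# saving the CPU cycles for decoding and rendering that branch.
def pump_media(source, demux, video_dec, video_rend, audio_dec, sound_dev,
               video_hidden=False, audio_muted=False):
    for packet in demux.split(source):
        if packet.is_video:
            if not video_hidden:          # Fig. 3: skip video when overlaid
                frame = video_dec.decode(packet)
                video_rend.render(frame)
        else:
            if not audio_muted:           # Fig. 4: skip audio when muted
                samples = audio_dec.decode(packet)
                sound_dev.play(samples)
```

In the normal scenario of Fig. 2 both flags are false and both branches run, matching the full pipeline described above.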
A simulation of the video playback application being overlaid by another application was performed on a WINDOWS® Vista system running an INTEL® Core2Duo™ 2.0 GHz processor with 3GB RAM, playing a media stream whose video stream was encoded in MPEG4-Part2 and whose audio stream was encoded in MP3. A one-minute playback scenario with both audio and video decoding was compared to a one-minute playback scenario with only audio decoding (where the video application was overlaid by another application). A 42% reduction in clocks per instruction retired (CPI) was found, which produced proportional savings in power consumed. Fig. 4 is a media pipeline showing data flows between components of the media application of Fig. 1 during a playback scenario where the audio output is muted in accordance with one embodiment of the invention. Media source file 410 represents an input media stream that is received by a demultiplexor/splitter 420 component of media application 120 of Fig. 1. Demultiplexor/splitter 420 splits the input media stream into a video stream 421 and an audio stream 422. The video stream 421 is provided as input to a video decoder 430, which parses and decodes the bit stream and provides the decoded bit stream 431 to video renderer 440, which renders the video output. However, demultiplexor/splitter 420 does not provide audio stream 422 as input to audio decoder 450, and no output audio stream is provided to sound device 460. Substantial power savings can be achieved by bypassing the CPU cycles to decode and render audio output. Fig. 5 is a sequence diagram showing interaction between components of the media application of Fig. 1 during a normal playback scenario. In action 5.1, an input media stream is provided to media player 510. In response to receiving the video clip, media player 510 calls audio decoder 520, providing a bit stream in action 5.2. In action 5.3, audio decoder 520 decodes the bit stream and renders the audio stream output on speakers 550. In action 5.4, media player 510 calls video decoder 530, providing the video stream. In action 5.5, video decoder 530 decodes and renders the video output stream on display 560. During all of this activity, OS services 540 monitors for a scenario in which power consumption can be optimized when the policy is active. The steps in Fig. 5 are repeated for all frames in the video clip. Audio and video decoding and rendering actions may happen in parallel; e.g., actions 5.2 and 5.3 may occur in parallel with actions 5.4 and 5.5. In addition, some audio or video frames may be decoded at the same time that other audio or video frames are being rendered; e.g., some frames may be decoded in step 5.2 (or 5.4) at the same time that other frames are being rendered in step 5.3 (or 5.5). Fig. 6 is a sequence diagram showing interaction between components of the media application of Fig. 1 during a playback scenario where the video playback application is overlaid by another application in accordance with one embodiment of the invention. In action 6.1, an input media stream is provided to media player 610. In response to receiving the video clip, media player 610 calls audio decoder 620, providing a bit stream in action 6.2. In action 6.3, audio decoder 620 decodes the bit stream and renders the audio stream output on speakers 650. In action 6.4, media player 610 calls video decoder 630, providing the video stream. In action 6.5, video decoder 630 decodes and renders the video output stream on display 660. 
During all of this activity, OS services 640 monitors for a scenario in which power consumption can be optimized. Up until this point, the normal playback scenario has been followed, as no opportunities to optimize power consumption have occurred. The steps in Fig. 6 are performed for all frames in the media clip. The audio and video steps in the figure happen in parallel. In action 6.6, OS services 640 identifies a scenario where the video playback application has been overlaid by another application. In action 6.7, OS services 640 sends an event PLAYBACK_APPLICATION_LOST_FOCUS to media player 610. In response to receiving the event, media player 610 interrupts decoding of the video stream to enter a power optimization mode. In action 6.8, media player 610 continues to send the audio stream to audio decoder 620 for decoding, and in action 6.9, audio decoder 620 renders the output audio stream on speakers 650. Audio-only playback continues until OS services 640 identifies a scenario where video decoding is again needed. In action 6.10, the user restores the focus on the video playback application. In response to detecting this event, in action 6.11, OS services 640 sends an event PLAYBACK_APPLICATION_FOCUS_REGAINED to media player 610. In response to receiving the event, media player 610 identifies the current frame being played in audio output by calling the GetReferenceFrames function with the CurrentFrame parameter. The currently active audio frame is used to identify the corresponding video frame and the associated reference frames for decoding the current video frame to place the video playback in synchronization with the audio playback. In action 6.13, all of the reference frames are sent from media player 610 to video decoder 630 for decoding. All of the reference frames are decoded in order to identify the reference frame corresponding to the current audio frame. Only the frames starting from the current video frame are displayed. Even though all of the reference frames must be decoded, only a limited number of reference frames are available. For example, under the H.264 standard, a maximum of 16 reference frames are available, such that a video clip running at 24 frames per second would require less than one second to decode the reference frames. Now that the audio and video streams are synchronized, normal playback resumes with the video playback application focused and non-muted audio. In action 6.14, media player 610 provides the audio stream to audio decoder 620, which decodes and renders the audio stream on speakers 650 in action 6.15. In action 6.16, media player 610 sends the video stream to video decoder 630 for decoding, and in action 6.17, video decoder 630 decodes and renders the video stream on display 660. Fig. 7 is a sequence diagram showing interaction between components of the media application of Fig. 1 during a playback scenario where the audio output is muted in accordance with one embodiment of the invention. In action 7.1, an input media stream is provided to media player 710 via a command PlayVideoClip(NoOfFrames). In response to receiving the video clip, media player 710 calls audio decoder 720, providing a bit stream in action 7.2. In action 7.3, audio decoder 720 decodes the bit stream and renders the audio stream output on speakers 750. In action 7.4, media player 710 calls video decoder 730, providing the video stream. In action 7.5, video decoder 730 decodes and renders the video output stream on display 760. 
During all of this activity, OS services 740 monitors for a scenario in which power consumption can be optimized. Up until this point, the normal playback scenario has been followed, as no opportunities to optimize power consumption have occurred. In action 7.6, OS services 740 identifies a scenario where the audio playback has been muted. In action 7.7, OS services 740 sends an event AUDIO_MUTED to media player 710. In response to receiving the event, media player 710 interrupts decoding of the audio stream to enter a power optimization mode. In action 7.8, media player 710 continues to send the video stream to video decoder 730 for decoding, and in action 7.9, video decoder 730 renders the output video stream on display 760. Video-only playback continues until OS services 740 identifies a scenario where audio decoding is again needed. In action 7.10, the user un-mutes the audio playback. In response to detecting this event, in action 7.11, OS services 740 sends an event AUDIO_UNMUTED to media player 710. In response to receiving the event, media player 710 identifies the current frame being played in video output by calling the GetReferenceFrames function with the CurrentFrame parameter. The currently active video frame and the time of un-muting the audio are used to identify the corresponding audio reference frames to place the video playback in synchronization with the audio playback. In action 7.13, all of the reference frames are sent from media player 710 to audio decoder 720 for decoding. All of the reference frames are decoded in order to identify the reference frame corresponding to the current audio frame. Now that the audio and video streams are synchronized, normal playback resumes with the video playback application focused and non-muted audio. In action 7.14, media player 710 provides the audio stream to audio decoder 720, which decodes and renders the audio stream on speakers 750 in action 7.15. In action 7.16, media player 710 sends the video stream to video decoder 730 for decoding, and in action 7.17, video decoder 730 decodes and renders the video stream on display 760. Fig. 8 is a sequence diagram showing interaction between components of the system of Fig. 1 during a playback scenario where the audio output is muted in accordance with another embodiment of the invention. In action 8.1, an input media stream is provided to media player 810 via a command PlayVideoClip(NoOfFrames). In response to receiving the video clip, media player 810 calls audio decoder 820, providing a bit stream in action 8.2. In action 8.3, audio decoder 820 decodes the bit stream and renders the audio stream output on speakers 850. In action 8.4, media player 810 calls video decoder 830, providing the video stream. In action 8.5, video decoder 830 decodes and renders the video output stream on display 860. During all of this activity when the policy is active, OS services 840 monitors for a scenario in which power consumption can be optimized. Up until this point, the normal playback scenario has been followed, as no opportunities to optimize power consumption have occurred. In action 8.6, OS services 840 identifies a scenario where the audio playback has been muted. In action 8.7, OS services 840 sends an event AUDIO_MUTED to media player 810. In response to receiving the event, media player 810 interrupts decoding of the audio stream to enter a power optimization mode. 
In action 8.8, media player 810 continues to send the video stream to video decoder 830 for decoding, and in action 8.9, video decoder 830 renders the output video stream on display 860. Video-only playback continues until OS services 840 identifies a scenario where audio decoding is again needed. In action 8.10, the user un-mutes the audio playback. In response to detecting this event, in action 8.11, OS services 840 sends an event AUDIO_UNMUTED to media player 810. Normal playback resumes with the video playback application focused and non-muted audio. In action 8.12, media player 810 provides the audio stream to audio decoder 820, which decodes and renders the audio stream on speakers 850 in action 8.13. In action 8.14, media player 810 sends the video stream to video decoder 830 for decoding, and in action 8.15, video decoder 830 decodes and renders the video stream on display 860. The techniques described herein enable power savings to be achieved by recognizing special playback scenarios in which audio or video decoding can be avoided. The resultant power savings extend battery life for mobile devices without compromising the user's enjoyment of multimedia presentations. Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs executing on programmable systems comprising at least one processor, a data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Program code may be applied to input data to perform the functions described herein and generate output information. Embodiments of the invention also include machine-accessible media containing instructions for performing the operations of the invention or containing design data, such as HDL, which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products. Such machine-accessible storage media may include, without limitation, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash programmable memories (FLASH), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions. The output information may be applied to one or more output devices, in known fashion. For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor. The programs may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The programs may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. 
In any case, the language may be a compiled or interpreted language. Presented herein are embodiments of methods and systems for optimizing power consumption during special media playback scenarios. While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that numerous changes, variations and modifications can be made without departing from the scope of the appended claims. Accordingly, one of skill in the art will recognize that changes and modifications can be made without departing from the present invention in its broader aspects. The appended claims are to encompass within their scope all such changes, variations, and modifications that fall within the true scope and spirit of the present invention.
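As a closing illustration, the resynchronization described in actions 6.10 through 6.13 (and analogously in actions 7.10 through 7.13) might look like the following sketch. The helper names (current_audio_frame, frame_for_timestamp, get_reference_frames) are hypothetical stand-ins for the GetReferenceFrames(CurrentFrame) call described above, and the objects involved are assumed for the example.

```python
# Hedged sketch of resuming an interrupted video stream in sync with
# the still-playing audio stream (actions 6.10-6.13 above).
def resume_video(media_player, video_dec, display):
    current_audio = media_player.current_audio_frame()
    # Find the video frame whose timestamp matches the audio position.
    target = media_player.frame_for_timestamp(current_audio.timestamp)
    # Decode every reference frame the target depends on (e.g. at most
    # 16 under H.264), displaying nothing until the target is reached.
    for ref in media_player.get_reference_frames(target):
        video_dec.decode(ref)                 # decoded, not displayed
    display.show(video_dec.decode(target))    # playback resumes here
```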
Disclosed is a target port that implements a transport layer retry (TLR) mechanism. The target port includes a circuit having a transmit transport layer and receive transport layer in which both the transmit and receive transport layers are coupled to a link. A transmit protocol processor of the transmit transport layer controls a TLR mechanism in a serialized protocol. A receive protocol processor of the receive transport layer is coupled to the transmit transport layer and likewise controls the TLR mechanism in the serialized protocol.
1. A device comprising: a circuit comprising a transmit transport layer and a receive transport layer, the transmit and receive transport layers being coupled to a link; a transmit protocol processor of the transmit transport layer, the transmit protocol processor controlling a transport layer retry (TLR) mechanism of a target port through a serialized protocol; and a receive protocol processor of the receive transport layer, coupled to the transmit transport layer, to control the TLR mechanism of the target port through the serialized protocol. 2. The device of claim 1, wherein the serialized protocol is compatible with a Serial Attached SCSI (Small Computer System Interface) (SAS) protocol standard. 3. The device of claim 1, further comprising a task relationship identifying a target port, an initiator port, a logical unit, and a task. 4. The device of claim 3, further comprising an input/output (I/O) context buffer of the transmit transport layer storing an I/O context comprising data related to the task relationship. 5. The device of claim 4, wherein a different target port transport tag used by the TLR mechanism is a shadow tag comprising a number that is not associated with a pending command. 6. The device of claim 5, wherein the shadow tag is updated in the input/output (I/O) context buffer of the task relationship. 7. The device of claim 3, further comprising an input/output (I/O) context buffer of the receive transport layer storing an I/O context of the task relationship. 8. The device of claim 7, wherein the I/O context buffer stores dynamic and snapshot fields associated with the task relationship. 9. The device of claim 8, wherein the receive protocol processor determines, based on the dynamic and snapshot fields of the I/O context associated with the task relationship, which write data frames received from the initiator port are saved. 10. The device of claim 1, wherein the circuit is an integrated circuit. 11. A method comprising: controlling, by a transmit protocol processor coupled to a link, a transport layer retry (TLR) mechanism of a target port through a serialized protocol; controlling, by a receive protocol processor coupled to the link, the TLR mechanism of the target port through the serialized protocol; and defining a task relationship that identifies an initiator port, a target port, a logical unit, and a task. 12. The method of claim 11, wherein the serialized protocol is compatible with a Serial Attached SCSI (Small Computer System Interface) (SAS) protocol standard. 13. The method of claim 11, further comprising storing an input/output (I/O) context of the task relationship. 14. The method of claim 13, wherein a different target port transport tag is a shadow tag comprising a number that is not associated with any pending command, and the I/O context is updated with the shadow tag. 15. The method of claim 14, further comprising storing an I/O context of the task relationship having dynamic and snapshot fields. 16. The method of claim 15, further comprising determining which write data frames received from the initiator port are saved based on the dynamic and snapshot fields of the I/O context associated with the task relationship. 17. A controller comprising: a target port circuit including: a transmit transport layer comprising a transmit protocol processor coupled to a link, the transmit protocol processor controlling a transport layer retry (TLR) mechanism through a serialized protocol compatible with a Serial Attached SCSI (Small Computer System Interface) (SAS) protocol standard; and a receive transport layer comprising a receive protocol processor coupled to the transmit transport layer and the link, the receive protocol processor controlling the TLR mechanism through the serialized protocol compatible with the SAS protocol standard; wherein the target port circuit communicates with an initiator port of a second controller of a storage device compatible with the SAS protocol standard. 18. The controller of claim 17, further comprising an input/output (I/O) context buffer of the transmit transport layer storing an I/O context associated with a task relationship identifying an initiator port, a target port, a logical unit, and a task, the I/O context buffer storing dynamic and snapshot fields associated with the task relationship. 19. The controller of claim 18, wherein a different target port transport tag is a shadow tag comprising a number not associated with a pending command, and the shadow tag is updated in the input/output (I/O) context buffer. 20. The controller of claim 19, further comprising an input/output (I/O) context buffer of the receive transport layer storing an I/O context of the task relationship, the I/O context buffer storing dynamic and snapshot fields associated with the task relationship, and wherein the receive protocol processor determines, based on the dynamic and snapshot fields of the receive I/O context buffer associated with the task relationship, which write data frames received from the initiator port are saved.
Automated Serial Protocol Target Port Transport Layer Retry Mechanism Technical Field Embodiments of the invention relate to the field of retry mechanisms in serialized protocols. More specifically, embodiments of the present invention relate to an automated Serial (Small Computer System Interface (SCSI)) Protocol (SSP) target port transport layer retry (TLR) mechanism. Background Serial Attached SCSI (SAS) is a protocol evolution of the parallel SCSI protocol. SAS provides point-to-point serial peripheral interfaces through which device controllers can be directly linked to each other. SAS combines two established technologies, SCSI and Serial Advanced Technology Attachment (SATA), to join the practicality and reliability of the SCSI protocol with the performance advantages of SATA's serial architecture. SAS is a performance improvement over traditional SCSI because SAS enables multiple devices of different sizes and types to be connected simultaneously in full duplex mode. In addition, SAS devices can be hot swapped. Computer devices, storage devices, and various electronic devices are designed to conform to faster protocols, such as the SAS protocol, that operate in a serial manner to provide the speed and performance required for today's applications. The SAS specification (e.g., Serial Attached SCSI-1.1 (SAS-1.1), American National Standard for Information Technology (ANSI), T10 Committee, Rev. 09d, Status: T10 Approved, Project: 1601-D, May 30, 2005, hereinafter referred to as the SAS standard) defines the transport layer retry (TLR) requirements of the SSP target port. According to the SAS standard, if the "TRANSPORT LAYER RETRIES" bit in the Protocol-Specific Logical Unit mode page is set to 1, the SSP target port shall handle link layer errors that occur when transmitting a Transfer Ready (XFER_RDY) frame. This SAS standard protocol is described below. The SSP target port first sets the "RETRY DATA FRAME" bit in each XFER_RDY frame to one. If the SSP target port sends an XFER_RDY frame and does not receive an acknowledgment (i.e., an ACK/NAK timeout occurs), or receives a negative acknowledgment (NAK), the SSP target port shall retransmit an XFER_RDY frame with a different value in the target port transport tag field and with the "RETRANSMIT" bit set to 1. For ACK/NAK timeout situations, the SSP target port is required to close the connection and open a new connection to retransmit the XFER_RDY frame. The SSP target port retransmits each XFER_RDY frame that has not received an ACK at least once. If the SSP target port sends a read data frame for a logical unit whose "TRANSPORT LAYER RETRIES" bit is set to 1 in the Logical Unit mode page, the SSP target port shall handle link layer errors that occur for the read data frame as described below. If the SSP target port sends a read data frame but does not receive an ACK/NAK (i.e., an ACK/NAK timeout occurs), or receives a NAK for that frame, the SSP target port retransmits all read data frames from the last ACK/NAK balance point. For ACK/NAK timeout situations, the SSP target port is required to close the connection and open a new connection to retransmit the read data frames. In this case, the "CHANGE DATA POINTER" bit is set to 1 in the first retransmitted read data frame, and is set to zero in the following read data frames. 
The SSP target port shall retransmit each read data frame that has not received an ACK at least once. The number of times the SSP target port retransmits each read data frame is typically vendor specific. These fairly well-defined rules set forth in the SAS standard for SSP target port transport layer retry handling are currently implemented in firmware. A firmware implementation generates a large amount of firmware overhead and consumes a large amount of processor cycle time due to the large number of synchronous handshakes required between firmware and hardware. Drawings Fig. 1 is a block diagram illustrating an example of a system in which an SSP target port can be employed. Fig. 2 is a block diagram illustrating a scatter-gather list of input/output (I/O) commands. Fig. 3 is a block diagram illustrating the I/O context of an ITLQ relationship (nexus). Fig. 4 is a block diagram showing an example of an SSP target port. Fig. 5 is a block diagram showing an example of an SSP target port. Fig. 6 is a diagram illustrating the SSP target port of the SAS controller and the implemented functionality of the Transport Layer Retry (TLR) process for handling I/O write commands. Fig. 7 is a diagram illustrating how the SSP target port handles retrying write data frames as part of the TLR mechanism. Fig. 8 is a block diagram illustrating how the SSP target port handles read data frames as part of the TLR mechanism for an I/O read command. Detailed Description Various embodiments of the invention are described in detail in the following description. These details are included to assist the understanding of the invention and to describe exemplary embodiments of the invention. Such details should not be used to limit the invention to the specific embodiments described, as other variations and embodiments are possible while still remaining within the scope of the invention. In other instances, details such as well-known methods, types of data, protocols, procedures, components, electrical structures, and circuits are not described in detail in order to avoid obscuring the understanding of the invention. Embodiments of the present invention are directed to an automated serial (Small Computer System Interface (SCSI)) Protocol (SSP) target port transport layer retry (TLR) mechanism. In particular, embodiments relate to hardware-automated SSP target ports that employ a transport layer retry (TLR) mechanism in both wide and narrow port configurations, thereby improving frame processing latency, reducing protocol overhead, and improving overall system input/output (I/O) performance. For example, the SSP target port can be implemented as a circuit, such as an integrated circuit. Referring to Fig. 1, Fig. 1 is a block diagram illustrating a system including a first device 102 coupled to another device 110, each device having a SAS controller (104 and 113, respectively) that includes an SSP target port. Device 110 is communicatively coupled to device 102 over a link in accordance with SAS protocol standards. Each device includes a SAS controller (104 and 113) for providing communication between the two devices 102 and 110 over respective links. Device 102 can include a processor 107 that controls operations in device 102 and a SAS controller 104 that controls serial communication with device 110 in accordance with the SAS standard. 
Moreover, device 102 can include a memory 109 coupled to processor 107 and a plurality of different input/output (I/O) devices (not shown). Similarly, device 110 may also include a processor 117 that controls operations in device 110 and SAS controller 113 to control serial communication with device 102 in accordance with the SAS protocol. Moreover, device 110 can include a memory 119 coupled to processor 117 and a plurality of different input/output (I/O) devices (not shown). Each device may include a SAS controller, 104 and 113, respectively. Further, the SAS controller 113 can include an SSP target port 114 and an SSP initiator port 116, while the SAS controller 104 can include an SSP target port 106 and an SSP initiator port 103. According to this example, device 102 can communicate tasks over a link, through the SSP initiator port 103, to the SSP target port 114 of SAS controller 113 of device 110. It should be understood that device 110 and device 102 may be any type of device, such as a personal computer, laptop computer, network computer, server, router, expander, set top box, mainframe computer, storage device, hard drive, flash memory, floppy disk drive, compact disc read only memory (CD-ROM), digital video disc (DVD), handheld personal device, cellular telephone, etc., or any type of device having a processor and/or memory. Embodiments of the present invention relate to a device 102 having a SAS controller 104 with the structure and functionality for implementing a Transport Layer Retry (TLR) mechanism by the SSP target port 114, wherein the SAS controller 104 includes an SSP initiator port 103 that passes tasks over a link to another device 110, as described in detail below. To aid in this description, a task relationship 120 may be defined as the relationship between SSP target port 114, SSP initiator port 103, a logical unit (including the devices, links and nodes through which tasks are sent), and the task itself (referred to as an ITLQ relationship). Referring briefly to Fig. 2, Fig. 2 illustrates a scatter-gather list (SGL) buffering mechanism 150 that employs an address/length (A/L) pair 152 to point to a primary or local storage buffer 160 that stores received or transmitted frames, and to indicate the size of the storage buffer 160. In addition, the SGL buffer mechanism 150 also includes a buffer number field 153 and an SGL pointer 155. The main memory may be a memory associated with the device itself, such as memory 119, or may be a memory of SAS controller 113 itself. The use of scatter-gather list (SGL) memory accesses is well known and will not be described in detail; it is merely one method of memory access that can be used in conjunction with embodiments of the present invention. In one embodiment, multiple I/O contexts are defined, one for each task or ITLQ relationship. Referring to Fig. 3, Fig. 3 is a table illustrating the I/O context of an ITLQ relationship. The I/O context is based on the initial I/O read/write information that is passed to the transport layer. The I/O context has dynamic fields that are saved by the transport layer. The direct memory access (DMA) processor of the SSP target port can track the current I/O process, and multiple I/O contexts can be stored in the SSP target port, as will be described below. 
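One possible software rendering of that I/O context is sketched below in Python. The field and method names are illustrative, chosen to mirror the fields of table 300 described next; the snapshot/rewind behavior reflects the retry usage discussed in the following paragraphs, and is an assumption about how the snapshot fields would be applied, not hardware documentation.

```python
from dataclasses import dataclass

# Hedged rendering of the ITLQ-relationship I/O context of Fig. 3
# (table 300); field names follow the text that follows.
@dataclass
class IOContext:
    retransmission: bool          # field 320: frame(s) in a retry state
    target_port_tag: int          # field 330: TAG of this ITLQ relationship
    shadow_tag: int               # field 340: unused tag for XFER_RDY retry
    # Dynamic fields (360), updated as frames are ACKed or DMAed:
    sgl_ptr: int                  # current scatter-gather list pointer
    al_pair: tuple                # current (address, length) pair
    io_xc: int                    # current I/O data transfer count
    io_ro: int                    # current I/O data relative offset
    # Snapshot fields (370), previously saved copies used on retry:
    snap_sgl_ptr: int = 0
    snap_al_pair: tuple = (0, 0)
    snap_io_xc: int = 0
    snap_io_ro: int = 0

    def take_snapshot(self):
        # Save the dynamic fields, e.g. at an ACK/NAK balance point.
        self.snap_sgl_ptr, self.snap_al_pair = self.sgl_ptr, self.al_pair
        self.snap_io_xc, self.snap_io_ro = self.io_xc, self.io_ro

    def rewind(self):
        # Restore the dynamic fields when a transport layer retry begins.
        self.sgl_ptr, self.al_pair = self.snap_sgl_ptr, self.snap_al_pair
        self.io_xc, self.io_ro = self.snap_io_xc, self.snap_io_ro
```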
Specifically, table 300 of Fig. 3 illustrates these I/O context fields. For example, the I/O context of the ITLQ relationship may include a retransmission bit 320, a target port transport tag (TAG) 330, and a shadow target port transport TAG 340. As will be discussed below, the shadow target port transport tag is used to provide a different, unused target port transport tag when an XFER_RDY frame is retried. The I/O context of the ITLQ relationship may also include dynamic fields 360, such as: a current scatter-gather list pointer (SGL_PTR), which may be a pointer to a local or primary store buffer; a current address/length pair (A/L); a current I/O read/write data transfer count (I/O_XC); and a current I/O read/write data relative offset (I/O_RO). Further, like the dynamic fields 360, the I/O context of the ITLQ relationship may also include snapshot fields 370, such as: a snapshot SGL_PTR; a snapshot A/L; a snapshot I/O_XC; and a snapshot I/O_RO. Snapshot fields are similar to dynamic fields, but they are previously saved fields used for the SSP target port transport layer retry mechanism, as described below. In addition, other I/O context fields 310 may also be employed. As will be appreciated, the transmit transport layer of the SSP target port updates the dynamic fields 360 when transmitting a read data frame from the transmit buffer to the link and receiving an acknowledgment (ACK). In addition, the receive transport layer updates the dynamic fields 360 when the DMA processor sends a write data frame from the receive buffer to the primary or local memory. Reference is now made to Fig. 4, which is a block diagram illustrating an example of an SSP target port 114. In one embodiment, the SSP target port 114 includes an SSP target write sequence processor 405 and an SSP target read sequence processor 410. The SSP target write sequence processor 405 handles the transport layer retry condition of the I/O write command. The SSP target read sequence processor 410 processes the transport layer retry of the I/O read command. In one embodiment, the SSP target write sequence processor 405 and the SSP target read sequence processor 410 can be implemented in hardware, as described with reference to Figures 5-8. More specifically, the SSP target write sequence processor 405 can be implemented by the receive transport layer of the SSP target port 114, and the SSP target read sequence processor 410 can be implemented by the transmit transport layer of the SSP target port 114, as will be described in more detail below. It should be noted that it is assumed that the SSP target port 114 assigns a unique TAG to each ITLQ relationship. The TAG field is used by the SSP target port 114 to associate an I/O context with a particular ITLQ relationship. If the TAG is not unique across different remote nodes, the SSP target port 114 concatenates the remote node index with the TAG to form a unique I/O context ID with which to associate the I/O context of the particular ITLQ relationship. Note that each remote node is assigned a unique remote node index by the device. Reference is now made to Fig. 5, which is a block diagram illustrating an SSP target port 114 in accordance with one embodiment of the present invention. The SSP target port 114 includes a receive transport layer 504 and a transmit transport layer 508. In one embodiment, the target port may be hardware based. Target port 114 may be a circuit. 
Reference is now made to Figure 4, which is a block diagram illustrating an example of an SSP target port 114. In one embodiment, the SSP target port 114 includes an SSP target write sequence processor 405 and an SSP target read sequence processor 410. The SSP target write sequence processor 405 handles the transport layer retry condition of an I/O write command. The SSP target read sequence processor 410 handles the transport layer retry of an I/O read command. In one embodiment, the SSP target write sequence processor 405 and the SSP target read sequence processor 410 can be implemented in hardware, as described with reference to Figures 5-8. More specifically, the SSP target write sequence processor 405 can be implemented by the receive transport layer of the SSP target port 114, and the SSP target read sequence processor 410 can be implemented by the transmit transport layer of the SSP target port 114, as will be described in more detail below.

It should be noted that it is assumed that the SSP target port 114 assigns a unique TAG to each ITLQ relationship. The TAG field is used by the SSP target port 114 to associate an I/O context with a particular ITLQ relationship. If the TAG is not unique across different remote nodes, the SSP target port 114 concatenates the remote node index with the TAG to form a unique I/O context ID with which to associate the I/O context of the particular ITLQ relationship. Note that each remote node is assigned a unique remote node index by the device.

Reference is now made to Figure 5, which is a block diagram illustrating an SSP target port 114 in accordance with one embodiment of the present invention. The SSP target port 114 includes a receive transport layer 504 and a transmit transport layer 508. In one embodiment, the target port may be hardware based. Target port 114 may be a circuit. For example, the circuitry may be an integrated circuit, a processor, a microprocessor, a signal processor, an application specific integrated circuit (ASIC), or any type of suitable logic or circuitry that implements the functionality described herein.

In particular, target port 114 includes the transmit transport layer 508 and the receive transport layer 504, both coupled to link 502. The SAS transmit protocol processor 512 of the transmit transport layer 508 controls the TLR mechanism for the serial protocol. The SAS receive protocol processor 532 of the receive transport layer 504 is coupled to the transmit transport layer and similarly controls the TLR mechanism for the serial protocol. As mentioned earlier, the serial protocol is compatible with the Serial Attached SCSI (Small Computer System Interface) (SAS) protocol standard.

As will be discussed below, if the SAS transmit protocol processor 512 of the transmit transport layer 508 transmits an XFER_RDY frame in which the "retry data frame" bit is set to 1 and does not receive an acknowledgment, or receives a NAK, for that XFER_RDY frame, then the transmit protocol processor 512 retransmits the XFER_RDY frame with a different target port transport tag and with the "retransmission" bit set to 1. In one embodiment, the different transport tag may be a shadow tag comprising a number that is not associated with any outstanding ITLQ relationship.

The receive and transmit transport layers 504, 508 are coupled to the link and physical layer 502. In addition, both the receive transport layer (RxTL) 504 and the transmit transport layer (TxTL) 508 employ a direct memory access (DMA) processor 520 and a common I/O context store 530. Looking more particularly at the receive transport layer 504, the receive transport layer 504 includes a receive frame parser 536 for parsing frames received from the link and physical layer 502, a receive buffer 534 for storing received frame data, the SAS receive protocol processor 532, and the common I/O context store 530 that stores the ITLQ relationship I/O contexts (as described in connection with Figure 3). The receive transport layer 504 implements the functionality of the SSP target write sequence processor 405 as previously described. Looking at the transmit transport layer 508, the transmit transport layer 508 includes the common I/O context store 530 that stores the ITLQ relationship I/O contexts (as described with reference to Figure 3), the SAS transmit protocol processor 512, and a transmit buffer 514 for storing transmit data. The transmit transport layer 508 implements the functionality of the SSP target read sequence processor 410 as previously described.

The SAS transmit protocol processor 512 and the SAS receive protocol processor 532 are used to implement the SAS standard protocol, as well as the aspects of the Transport Layer Retry (TLR) mechanism to be described. The SAS transmit and receive protocol processors may be any type of appropriate processor or logic that implements these TLR functions.
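The XFER_RDY handling just outlined can be condensed into a short sketch. This builds on the itlq_io_context structure sketched earlier; send_xfer_rdy(), close_connection(), and the status codes are hypothetical stand-ins for transmit transport layer and link layer primitives, not part of any published SAS controller API.

```c
/* Hypothetical link/transport primitives assumed by this sketch. */
void close_connection(void);
void send_xfer_rdy(uint16_t tag, bool retransmission);

enum ack_status { ACK_OK, ACK_NAK, ACK_TIMEOUT };

/* Called when the fate of a transmitted XFER_RDY frame is known. */
void on_xfer_rdy_result(struct itlq_io_context *ctx, enum ack_status st)
{
    if (st == ACK_OK)
        return;                 /* XFER_RDY delivered; nothing to do    */

    ctx->retransmit = true;     /* remember the retry state in the I/O
                                   context of this ITLQ relationship    */

    if (st == ACK_TIMEOUT)
        close_connection();     /* a new connection must be opened
                                   before the retry sequence starts     */
}

/* Called once a new connection has been established. */
void on_connection_opened(struct itlq_io_context *ctx)
{
    if (ctx->retransmit)        /* resend XFER_RDY with a different tag
                                   and the "retransmission" bit set     */
        send_xfer_rdy(ctx->shadow_tpt_tag, true);
}
```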
In addition, the various components previously described for the SSP target port 114, and their respective functionality in implementing the transport layer retry mechanism, are now discussed in detail with reference to Figures 6-8. Figures 6-8 illustrate the operation of the previously described SSP target port 114 when implementing a Transport Layer Retry (TLR) mechanism in the SAS controller of the device.

Referring specifically to Figure 6, Figure 6 is a diagram illustrating the functionality executed by the SSP target port 114 of the SAS controller and the Transport Layer Retry (TLR) process for handling I/O write commands. It should be understood that the SSP target port 114 may be a narrow SSP target port in which only one phy (eg, 114-0) is associated with the SSP target port, or the SSP target port 114 may be a wide port in which multiple phys 114-0 through 114-N are associated with the SSP target port, as shown in Figure 6.

As shown at point 610 of Figure 6, the SSP target port 114-0 transmits an XFER_RDY frame (with TAG = A) in which the "retry data frame" bit is set to 1, and in this example either no ACK/NAK is received (ie, an ACK/NAK timeout occurs) or a NAK for that frame is received. In response, the SSP target port 114-0 retransmits the XFER_RDY frame with a different value in the target port transport tag field and with the "retransmission" bit set to 1. For the ACK/NAK timeout case, the SSP target port 114 is required to open a new connection before starting the retry sequence.

Using the ACK/NAK timeout condition as an example, the transmit transport layer 508-0 requests the link layer to close the connection. However, before the connection is closed, the transmit transport layer 508-0 of the SSP target port 114, under the control of the SAS transmit protocol processor 512, sets the "retransmission" bit to 1 in the I/O context of that particular ITLQ relationship to remember that the XFER_RDY frame transmission is in a retry state. Additionally, at point 630, the SAS transmit protocol processor 512 causes the link layer to close the connection. Therefore, when a new connection is established, the transmit transport layer can check the "retransmission" bit in the I/O context of that particular ITLQ relationship, determine that it is equal to 1, and then retransmit the XFER_RDY frame with a different value in the target port transport tag field and with the "retransmission" bit set to 1. The number of times the transmit transport layer retries the XFER_RDY frame may be a programmable value stored in a configuration space, mode page, or chip initialization parameters.

In one embodiment, the different target port transport tag needed for the XFER_RDY retry may be generated by the transport layer requesting from firmware a target port transport tag that is not associated with any pending ITLQ relationship. In a specific embodiment, the different transport tag can be created by a shadow target port transport tag mechanism. The SAS standard defines a 16-bit target port transport tag field that supports a total of 64K (65,536) pending I/O commands. An SSP target port typically supports fewer than 64K, such as 16K, pending I/O commands.
In this example, the two most significant bits are not used in the target port transport tag. In this shadow tag embodiment, the SSP target port 114 changes only the two most significant bits to produce a different target port transport tag for each XFER_RDY frame retry. Therefore, the maximum number of retries is 2^2 = 4 (two bits being unused in the target port transport tag). This mechanism is called the shadow target port transport tag mechanism. The shadow target port transport tag can be stored and updated in the I/O context of that particular ITLQ relationship as shown in Figure 3 (eg, at point 620).

In the shadow target port transport tag embodiment, when the transmit transport layer 508 retransmits an XFER_RDY frame having a different value in the target port transport tag field and the "retransmission" bit set to 1, the SAS transmit protocol processor 512 selects a new shadow target port transport tag and updates (eg, at point 620) the shadow target port transport tag field (eg, Figure 3) in the I/O context of that particular ITLQ relationship. Continuing with the current example of the ACK/NAK timeout condition, at point 640 of Figure 6, link layer 502-N opens a new connection. Under the control of the SAS transmit protocol processor 512, the transmit transport layer 508-N checks, during the processing of the I/O write command, whether the "retransmission" bit field in the I/O context of that particular ITLQ relationship is set to 1. If so, the transmit transport layer 508-N sets the "retransmission" bit of the XFER_RDY frame to 1 from the I/O context of that particular ITLQ relationship and sets the target port transport tag field to the new shadow target transport tag value. Thus, during the processing of the write command, it is seen in the transmit buffer 514 that, for the XFER_RDY frame, the TAG is set to a shadow value (eg, shadow A) and the "retransmission" bit is set to 1, and the XFER_RDY frame is transmitted.
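The shadow-tag arithmetic above is simple bit masking. A minimal sketch, assuming a port that supports at most 16K (2^14) pending commands so that bits 15:14 of the 16-bit tag are free; the function name is hypothetical:

```c
#include <stdint.h>

/* Derive a shadow target port transport tag by recycling the two
 * unused most significant bits of the tag: bits 15:14 cycle through
 * 2^2 = 4 values, bounding the number of distinct XFER_RDY retries. */
uint16_t shadow_tag(uint16_t base_tag, unsigned retry_count)
{
    return (uint16_t)((base_tag & 0x3FFFu) | ((retry_count & 0x3u) << 14));
}
```

With retry_count = 0 this returns the original tag; retry_count values 1 through 3 yield tags that cannot collide with any outstanding 14-bit tag.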
Referring to Figure 7, when the SSP initiator port retransmits data of an ITLQ relationship due to an ACK/NAK timeout or a received NAK, the SSP initiator port is required to resend all write data frames from the previous XFER_RDY frame. In one embodiment, the SSP target port 114 handles this situation by updating the dynamic fields in the I/O context of that particular ITLQ relationship through the last good write data frame received for that pending initiator port write command. When any one of the receive transport layers 504-0 through 504-N of the target port 114 receives a write data frame in which the "change data pointer" bit is set to 1 (eg, the first retransmitted write data frame of the ITLQ relationship), the receive transport layer 504 needs to check whether the write data frame is a valid retransmitted write data frame. This can be done by checking whether the data offset field of the write data frame is less than or equal to the dynamic I/O read/write data relative offset field of the I/O context buffer. If the write data frame is valid, the SSP target port 114 can initiate the transport layer retry (TLR) process.

For example, if the data offset field of the write data frame is less than the dynamic I/O read/write relative offset field of the I/O context, the SSP target port 114 jumps to a discard mode (for that particular ITLQ relationship) and discards all write data bytes received for that ITLQ relationship until the saved dynamic I/O read/write offset is reached; it then switches to the normal receive mode to save all further data bytes of that particular ITLQ relationship. On the other hand, if the data offset field of the write data frame is equal to the dynamic I/O read/write relative offset field of the I/O context, it simply enters the normal receive mode in order to save all data bytes of that ITLQ relationship.

In one embodiment, to handle write data frame retry reception, the transmit transport layer 508 takes a snapshot of the dynamic fields in the common I/O context buffer 530 when transmitting the XFER_RDY frame (see, for example, Figure 3). Since the initiator port is required to resend all write data frames from the current XFER_RDY frame during the transport layer retry, when the target port receives a write data frame in which the "change data pointer" bit is set to 1, that data frame must be the first write data frame of the previous XFER_RDY frame. It should be noted that the receive transport layer 504 checks whether the write data frame offset field is equal to the snapshot I/O read/write relative offset field in the common I/O context buffer 530 to ensure that the first retried write data frame is indeed the first write data frame of the previous XFER_RDY frame. Thus, the SSP target port 114 can keep all previously received good write data bytes of that ITLQ relationship, without discarding anything up to the saved relative offset of the last good write data frame. Specifically, the DMA processor can move the already-received write data bytes to the primary or local storage buffer without loss.

An example of reconstructing the write data frames of an ITLQ relationship is now described. At point 710, the SSP target port 114 receives frame A2 and responds with an ACK, but in this example the ACK is lost in transmission. Therefore, an ACK/NAK timeout occurs on the SSP initiator port. The SSP initiator port opens a new connection and retransmits all write data frames of the previous XFER_RDY frame, starting with frame A1. It should be noted that whenever the DMA processor 520 moves a write data frame out of the receive buffer, the receive transport layer 504-0 updates the dynamic fields in accordance with the size of that write data frame. In this example, the last good received data frame is A2. As shown at point 720, the "change data pointer" bit of the retransmitted write data frame is set to 1 and its data offset field is smaller than the I/O read/write data relative offset in the common I/O context buffer 530 of the receive transport layer, so the receive transport layer 504-0 enters the discard mode and discards all write data up to the relative offset of the last good received write data frame. At point 730, the receive transport layer 504-0 returns to the normal mode to save all new good write data bytes in the receive buffer 534.
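The validity check and mode selection for a retried write data frame, as described above, can be sketched as follows. This builds on the itlq_io_context structure from the earlier sketch; the mode names and return codes are hypothetical.

```c
enum rx_mode { RX_NORMAL, RX_DISCARD };

/* Handle a write data frame whose "change data pointer" bit is set:
 * compare its data offset field with the saved dynamic relative
 * offset (dyn.io_ro).  A larger offset is not a valid retransmission;
 * an equal offset resumes normal reception immediately; a smaller
 * offset enters discard mode, dropping already-received bytes until
 * the saved offset is reached.  Returns 0 if the TLR process may
 * proceed, -1 if the frame is rejected. */
int on_retry_write_frame(struct itlq_io_context *ctx,
                         uint32_t frame_data_offset,
                         enum rx_mode *mode)
{
    if (frame_data_offset > ctx->dyn.io_ro)
        return -1;                              /* invalid retry frame */

    *mode = (frame_data_offset < ctx->dyn.io_ro) ? RX_DISCARD : RX_NORMAL;
    return 0;
}
```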
Turning now to Figure 8, Figure 8 is a block diagram illustrating how the SSP target port 114 processes read data frames as part of the Transport Layer Retry (TLR) mechanism for an I/O read command. If the SSP target port 114 sends a read data frame but does not receive an ACK/NAK (ie, an ACK/NAK timeout occurs), or receives a NAK for that read data frame, the SSP target port begins to retransmit all read data frames from the point of the previous ACK/NAK balance. For the ACK/NAK timeout case, the SSP target port 114 opens a new connection before starting the retry sequence.

As shown in Figure 8, when the transmit transport layer 508-0 sends a read data frame and an ACK/NAK timeout occurs or a NAK is received, then at point 820, by copying the snapshot fields back into the dynamic fields (see, for example, Figure 3), the transmit transport layer 508-0, under the control of the SAS transmit protocol processor 512, causes the dynamic fields in the I/O context of that ITLQ relationship to fall back to the previous ACK/NAK balance point of point 810. This enables the transmit transport layer 508-0 to retransmit all read data frames from the previous ACK/NAK balance point; it begins retransmitting all read data frames starting with frame A4. To accomplish this, the link layer 502 needs to provide an ACK/NAK balance indication to the transport layer. Thus, whenever an ACK/NAK balance occurs, the transmit transport layer 508 takes a snapshot of the dynamic fields by copying the contents of the dynamic fields into the snapshot fields in the common I/O context buffer 530.

For the ACK/NAK timeout condition, at point 840, the transmit transport layer 508-0 requests the link layer to close the connection. In order to handle the case where the connection is closed due to the ACK/NAK timeout before the transmit transport layer begins the retry sequence, the transmit transport layer 508-0 needs to set the "retransmission" bit in the I/O context of that particular ITLQ relationship to 1 in order to remember that the I/O read data sequence is in a retry state. Thus, when a new connection is established, at point 850, the transmit transport layer 508-N can check the "retransmission" bit in the common I/O context buffer 530, determine that it is set to 1, and begin servicing the I/O read data retry sequence by setting the "change data pointer" bit of the first retransmitted read data frame to 1. This can be done under the control of the SAS transmit protocol processor 512, and the read data can be sent to the link via the transmit buffer 514. The number of times the transmit transport layer 508-N retries the read data frames may be programmable via a particular configuration space, mode page, or chip initialization parameter.

In accordance with an embodiment of the present invention, a fully hardware-automated mechanism for handling SSP target port transport layer retry is disclosed, requiring essentially no assistance from firmware. In this way, the firmware overhead is significantly reduced, and there is a significant reduction in CPU computation cycles and in the synchronization overhead between firmware and hardware. This translates into improved overall system performance and improved SAS protocol control performance, especially in multi-protocol applications. In addition, the firmware design that is still needed, especially in mass storage system environments, is greatly simplified, and the real-time processing requirements placed on the firmware are significantly reduced.

Further, although the embodiments of the present invention have been described with reference to illustrative embodiments, the description should not be construed in a limiting sense. Various modifications of the illustrative embodiments of the invention, as well as other embodiments of the invention, which are apparent to those skilled in the art, are deemed to lie within the spirit and scope of the invention.
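Recapping the read-side retry described above: the bookkeeping reduces to two structure copies over the dynamic and snapshot fields of the I/O context. A minimal sketch under the same assumptions as the earlier ones:

```c
/* On each ACK/NAK balance indication from the link layer, capture the
 * dynamic fields into the snapshot fields of the I/O context. */
void on_ack_nak_balance(struct itlq_io_context *ctx)
{
    ctx->snap = ctx->dyn;
}

/* On an ACK/NAK timeout or a NAK of a read data frame, roll the
 * dynamic fields back so transmission resumes from the previous
 * ACK/NAK balance point, and remember the retry state in case the
 * connection is closed before the retry sequence begins. */
void on_read_frame_failure(struct itlq_io_context *ctx)
{
    ctx->dyn = ctx->snap;
    ctx->retransmit = true;
}
```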
Integrated circuit assemblies may contain various mold, fill, and/or underfill materials. As these integrated circuit assemblies become ever smaller, it becomes challenging to prevent voids from forming within these materials, which may affect the reliability of the integrated circuit assemblies. Since integrated circuit assemblies are generally formed by electrically attaching integrated circuit dice on electronic substrates, the present description proposes injecting the mold, fill, and/or underfill materials through openings formed in the electronic substrate to fill voids that may form and/or to prevent the formation of the voids altogether.
1. An integrated circuit assembly, comprising: an electronic substrate having a first surface and an opposing second surface, wherein the electronic substrate includes at least one inlet opening extending from the first surface to the second surface; at least one integrated circuit die attached to the electronic substrate; at least one void defined by the electronic substrate and the integrated circuit die; and a fill material within the at least one void, wherein a portion of the fill material extends into the inlet opening in the electronic substrate.
2. The integrated circuit assembly of claim 1, further comprising a mold material on the electronic substrate and the at least one integrated circuit die.
3. The integrated circuit assembly of claim 2, wherein the at least one void is further defined by the mold material.
4. An electronic system, comprising: an electronic substrate having a first surface and an opposing second surface, wherein the electronic substrate includes at least one inlet opening extending from the first surface to the second surface; at least two first level integrated circuit dice having a first surface and an opposing second surface, wherein the second surface of each of the at least two first level integrated circuit dice is attached to the electronic substrate; at least one second level integrated circuit die having a first surface and an opposing second surface, wherein the second surface of the second level integrated circuit die is attached to the first surface of at least one of the first level integrated circuit dice; at least one void defined by the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die; and a fill material within the at least one void, wherein a portion of the fill material extends into the inlet opening in the electronic substrate.
5. The electronic system of claim 4, wherein the electronic substrate further comprises at least one vent opening extending from the first surface of the electronic substrate to the second surface of the electronic substrate.
6. The electronic system of claim 5, further comprising a portion of the fill material extending into the at least one vent opening.
7. The electronic system of one of claims 4-6, further comprising a mold material on the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die.
8. The electronic system of claim 7, wherein the at least one void is further defined by the mold material.
9. The electronic system of claim 7 or 8, wherein the fill material is substantially the same as the mold material.
10. The electronic system of one of claims 4-9, wherein at least one of the at least two first level integrated circuit dice and the at least one second level integrated circuit die is electrically attached to the electronic substrate with at least one bond wire.
11. The electronic system of one of claims 4-10, wherein at least one of the at least two first level integrated circuit dice is electrically attached to the at least one second level integrated circuit die with at least one bond wire.
12. A method of fabricating an integrated circuit assembly, comprising: forming an electronic substrate having a first surface and an opposing second surface; forming an inlet opening in the electronic substrate, wherein the inlet opening extends from the first surface of the electronic substrate to the second surface of the electronic substrate; forming at least two first level integrated circuit dice having a first surface and an opposing second surface; attaching the second surface of each of the at least two first level integrated circuit dice to the first surface of the electronic substrate; forming at least one second level integrated circuit die having a first surface and an opposing second surface; attaching the second surface of the at least one second level integrated circuit die to the first surface of at least one of the first level integrated circuit dice, wherein at least one void is defined by the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die; orienting the electronic substrate with the second surface gravitationally higher than the first surface thereof; dispensing a fill material from the second surface of the electronic substrate through the inlet opening and into the void; and curing the fill material.
13. The method of claim 12, wherein a portion of the fill material extends into the inlet opening.
14. The method of claim 12 or 13, further comprising forming at least one vent opening extending from the first surface of the electronic substrate to the second surface of the electronic substrate.
15. The method of claim 14, wherein a portion of the fill material extends into the at least one vent opening.
TECHNICAL FIELD
Embodiments of the present description generally relate to the field of integrated circuit package or assembly fabrication, and, more specifically, to filling voids within the integrated circuit assembly by injecting a fill material through an electronic substrate of the integrated circuit assembly.
BACKGROUND
The integrated circuit industry is continually striving to produce ever faster, smaller, and thinner integrated circuit devices and packages for use in various electronic products, including, but not limited to, computer servers and portable products, such as portable computers, electronic tablets, cellular phones, digital cameras, and the like.
One way to achieve these goals is by increasing integration density, such as by stacking components within the integrated circuit assemblies. One stacking method may comprise a method typically used in NAND memory die stacking, wherein the backside surface of the largest integrated circuit die, such as a non-volatile memory die (for example, a 3D XPoint device), is attached to an electronic substrate (e.g. a package substrate/interposer, a printed circuit board, or the like), with the surface having bond pads thereon facing in a direction opposite the electronic substrate. Backside surfaces of smaller integrated circuit dice, such as NAND memory dice, are stacked on the largest integrated circuit die in a configuration that allows access to the bond pads. Bond wires are then used to form electrical connections between the bond pads on the various integrated circuit dice and/or between the integrated circuit dice and the electronic substrate. A mold material may then be disposed over the assembly to protect the integrated circuit dice and the bond wires. Although such an assembly configuration may prevent any voids from being formed in the mold material, the configuration may not be the most advantageous from a reliability and operational standpoint. A more advantageous configuration may be to stack the smaller integrated circuit dice on the electronic substrate and attach the largest integrated circuit die to the smaller integrated circuit dice. However, film over wire layers may be needed between the stacks to prevent bond wire shorting and to reduce the risk of voids in the mold material. Furthermore, spacers, such as silicon spacers, may also be needed to evenly distribute the smaller integrated circuit dice stacks. Both film over wire layers and spacers add cost and increase the height of the integrated circuit assemblies, which is counter to the goals of the integrated circuit industry.
Additionally, as the goals of the integrated circuit industry are achieved, packaging of the integrated circuit devices becomes more challenging. A typical integrated circuit package includes at least one integrated circuit device that is mounted on an electronic substrate, such that bond pads, or other such electrical attachment structures, on the integrated circuit device are attached directly to corresponding bond lands, or other such electrical attachment structures, on the electronic substrate with interconnection structures.
To enhance the reliability of the connection between the integrated circuit device bond pads and the electronic substrate bond lands, an underfill material may be disposed between the integrated circuit device and the electronic substrate for mechanical reinforcement.
Underfill materials are generally low viscosity materials, such as low viscosity epoxy materials, which may be dispensed from a dispensing needle along at least one edge of the integrated circuit device or die. The underfill material is drawn between the integrated circuit device and the electronic substrate by capillary action, and the underfill material is subsequently cured (hardened). However, as integrated circuit devices become smaller, there is a reduction in the size of the gap between the integrated circuit device and the electronic substrate, a reduction in the size of the gap between the integrated circuit device and neighboring components, and a reduction in the interconnection structure pitch (spacing). If the viscosity of the underfill material is too high, voids can form therein. Prevention of this voiding requires decreasing the viscosity and/or improving the wettability of the underfill material in order for it to wick between the integrated circuit device and the electronic substrate. The decreased viscosity and/or improved wettability can result in the underfill material "bleeding out" beyond the gap between the integrated circuit device and the electronic substrate, covering valuable surface area on the electronic substrate and/or interfering with other components in the integrated circuit assembly. One way to prevent such underfill bleed out is through the fabrication of containment structures, such as dams, trenches, and the like. However, these containment structures add cost to the integrated circuit assembly and still require a portion of the valuable surface area on the electronic substrate.
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter of the present disclosure is particularly pointed out and distinctly claimed in the concluding portion of the specification. The foregoing and other features of the present disclosure will become more fully apparent from the following detailed description and appended claims, taken in conjunction with the accompanying drawings. It is understood that the accompanying drawings depict only several embodiments in accordance with the present disclosure and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings and/or schematics, such that the advantages of the present disclosure can be more readily ascertained, in which:
FIG. 1 is a side cross-sectional view of an integrated circuit assembly, according to an embodiment of the present description.
FIG. 2 is a view along line 2-2 of FIG. 1 of an integrated circuit die configuration, according to an embodiment of the present description.
FIG. 3 is a view along line 2-2 of FIG. 1 of an alternate integrated circuit die configuration, according to an embodiment of the present description.
FIG. 4 is a side cross-sectional view of an integrated circuit assembly, according to another embodiment of the present description.
FIG. 5 is a flow chart of a process of fabricating an integrated circuit assembly, according to an embodiment of the present description.
FIG. 6 is a side cross-sectional view of an integrated circuit assembly, according to an embodiment of the present description.
FIG. 7 is a view along line 7-7 of FIG. 6, according to an embodiment of the present description.
FIG. 8 is an electronic system, according to one embodiment of the present description.
DETAILED DESCRIPTION
In the following detailed description, reference is made to the accompanying drawings that show, by way of illustration, specific embodiments in which the claimed subject matter may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the subject matter. It is to be understood that the various embodiments, although different, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein, in connection with one embodiment, may be implemented within other embodiments without departing from the spirit and scope of the claimed subject matter. References within this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one implementation encompassed within the present description. Therefore, the use of the phrase "one embodiment" or "in an embodiment" does not necessarily refer to the same embodiment. In addition, it is to be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the claimed subject matter. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the subject matter is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the appended claims are entitled. In the drawings, like numerals refer to the same or similar elements or functionality throughout the several views; the elements depicted therein are not necessarily to scale with one another, and individual elements may be enlarged or reduced in order to more easily comprehend them in the context of the present description.
The terms "over", "to", "between" and "on" as used herein may refer to a relative position of one layer with respect to other layers. One layer "over" or "on" another layer or bonded "to" another layer may be directly in contact with the other layer or may have one or more intervening layers. One layer "between" layers may be directly in contact with the layers or may have one or more intervening layers.
The term "package" generally refers to a self-contained carrier of one or more dice, where the dice are attached to the package substrate, and may be encapsulated for protection, with integrated or wire-bonded interconnects between the dice and leads, pins or bumps located on the external portions of the package substrate. The package may contain a single die, or multiple dice, providing a specific function. The package is usually mounted on a printed circuit board for interconnection with other packaged integrated circuits and discrete components, forming a larger circuit.
Here, the term "cored" generally refers to a substrate of an integrated circuit package built upon a board, card or wafer comprising a non-flexible stiff material. Typically, a small printed circuit board is used as a core, upon which integrated circuit devices and discrete passive components may be soldered. Typically, the core has vias extending from one side to the other, allowing circuitry on one side of the core to be coupled directly to circuitry on the opposite side of the core.
The core may also serve as a platform for building up layers of conductors and dielectric materials.
Here, the term "coreless" generally refers to a substrate of an integrated circuit package having no core. The lack of a core allows for higher-density package architectures, as the through-vias have relatively large dimensions and pitch compared to high-density interconnects.
Here, the term "land side", if used herein, generally refers to the side of the substrate of the integrated circuit package closest to the plane of attachment to a printed circuit board, motherboard, or other package. This is in contrast to the term "die side", which is the side of the substrate of the integrated circuit package to which the die or dice are attached.
Here, the term "dielectric" generally refers to any number of non-electrically conductive materials that make up the structure of a package substrate. For purposes of this disclosure, dielectric material may be incorporated into an integrated circuit package as layers of laminate film or as a resin molded over integrated circuit dice mounted on the substrate.
Here, the term "metallization" generally refers to metal layers formed over and through the dielectric material of the package substrate. The metal layers are generally patterned to form metal structures such as traces and bond pads. The metallization of a package substrate may be confined to a single layer or in multiple layers separated by layers of dielectric.
Here, the term "bond pad" generally refers to metallization structures that terminate integrated traces and vias in integrated circuit packages and dies. The term "solder pad" may be occasionally substituted for "bond pad" and carries the same meaning.
Here, the term "solder bump" generally refers to a solder layer formed on a bond pad. The solder layer typically has a round shape, hence the term "solder bump".
Here, the term "printed circuit board" generally refers to a planar platform comprising dielectric and metallization structures. The substrate mechanically supports and electrically couples one or more IC dies on a single platform, with encapsulation of the one or more IC dies by a moldable dielectric material. The substrate generally comprises solder bumps as bonding interconnects on both sides. One side of the substrate, generally referred to as the "die side", comprises solder bumps for chip or die bonding. The opposite side of the substrate, generally referred to as the "land side", comprises solder bumps for bonding the package to a printed circuit board.
Here, the term "assembly" generally refers to a grouping of parts into a single functional unit. The parts may be separate and are mechanically assembled into a functional unit, where the parts may be removable. In another instance, the parts may be permanently bonded together. In some instances, the parts are integrated together.
Throughout the specification, and in the claims, the term "connected" means a direct connection, such as an electrical, mechanical, or magnetic connection between the things that are connected, without any intermediary devices.
The term "coupled" means a direct or indirect connection, such as a direct electrical, mechanical, magnetic or fluidic connection between the things that are connected, or an indirect connection through one or more passive or active intermediary devices.
The term "circuit" or "module" may refer to one or more passive and/or active components that are arranged to cooperate with one another to provide a desired function.
The term "signal" may refer to at least one current signal, voltage signal, magnetic signal, or data/clock signal. The meaning of "a," "an," and "the" include plural references. The meaning of "in" includes "in" and "on."The vertical orientation is in the z-direction and it is understood that recitations of "top", "bottom", "above" and "below" refer to relative positions in the z-dimension with the usual meaning. However, it is understood that embodiments are not necessarily limited to the orientations or configurations illustrated in the figure.The terms "substantially," "close," "approximately," "near," and "about," generally refer to being within +/- 10% of a target value (unless specifically specified). Unless otherwise specified the use of the ordinal adjectives "first," "second," and "third," etc., to describe a common object, merely indicate that different instances of like objects to which are being referred and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking or in any other manner.For the purposes of the present disclosure, phrases "A and/or B" and "A or B" mean (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).Views labeled "cross-sectional", "profile" and "plan" correspond to orthogonal planes within a cartesian coordinate system. Thus, cross-sectional and profile views are taken in the x-z plane, and plan views are taken in the x-y plane. Typically, profile views in the x-z plane are cross-sectional views. Where appropriate, drawings are labeled with axes to indicate the orientation of the figure.As will be understood to those skilled in the art, integrated circuit assemblies may contain various mold, fill, and/or underfill materials. As these integrated circuit assemblies become ever smaller, it becomes challenging to prevent voids from forming within these materials, which may affect the reliability of the integrated circuit assemblies. Since integrated circuit assemblies are generally formed by electrically attaching integrated circuit dice on electronic substrates, the embodiments of the present description relate to injecting the mold, fill, and/or underfill materials through openings formed in the electronic substrate to fill voids that may form and/or to prevent the formation of the voids altogether.FIG. 
FIG. 1 illustrates an integrated circuit assembly 100 having at least one integrated circuit die (illustrated as at least two first level integrated circuit dice 120-1 attached to an electronic substrate 110 and at least one second level integrated circuit die 120-2 attached to the at least two first level integrated circuit dice 120-1), according to an embodiment of the present description.
The electronic substrate 110 may be any appropriate device, including, but not limited to, a passive substrate (such as a package substrate or interposer, a printed circuit board, and the like), or a combination of the passive substrate with an active device (not shown), such as a microprocessor, a chipset, a graphics device, a wireless device, a memory device, an application specific integrated circuit, combinations thereof, stacks thereof, or the like, embedded in the passive electronic substrate.
The electronic substrate 110 may comprise a plurality of dielectric material layers (not shown), which may include build-up films and/or solder resist layers, and may be composed of an appropriate dielectric material, including, but not limited to, bismaleimide triazine resin, fire retardant grade 4 material, polyimide material, silica filled epoxy material, glass reinforced epoxy material, and the like, as well as low-k and ultra low-k dielectrics (dielectric constants less than about 3.6), including, but not limited to, carbon doped dielectrics, fluorine doped dielectrics, porous dielectrics, organic polymeric dielectrics, and the like. The electronic substrate 110 may further include conductive routes 118 or "metallization" (shown in dashed lines) extending through the electronic substrate 110. The bond pads 116 on the first surface 112 of the electronic substrate 110 may be in electrical contact with the conductive routes 118, and the conductive routes 118 may extend through the electronic substrate 110 and be electrically connected to external components (not shown).
As will be understood by those skilled in the art, the conductive routes 118 may be a combination of conductive traces (not shown) and conductive vias (not shown) extending through the plurality of dielectric material layers (not shown). These conductive traces and conductive vias are well known in the art and are not shown in FIG. 1 for purposes of clarity. The conductive traces and the conductive vias may be made of any appropriate conductive material, including, but not limited to, metals, such as copper, silver, nickel, gold, aluminum, alloys thereof, and the like. As will be understood by those skilled in the art, the electronic substrate 110 may be a cored substrate or a coreless substrate.
The integrated circuit dice 120-1, 120-2 may be any appropriate active devices, including, but not limited to, microprocessors, chipsets, graphics devices, wireless devices, memory devices, application specific integrated circuits, or the like, and may be any appropriate passive devices, including, but not limited to, capacitors, resistors, inductors, and the like.
Each of the integrated circuit dice 120-1, 120-2 may include a first surface 122, an opposing second surface 124, and at least one side 126 extending between the first surface 122 and the second surface 124. Each of the integrated circuit dice 120-1, 120-2 may further include at least one bond pad 128 on the first surface 122 thereof.
As illustrated, the second surfaces 124 of the at least two first level integrated circuit dice 120-1 may be attached to the first surface 112 of the electronic substrate 110, and the second surface 124 of the second level integrated circuit die 120-2 may be attached to the first surfaces 122 of the first level integrated circuit dice 120-1. It is understood that the second level integrated circuit die 120-2 is positioned to avoid interference with the bond pads 128 on the first level integrated circuit dice 120-1. In the embodiment of the present description shown in FIG. 1, the first level integrated circuit dice 120-1 and the second level integrated circuit die 120-2 may be electrically attached to the electronic substrate 110 through at least one bond wire 130 extending between the bond pads 128 of the first level integrated circuit dice 120-1 and the bond pads 116 of the electronic substrate 110 and between the bond pads 128 of the second level integrated circuit die 120-2 and the bond pads 116 of the electronic substrate 110, respectively. Additionally, the first level integrated circuit dice 120-1 and the second level integrated circuit die 120-2 may be electrically attached to one another through at least one bond wire 130 extending between the bond pads 128 of the first level integrated circuit dice 120-1 and the bond pads 128 of the second level integrated circuit die 120-2.
An electrically-insulating mold material 140, such as an epoxy material, may be disposed over the integrated circuit dice 120-1, 120-2 and the electronic substrate 110, and may substantially surround each bond wire 130. The mold material 140 may provide structural integrity and may prevent contamination, as will be understood by those skilled in the art.
As shown in FIG. 1, the stack configuration of the first level integrated circuit dice 120-1 and the second level integrated circuit die 120-2 can result in at least one void 150 forming in areas where the mold material 140 does not flow, such as tunnels and blind cavities. In one embodiment shown in FIG. 1, the at least one void 150 may be defined by the first surface 112 of the electronic substrate 110, the sides 126 of the first level integrated circuit dice 120-1, and the second surface 124 of the second level integrated circuit die 120-2. The mold material 140 itself may additionally define the at least one void 150, as will be understood.
As shown in FIG. 1, the electronic substrate 110 may include at least one inlet opening 170 extending from the first surface 112 to an opposing second surface 114 of the electronic substrate 110. The inlet opening 170 may be used to introduce a fill material 160, such as an epoxy material, into the at least one void 150, and, thus, at least a portion of the fill material 160 will extend into the inlet opening 170. In one embodiment, the fill material 160 may be dispensed into the void 150 as a viscous liquid and then hardened with a curing process. It may be advantageous to form at least one vent opening 180 extending from the first surface 112 to the second surface 114 of the electronic substrate 110, so that the assembly can be flipped over, the fill material 160 dispensed into the void 150, and the ambient atmosphere vented out of the vent opening 180 as the fill material 160 fills the void 150. In one embodiment of the present description, the fill material 160 may be substantially the same material as the mold material 140.
In an embodiment of the present description, the fill material 160 may be injected under positive pressure into the inlet opening 170 from the second surface 114 of the electronic substrate 110. In an embodiment of the present description, the at least one inlet opening 170 and/or the at least one vent opening 180 may be made by any known process, including, but not limited to, laser drilling, ion ablation, etching, and the like. In a specific embodiment, the at least one inlet opening 170 and/or the at least one vent opening 180 may be formed as a plated through hole, as known in the art.
As shown in FIG. 2, the integrated circuit assembly 100 may comprise two first level integrated circuit dice 120-1, one second level integrated circuit die 120-2, one inlet opening 170, and four vent openings 180. It is noted that the mold material 140 (see FIG. 1) is not shown, and the second level integrated circuit die 120-2 and associated components are shown in shadow, for clarity.
As shown in FIG. 3, the integrated circuit assembly 100 may have any appropriate number of integrated circuit dice. In one embodiment, the integrated circuit assembly 100 may comprise four first level integrated circuit dice 120-1, one second level integrated circuit die 120-2, one inlet opening 170, and four vent openings 180. It is noted that the mold material 140 (see FIG. 1), the bond wires 130, and the associated bond pads are not shown for clarity.
Although FIG. 1 illustrates two levels of integrated circuit dice, the embodiments of the present description are not so limited and can include any number of levels. In one example, an embodiment of the present description has four levels of integrated circuit dice, i.e. first level dice 120-1, second level dice 120-2, third level dice 120-3, and fourth level die 120-4, as shown in FIG. 4.
FIG. 5 is a flow chart of a process 200 of fabricating an integrated circuit assembly. As set forth in block 210, an electronic substrate having a first surface and an opposing second surface may be formed. An inlet opening may be formed in the electronic substrate, wherein the inlet opening extends from the first surface to the second surface of the electronic substrate, as set forth in block 220. As set forth in block 230, at least two first level integrated circuit dice may be formed having a first surface and a second surface. The second surface of each of the at least two first level integrated circuit dice may be attached to the first surface of the electronic substrate, as set forth in block 240. As set forth in block 250, at least one second level integrated circuit die may be formed having a first surface and a second surface. The second surface of the at least one second level integrated circuit die may be attached to the first surface of at least one of the first level integrated circuit dice, wherein at least one void is defined by the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die, as set forth in block 260. As set forth in block 270, the electronic substrate may be oriented with the second surface gravitationally higher than the first surface thereof. A fill material may be dispensed from the second surface of the electronic substrate through the inlet opening and into the at least one void, as set forth in block 280. As set forth in block 290, the fill material may be cured.
FIG. 6 illustrates a further embodiment of the present description comprising an integrated circuit assembly 300 having at least one integrated circuit device 310 attached to the electronic substrate 110 in a configuration generally known as a flip-chip or controlled collapse chip connection ("C4") configuration. The integrated circuit device 310 may be any appropriate active device, as described with regard to the integrated circuit dice 120-1, 120-2 of FIG. 1.
In the embodiment of the present description shown in FIG. 6, the integrated circuit device 310 may be attached to the electronic substrate 110 with a plurality of device-to-substrate interconnects 320. In one embodiment of the present description, the device-to-substrate interconnects 320 may extend between the bond pads 116 on the first surface 112 of the electronic substrate 110 and bond pads 318 on a first surface 312 of the integrated circuit device 310.
In one embodiment, the device-to-substrate interconnects 320 may be solder balls formed from tin, lead/tin alloys (for example, 63% tin / 37% lead solder), or high tin content alloys (e.g. 90% or more tin, such as tin/bismuth, eutectic tin/silver, ternary tin/silver/copper, eutectic tin/copper, and similar alloys). The device-to-substrate interconnects 320 may be in electrical communication with integrated circuitry (not shown) within the integrated circuit device 310.
An electrically-insulating underfill material 330, such as an epoxy material, may be disposed between the integrated circuit device 310 and the electronic substrate 110 to substantially surround each device-to-substrate interconnect of the plurality of device-to-substrate interconnects 320. The underfill material 330 may provide structural integrity and may prevent contamination, as will be understood by those skilled in the art. As discussed with regard to FIG. 1, the electronic substrate 110 may include at least one inlet opening 170 extending from the first surface 112 to the second surface 114 of the electronic substrate 110. The inlet opening 170 may be used to introduce the underfill material 330 between the integrated circuit device 310 and the electronic substrate 110, and, thus, at least a portion of the underfill material 330 will extend into the inlet opening 170. In one embodiment, the underfill material 330 may be dispensed between the first surface 312 of the integrated circuit device 310 and the first surface 112 of the electronic substrate 110 as a viscous liquid and then hardened with a curing process. In an embodiment of the present description, the underfill material 330 may be injected under positive pressure into the inlet opening 170 from the second surface 114 of the electronic substrate 110, which reduces or eliminates capillary action as the driving force for the distribution of the underfill material 330.
FIG. 7 illustrates a view along line 7-7 of FIG. 6. As shown, the inlet opening 170 may be substantially centrally located within a substantially symmetrical array of the bond pads 116 of the electronic substrate 110. However, it is understood that the inlet opening 170 may be located in any position that achieves the shortest flow time of the underfill material 330, particularly when the bond pads 116 (and hence the device-to-substrate interconnects 320 (see FIG. 6)) have a non-symmetrical arrangement.
FIG. 8 illustrates an electronic or computing device 400 in accordance with one implementation of the present description.
The computing device 400 may include a housing 401 having a board 402 disposed therein. The computing device 400 may include a number of integrated circuit components, including but not limited to a processor 404, at least one communication chip 406A, 406B, volatile memory 408 (e.g., DRAM), non-volatile memory 410 (e.g., ROM), flash memory 412, a graphics processor or CPU 414, a digital signal processor (not shown), a crypto processor (not shown), a chipset 416, an antenna, a display (touchscreen display), a touchscreen controller, a battery, an audio codec (not shown), a video codec (not shown), a power amplifier (AMP), a global positioning system (GPS) device, a compass, an accelerometer (not shown), a gyroscope (not shown), a speaker, a camera, and a mass storage device (not shown) (such as a hard disk drive, compact disk (CD), digital versatile disk (DVD), and so forth). Any of the integrated circuit components may be physically and electrically coupled to the board 402. In some implementations, at least one of the integrated circuit components may be a part of the processor 404.
The communication chip enables wireless communications for the transfer of data to and from the computing device. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non-solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. The communication chip may implement any of a number of wireless standards or protocols, including but not limited to Wi-Fi (IEEE 802.11 family), WiMAX (IEEE 802.16 family), IEEE 802.20, long term evolution (LTE), Ev-DO, HSPA+, HSDPA+, HSUPA+, EDGE, GSM, GPRS, CDMA, TDMA, DECT, Bluetooth, derivatives thereof, as well as any other wireless protocols that are designated as 3G, 4G, 5G, and beyond. The computing device may include a plurality of communication chips. For instance, a first communication chip may be dedicated to shorter range wireless communications such as Wi-Fi and Bluetooth, and a second communication chip may be dedicated to longer range wireless communications such as GPS, EDGE, GPRS, CDMA, WiMAX, LTE, Ev-DO, and others.
The term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory to transform that electronic data into other electronic data that may be stored in registers and/or memory.
In one embodiment, at least one of the integrated circuit components may include an electronic substrate having a first surface and an opposing second surface, wherein the electronic substrate includes at least one inlet opening extending from the first surface to the second surface; at least one integrated circuit die attached to the electronic substrate; at least one void defined by the electronic substrate and the integrated circuit die; and a fill material within the at least one void, wherein a portion of the fill material extends into the inlet opening in the electronic substrate.
In various implementations, the computing device may be a laptop, a netbook, a notebook, an ultrabook, a smartphone, a tablet, a personal digital assistant (PDA), an ultra-mobile PC, a mobile phone, a desktop computer, a server, a printer, a scanner, a monitor, a set-top box, an entertainment control unit, a digital camera, a portable music player, or a digital video recorder.
In further implementations, the computing device may be any other electronic device that processes data.
It is understood that the subject matter of the present description is not necessarily limited to the specific applications illustrated in FIGs. 1-8. The subject matter may be applied to other integrated circuit devices and assembly applications, as well as any appropriate electronic application, as will be understood by those skilled in the art.
The following examples pertain to further embodiments, and specifics in the examples may be used anywhere in one or more embodiments, wherein Example 1 is an integrated circuit assembly comprising an electronic substrate having a first surface and an opposing second surface, wherein the electronic substrate includes at least one inlet opening extending from the first surface to the second surface; at least one integrated circuit die attached to the electronic substrate; at least one void defined by the electronic substrate and the integrated circuit die; and a fill material within the at least one void, wherein a portion of the fill material extends into the inlet opening in the electronic substrate.
In Example 2, the subject matter of Example 1 can optionally include a mold material on the electronic substrate and the at least one integrated circuit die.
In Example 3, the subject matter of Example 2 can optionally include the at least one void being further defined by the mold material.
In Example 4, the subject matter of any of Examples 1 to 3 can optionally include at least one vent opening extending from the first surface of the electronic substrate to the second surface of the electronic substrate.
In Example 5, the subject matter of any of Examples 1 to 4 can optionally include a portion of the fill material extending into the vent opening.
Example 6 is an electronic system comprising an electronic substrate having a first surface and an opposing second surface, wherein the electronic substrate includes at least one inlet opening extending from the first surface to the second surface; at least two first level integrated circuit dice having a first surface and an opposing second surface, wherein the second surface of each of the at least two first level integrated circuit dice is attached to the electronic substrate; at least one second level integrated circuit die having a first surface and an opposing second surface, wherein the second surface of the second level integrated circuit die is attached to the first surface of at least one of the first level integrated circuit dice; at least one void defined by the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die; and a fill material within the at least one void, wherein a portion of the fill material extends into the inlet opening in the electronic substrate.
In Example 7, the subject matter of Example 6 can optionally include the electronic substrate further comprising at least one vent opening extending from the first surface of the electronic substrate to the second surface of the electronic substrate.
In Example 8, the subject matter of Example 7 can optionally include a portion of the fill material extending into the vent opening.
In Example 9, the subject matter of any of Examples 6 to 8 can optionally include a mold material on the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die.
In Example 10, the subject matter of Example 9 can optionally include the at least one void being further defined by the mold material.
being further defined by the mold material.
In Example 11, the subject matter of Example 9 can optionally include the fill material being substantially the same as the mold material.
In Example 12, the subject matter of any of Examples 6 to 11 can optionally include at least one of the at least two first level integrated circuit dice and the at least one second level integrated circuit die being electrically attached to the electronic substrate with at least one bond wire.
In Example 13, the subject matter of any of Examples 6 to 12 can optionally include at least one of the at least two first level integrated circuit dice being electrically attached to the at least one second level integrated circuit die with at least one bond wire.
Example 14 is a method of fabricating an integrated circuit assembly comprising forming an electronic substrate having a first surface and an opposing second surface; forming an inlet opening in the electronic substrate, wherein the inlet opening extends from the first surface of the electronic substrate to the second surface of the electronic substrate; forming at least two first level integrated circuit dice having a first surface and an opposing second surface; attaching the second surface of each of the at least two first level integrated circuit dice to the first surface of the electronic substrate; forming at least one second level integrated circuit die having a first surface and an opposing second surface; attaching the second surface of the at least one second level integrated circuit die to the first surface of at least one of the first level integrated circuit dice, wherein at least one void is defined by the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die; orienting the electronic substrate with the second surface gravitationally higher than the first surface thereof; dispensing a fill material from the second surface of the electronic substrate through the inlet opening and into the at least one void; and curing the fill material.
In Example 15, the subject matter of Example 14 can optionally include a portion of the fill material extending into the inlet opening.
In Example 16, the subject matter of any of Examples 14 and 15 can optionally include forming at least one vent opening extending from the first surface of the electronic substrate to the second surface of the electronic substrate.
In Example 17, the subject matter of Example 16 can optionally include a portion of the fill material extending into the at least one vent opening.
In Example 18, the subject matter of any of Examples 14 to 17 can optionally include forming a mold material on the electronic substrate, the at least two first level integrated circuit dice, and the second level integrated circuit die.
In Example 19, the subject matter of Example 18 can optionally include the at least one void being further defined by the mold material.
In Example 20, the subject matter of any of Examples 18 to 19 can optionally include the fill material being substantially the same as the mold material.
In Example 21, the subject matter of any of Examples 14 to 20 can optionally include electrically attaching at least one of the at least two first level integrated circuit dice and the at least one second level integrated circuit die to the electronic substrate with at least one bond wire.
In Example 22, the subject matter of any of Examples 14 to 21 can optionally include electrically attaching at least one of the at least two first level integrated circuit
dice to the at least one second level integrated circuit die with at least one bond wire.
Having thus described in detail embodiments of the present invention, it is understood that the invention defined by the appended claims is not to be limited by particular details set forth in the above description, as many apparent variations thereof are possible without departing from the spirit or scope thereof.
A method for managing data between a virtual machine and a bus controller includes transmitting an input output (IO) request from the virtual machine to a service virtual machine that owns the bus controller. According to an alternate embodiment, managing data between a virtual machine and a bus controller includes trapping a register access made by the virtual machine. A schedule is generated to be implemented by the bus controller. Status is returned to the virtual machine via a virtual host controller. Other embodiments are described and claimed.
CLAIMS
What is claimed is:
1. A method for managing data between a virtual machine and a bus controller, comprising: trapping a register access made by the virtual machine; generating a schedule to be implemented by the bus controller; and returning status to the virtual machine via a virtual bus controller.
2. The method of Claim 1, wherein generating the schedule is performed in response to receiving a notification that one of a transfer descriptor and a queue head has been modified or added.
3. The method of Claim 1, wherein generating the schedule comprises pointing transfer descriptors in frames from a virtual machine to transfer descriptors in corresponding frames from another virtual machine.
4. The method of Claim 3, wherein pointing is indicated on frame lists in the virtual machines.
5. The method of Claim 3, wherein pointing is indicated on copies of frame lists in a virtual machine monitor.
6. A method for managing data between a virtual machine and a bus controller, comprising: transmitting an input output (IO) request from the virtual machine to a service virtual machine that owns the bus controller.
7. The method of Claim 6, further comprising: processing the IO request at the service virtual machine; and transmitting status from the service virtual machine to the virtual machine.
8. The method of Claim 6, wherein the data is asynchronous data.
9. The method of Claim 6, wherein the data is isochronous data.
10. The method of Claim 6, wherein transmitting the IO request comprises using a remote procedure call.
11. The method of Claim 6, wherein transmitting the IO request comprises using a shared memory communication.
12. An article of manufacture comprising a machine accessible medium including sequences of instructions, the sequences of instructions including instructions which when executed cause the machine to perform: trapping a register access made by a virtual machine; generating a schedule to be implemented by a bus controller; and returning status to the virtual machine via a virtual bus controller.
13. The article of manufacture of Claim 12, wherein generating the schedule is performed in response to receiving a notification that one of a transfer descriptor and a queue head has been modified or added.
14. The article of manufacture of Claim 12, wherein generating the schedule comprises pointing transfer descriptors in frames from a virtual machine to transfer descriptors in corresponding frames from another virtual machine.
15. The article of manufacture of Claim 14, wherein pointing is indicated on frame lists in the virtual machines.
16. The article of manufacture of Claim 14, wherein pointing is indicated on copies of frame lists in a virtual machine monitor.
17. A computer system, comprising: a bus; a bus controller to control the bus; a memory; and a processor to implement a bus module to trap a register access made by a virtual machine and generate a schedule to be implemented by the bus controller.
18. The computer system of Claim 17, wherein the bus module comprises a trap handler.
19. The computer system of Claim 17, wherein the bus module comprises an interrupt handler.
20. The computer system of Claim 18, wherein the trap handler includes a schedule trap unit to point transfer descriptors in frames from the virtual machine to transfer descriptors in corresponding frames from another virtual machine.
METHOD AND APPARATUS FOR SUPPORTING UNIVERSAL SERIAL BUS DEVICES IN A VIRTUALIZED ENVIRONMENT
FIELD
[0001] Embodiments of the present invention relate to virtualization. More specifically, embodiments of the present invention relate to methods and apparatus for supporting Universal Serial Bus (USB) devices in virtualized environments.
BACKGROUND
[0002] Virtualization is a technique in which a computer system is partitioned into multiple isolated virtual machines (VMs), each of which appears to the software within it to be a complete computer system. A conventional virtual machine manager (VMM) may run on a computer to present the abstraction of one or more VMs or guests to other software. Each VM may function as a self-contained platform that runs its own software stack, including an operating system (OS) and applications. Collectively this software stack is referred to as "guest software."
[0003] Guest software running on a VM expects to operate as if it were running on a dedicated computer. For example, the guest software expects to control various computer operations and have access to physical (i.e., hardware) resources during these operations. The VMM controls which physical devices are assigned to each VM and also implements virtual devices which are visible to the VMs. If a physical device is fully assigned to a single VM, it is not available to the other VMs in the computer system. If a physical device is to be shared by more than one VM, the VMM typically implements a virtual device for each VM and arbitrates access of the virtual devices to the physical device.
[0004] USB 2.0 (Universal Serial Bus Revision 2.0 Specification, published 2002) is an external bus that supports data rates of up to 480 Mbps. USB 2.0 is an extension of USB 1.1 (Universal Serial Bus Revision 1.1 Specification, published 1996) and is fully compatible with USB 1.1. Current virtualization software solutions provide limited support for USB 2.0.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] The features and advantages of embodiments of the present invention are illustrated by way of example and are not intended to limit the scope of the embodiments of the present invention to the particular embodiments shown.
[0006] Figure 1 illustrates a computer system according to an embodiment of the present invention.
[0007] Figure 2 is a block diagram that illustrates a virtualized environment in which an embodiment of the invention resides according to a first embodiment.
[0008] Figure 3 is a block diagram that illustrates a virtualized environment in which an embodiment of the invention resides according to a second embodiment.
[0009] Figure 4 is a block diagram that illustrates a USB module according to an embodiment of the present invention.
[0010] Figure 5a illustrates an example of asynchronous schedules from VMs.
[0011] Figure 5b illustrates an example of how asynchronous schedules may be linked according to an embodiment of the present invention.
[0012] Figure 5c illustrates an example of how asynchronous schedules may be copied and merged according to an embodiment of the present invention.
[0013] Figure 6a illustrates an example of isochronous schedules from VMs.
[0014] Figure 6b illustrates an example of how isochronous schedules may be linked according to an embodiment of the present invention.
[0015] Figure 6c illustrates an example of how isochronous schedules may be copied and merged according to an embodiment of the present invention.
[0016] Figure 7 is a flow chart illustrating a method for generating an asynchronous schedule for a host controller according to an embodiment of the present invention.
[0017] Figure 8 is a flow chart illustrating a method for generating an isochronous schedule for a host controller according to an embodiment of the present invention.
DETAILED DESCRIPTION
[0018] In the following description, for purposes of explanation, specific nomenclature is set forth to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that specific details in the description may not be required to practice the embodiments of the present invention. In other instances, well-known circuits, devices, and programs are shown in block diagram form to avoid obscuring embodiments of the present invention unnecessarily.
[0019] Figure 1 is a block diagram of an exemplary computer system 100 according to an embodiment of the present invention. The computer system 100 includes a processor 101 that processes data signals. The processor 101 may be a complex instruction set computer microprocessor, a reduced instruction set computing microprocessor, a very long instruction word microprocessor, a processor implementing a combination of instruction sets, or other processor device. Figure 1 shows the computer system 100 with a single processor. However, it is understood that the computer system 100 may operate with multiple processors. Additionally, each of the one or more processors may support one or more hardware threads. The processor 101 is coupled to a CPU bus 110 that transmits data signals between processor 101 and other components in the computer system 100.
[0020] The computer system 100 includes a memory 113. The memory 113 may be a dynamic random access memory device, a static random access memory device, read-only memory, and/or other memory device. The memory 113 may store instructions and code represented by data signals that may be executed by the processor 101. A cache memory 102 may reside inside processor 101 that stores data signals stored in memory 113. The cache 102 speeds access to memory by the processor 101 by taking advantage of its locality of access.
In an alternate embodiment of the computer system 100, the cache resides external to the processor 101. A bridge memory controller 111 is coupled to the CPU bus 110 and the memory 113. The bridge memory controller 111 directs data signals between the processor 101, the memory 113, and other components in the computer system 100 and bridges the data signals between the CPU bus 110, the memory 113, and IO bus 120.
[0021] The IO bus 120 may be a single bus or a combination of multiple buses. The IO bus 120 provides communication links between components in the computer system 100. A network controller 121 is coupled to the IO bus 120. The network controller 121 may link the computer system 100 to a network of computers (not shown) and supports communication among the machines. A display device controller 122 is coupled to the IO bus 120. The display device controller 122 allows coupling of a display device (not shown) to the computer system 100 and acts as an interface between the display device and the computer system 100. Alternatively, the display device controller 122 may be connected directly to bridge 111.
[0022] IO bus 130 may be a single bus or a combination of multiple buses. IO bus 130 provides communication links between components in the computer system 100. A data storage device 131 is coupled to the IO bus 130. The data storage device 131 may be a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device or other mass storage device. An input interface 132 is coupled to the IO bus 130. The input interface 132 may be, for example, a keyboard and/or mouse controller or other input interface. The input interface 132 may be a dedicated device or can reside in another device such as a bus controller or other controller. The input interface 132 allows coupling of an input device to the computer system 100 and transmits data signals from an input device to the computer system 100. A camera 133 is coupled to IO bus 130. The camera 133 operates to capture images that may be displayed on a display device or stored in memory 113.
[0023] A bus bridge 123 couples IO bus 120 to IO bus 130. The bus bridge 123 operates to buffer and bridge data signals between IO bus 120 and IO bus 130. The bus bridge 123 includes a bus controller 124. According to an embodiment of the computer system 100, IO bus 130 is a USB 2.0 bus and the bus controller 124 is a host controller (USB host controller). The host controller 124 controls IO bus 130 by executing a schedule of tasks provided. The host controller 124 also sends out packets on IO bus 130, looks for status, and provides a register interface to software.
[0024] According to an embodiment of the present invention, the processor 101 executes instructions stored in memory 113 that include virtualization software. The virtualization software supports virtualization on the computer system 100 and usage of input output devices, such as USB 2.0 devices, in a virtualized environment. In one embodiment, virtualization is performed at the USB request buffer level where operating systems in VMs (guest operating systems) run virtual root hub drivers instead of USB system software. The virtual root hub drivers communicate with a dedicated service VM which runs the USB system software. In an alternate embodiment, virtualization is performed at the register level where guest operating systems run legacy USB system software. A USB module resides in a VMM that performs trap and interrupt handling.
The virtualization software may support two or more USB 2.0 devices in a virtualized environment and may support isochronous data transfer.
[0025] Figure 2 is a block diagram that illustrates a virtualized environment 200 according to a first embodiment of the present invention. The virtualized environment 200 includes a VMM 210. The VMM 210 interfaces a physical machine. The physical machine may include components of a computer system such as, for example, one or more processors, a memory, buses, a host controller, and various IO devices. According to an embodiment of the present invention, the physical machine may be implemented by the computer system 100 shown in Figure 1 or a computer system having components similar to those shown in Figure 1. The VMM 210 facilitates one or more VMs 220 to be run. According to an embodiment of the present invention, the VMM 210 may be a sequence of instructions stored in a memory of a computer system. The VMM 210 manages and mediates computer system resources in the physical machine between the VMs 220 and allows the isolation of or data sharing between VMs 220. The VMM 210 achieves this isolation or sharing by virtualizing resources in the physical machine and exporting a virtual hardware interface (i.e., a VM) that could reflect an underlying architecture of the physical machine, a variant of the physical machine, or an entirely different physical machine.
[0026] The virtualized environment 200 includes one or more VMs 221-223 (collectively shown as 220). According to an embodiment of the present invention, a VM may be described as an isolated model of a machine including, but not limited to, a replica of the physical machine, a subset of the physical machine, or a model of an entirely different machine. A VM may include the resources of the computer system in the physical machine, a subset of the resources of the computer system in the physical machine, or entirely virtual resources not found in the physical machine.
[0027] According to an embodiment of the present invention, the VMM 210 has control of the physical machine and creates VMs 220, each of which behaves like a physical machine that can run its own operating system (OS). VMs 221-223 may run operating systems (guest operating systems) 231-233 respectively, where the operating systems 231-233 may be unique to one another. To maximize performance, the VMM 210 allows a VM to execute directly on the resources of the computer system in the physical machine when possible. The VMM 210 may take control, however, whenever a VM attempts to perform an operation that may affect the operation of other VMs, of the VMM 210, or of resources in the physical machine. The VMM 210 may emulate the operation and may return control to the VM when the operation is completed. One or more applications (guest applications) may be run on each of the VMs 221-223.
[0028] VMs 221 and 222 include client drivers 241 and 242 respectively. The client drivers 241 and 242 support input output devices coupled to an input output bus. According to an embodiment of the present invention, the client drivers 241 and 242 support USB 2.0 devices. VMs 221 and 222 also include virtual root hub (VHub) drivers 251 and 252. The client drivers 241 and 242 submit IO requests to their respective VHub drivers 251 and 252. The VHub drivers 251 and 252 transport the IO requests to an entity that controls a host controller for the physical machine.
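To make the request path just described concrete (client driver to VHub driver to the entity that controls the host controller), the following C sketch models an IO request and a per-VHub queue of outstanding requests. It is purely illustrative: the structure and function names (usb_io_request, vhub_submit, and so on) are hypothetical and do not appear in this description, and the actual inter-VM transport (discussed in the next paragraph) is deliberately left out.
/* Hypothetical sketch of the VHub request-forwarding path described above. */
#include <stddef.h>
#include <stdint.h>
struct usb_io_request {
    uint8_t  device_addr;   /* target USB device address */
    uint8_t  endpoint;      /* target endpoint on that device */
    void    *buffer;        /* guest memory to read from or write into */
    size_t   length;        /* transfer size in bytes */
    int      is_read;       /* nonzero for device-to-host transfers */
    int      status;        /* filled in when the request completes */
};
/* One outstanding-request queue per VHub driver, as the text suggests. */
#define VHUB_QUEUE_DEPTH 32
struct vhub_queue {
    struct usb_io_request *slots[VHUB_QUEUE_DEPTH];
    unsigned head, tail;
};
/* Enqueue a request for transport to the service VM; returns 0 on success,
 * -1 if the queue is full. The transport itself (message passing, sockets,
 * shared memory, remote procedure calls) is not modeled here. */
int vhub_submit(struct vhub_queue *q, struct usb_io_request *req)
{
    unsigned next = (q->tail + 1) % VHUB_QUEUE_DEPTH;
    if (next == q->head)
        return -1;              /* queue full: caller must retry later */
    q->slots[q->tail] = req;
    q->tail = next;
    return 0;
}
A bounded ring of pointers is used here only because it makes the "plurality of queues for storing outstanding IO requests" mentioned below easy to picture; any of the transport mechanisms listed in the next paragraph could sit behind it.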
[0029] The virtualized environment 200 includes a VM 223 that operates as a dedicated service VM. According to an embodiment of the virtualized environment 200, the service VM 223 controls the host controller for the physical machine and operates to virtualize support for an IO bus such as USB 2.0. The service VM 223 includes a USB server 243. The USB server 243 interfaces with the VHub drivers 251 and 252 and receives IO requests from the VHub drivers 251 and 252. According to an embodiment of the present invention, the VHub drivers 251 and 252 and the USB server 243 include a plurality of queues for storing outstanding IO requests to be processed. The transport between the VHub drivers 251 and 252 and the USB server 243 may be implemented using messaging mechanisms and techniques such as message passing, client-server (sockets), shared memory buffers (inter-VM communication), remote procedure calls, or other procedures.
[0030] The service VM 223 includes a hub driver 253. The hub driver 253 detects the attachment and removal of USB 2.0 devices from a USB 2.0 bus. Upon a device attach event, the hub driver 253 may query a device to determine its type and its characteristics. Based upon policy, the hub driver 253 may select which VM the device should attach to. The service VM 223 sends a message to the VHub driver corresponding to the selected VM. The VHub driver may trigger plug-and-play events that lead to the loading of appropriate client drivers.
[0031] The service VM 223 includes a USB driver 263. The USB driver 263 manages the USB 2.0 bus. The USB driver 263 makes policy decisions for the USB 2.0 bus and allocates bandwidth for devices on the USB 2.0 bus.
[0032] The service VM 223 includes a host controller (HC) driver 273. The HC driver 273 interfaces with a host controller and sets up a schedule that the host controller executes. The schedule may include one or more transfer descriptors (TDs). TDs are IO requests that may include an address in memory in which to start a transfer, a size of the memory to transfer, and a destination USB device and endpoint address. The schedule may also include one or more queue heads (QHs) which point to chains of TDs. According to an embodiment of the present invention, the host controller driver 273 generates a schedule for the host controller that includes both isochronous (periodic) data and asynchronous (bulk) data. The asynchronous data may be scheduled to be executed only after the isochronous data has been executed for a frame. According to an embodiment of the present invention, a frame may be a unit of time during which zero or more data packets may be transmitted.
[0033] On completion of an IO request, the USB server 243 returns status values to the appropriate VHub driver. The VHub driver in turn completes the IO request from its corresponding client driver. Interrupts from the host controller are handled by the service VM 223. The service VM 223 also handles device attach events. By implementing USB system software such as the hub driver 253, USB driver 263, and HC driver 273 on the dedicated service VM 223, only one copy of the USB system software needs to be run in the virtualized environment 200. This allows USB 2.0 parameters such as device identifiers, bus power, and isochronous bandwidth to be managed centrally by the service VM 223.
[0034] Figure 3 is a block diagram that illustrates a virtualized environment 300 according to a second embodiment of the present invention. The virtualized environment 300 includes a VMM 310.
According to an embodiment of the present invention, the VMM 310 may include properties that are similar to, and perform some procedures that are similar to, those described with respect to the VMM 210 in Figure 2. The VMM 310 interfaces a physical machine. The physical machine may be one that is similar to the physical machine described with respect to Figure 2. The VMM 310 includes a plurality of virtual bus controllers. According to an embodiment of the present invention where USB is utilized, the virtual bus controllers may be implemented with virtual host controllers (V Host Controllers) 381 and 382. The virtual host controllers 381 and 382 are presented to VMs in the virtualized environment 300. VMs in the virtualized environment 300 communicate with the virtual host controllers 381 and 382 as if they were the actual host controller in the physical machine.
[0035] The VMM 310 includes a bus module. According to an embodiment of the present invention, the bus module may be implemented with USB module 390. The USB module 390 may be a sequence of instructions and associated memory. The USB module 390 controls the host controller in the physical machine and maintains a schedule, called the active schedule, that is executed by the host controller. According to an embodiment of the virtualized environment, the USB module 390 traps accesses made by VMs to the virtual host controllers 381 and 382. The USB module 390 may implement the semantics of registers, update the state of the virtual host controllers 381 and 382, and return status of the virtual host controllers 381 and 382. The USB module 390 may also trap accesses to pages that include a schedule. These traps may be implemented as page faults, for example. The pages may include a periodic frame list, QHs, and/or TDs. When a VM updates QHs or TDs, the USB module 390 updates the active schedule. Status information from the active schedule in the USB module 390 may be copied back into a schedule in a VM. The USB module 390 may also generate interrupts in the VM as required by the state of a virtual host controller.
[0036] The virtualized environment 300 includes one or more VMs 321-322 (collectively shown as 320). According to an embodiment of the present invention, the VMM 310 has control of the physical machine and creates VMs 320, each of which behaves like a physical machine that can run its own operating system (OS). VMs 321-322 may run operating systems (guest operating systems) 331-332 respectively, where the operating systems 331-332 may be unique to one another. One or more applications (guest applications) may be run on each of the VMs 321-322.
[0037] The VMs 321 and 322 include client drivers 341 and 342 respectively. The client drivers 341 and 342 support input output devices coupled to an input output bus. According to an embodiment of the present invention, the client drivers 341 and 342 support USB 2.0 devices. The client drivers 341 and 342 generate IO requests to access USB 2.0 devices.
[0038] The VMs 321 and 322 include hub drivers 351 and 352 respectively. The hub drivers 351 and 352 detect the attachment and removal of USB 2.0 devices from a USB 2.0 bus. Upon a device attach event, the hub drivers 351 and 352 may query a device to determine its type and its characteristics.
[0039] The VMs 321 and 322 include USB drivers 361 and 362 respectively. The USB drivers 361 and 362 manage the USB 2.0 bus. The USB drivers 361 and 362 make policy decisions for the USB 2.0 bus and allocate bandwidth for devices on the USB 2.0 bus.
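A minimal sketch of how the trapped register accesses described in paragraph [0035] might be dispatched is shown below. The register offsets follow EHCI operational-register naming (USBCMD, USBSTS, ASYNCLISTADDR), but the handler structure and state layout are assumptions for illustration, not taken from this description. The point it makes is that guest reads and writes touch only the virtual controller's state, never the physical registers.
/* Hypothetical dispatch for trapped guest accesses to a virtual host controller. */
#include <stdint.h>
enum { REG_USBCMD = 0x00, REG_USBSTS = 0x04, REG_ASYNCLISTADDR = 0x18 };
struct vhc_state {
    uint32_t usbcmd;          /* virtual command register */
    uint32_t usbsts;          /* virtual status register */
    uint32_t async_list_addr; /* recorded, never written to hardware */
};
/* Called by the VMM when a guest write to the virtual controller traps. */
void vhc_handle_write(struct vhc_state *vhc, uint32_t offset, uint32_t value)
{
    switch (offset) {
    case REG_USBCMD:
        vhc->usbcmd = value;            /* update virtual state only */
        break;
    case REG_ASYNCLISTADDR:
        /* List addresses may simply be recorded rather than forwarded to the
         * physical controller, an approach this description later mentions
         * for its register write trap unit. */
        vhc->async_list_addr = value;
        break;
    default:
        break;                          /* other registers elided here */
    }
}
/* Guest reads return the virtual state, not the physical registers. */
uint32_t vhc_handle_read(const struct vhc_state *vhc, uint32_t offset)
{
    switch (offset) {
    case REG_USBCMD:        return vhc->usbcmd;
    case REG_USBSTS:        return vhc->usbsts;
    case REG_ASYNCLISTADDR: return vhc->async_list_addr;
    default:                return 0;
    }
}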
[0040] The VMs 321 and 322 include host controller (HC) drivers 371 and 372 respectively. The HC drivers 371 and 372 interface with virtual host controllers 381 and 382, respectively. Each host controller driver sets up a schedule for its virtual host controller to execute. The schedule may include TDs and/or QHs that describe activities on each frame of the bus associated with the host controller. According to an embodiment of the present invention, the host controller drivers 371 and 372 generate a schedule for the host controller that includes both isochronous data and asynchronous data.
[0041] It should be appreciated that instead of having the USB module 390 trap every VM access that generates or modifies a TD for isochronous data transfers, the host controller drivers 371 and 372 may notify the USB module 390 after one or more TDs have been generated or modified by their corresponding VMs. This allows the USB module 390 to process isochronous schedules without having to trap every VM access that generates or modifies a TD, which reduces overhead and is more efficient. According to an embodiment of the present invention, notification may be provided to the USB module 390 when an endpoint is opened or closed or when an isochronous transfer is set up. The notification may include information about a new endpoint (e.g., device and endpoint number) or information about the new transfer (e.g., device, endpoint numbers, and start and end frames).
[0042] By implementing virtual host controllers and a USB module in the VMM, legacy USB system software, such as client drivers, hub drivers, USB drivers and host controller drivers, may be run on the VMs in a virtualized environment. One benefit of host controller register virtualization is that the VMMs can maintain binary legacy compatibility and run legacy guest binaries.
[0043] Figure 4 is a block diagram that illustrates a USB module 400 according to an embodiment of the present invention. The USB module 400 may be used to implement the USB module 390 shown in Figure 3. The USB module 400 includes a trap handler 410. The trap handler 410 manages register accesses made by a VM. The trap handler 410 includes a schedule trap unit 411. The schedule trap unit 411 traps QH or TD writes and reads by VMs as virtual host controller drivers set up schedules for virtual host controllers. According to an embodiment of the present invention, the schedule trap unit 411 may link schedules generated by virtual host controller drivers together by modifying QHs and TDs in place in the VMs to generate a schedule for the host controller in the physical machine. Alternatively, the schedule trap unit 411 may copy the schedules into the USB module 400 and modify (merge) the copy of the schedules to generate a schedule for the host controller in the physical machine. When a VM updates its schedule, the schedule trap unit 411 may perform the linking or copying. When linking is performed and a VM attempts to read back a schedule, the schedule trap unit 411 returns an expected value that the VM set up. In this embodiment, the schedule trap unit 411 may manage and store expected values for the VM. When copying and merging is performed, the schedule trap unit 411 may store the copied and merged copy of the schedule. The schedule trap unit 411 also performs address translation and USB device address translation to support the generation of an active schedule for the host controller in the physical machine.
[0044] The trap handler 410 includes a register read trap unit 412.
The register read trap unit 412 traps status register reads made by a VM. Status registers may be read to identify various states of a USB 2.0 bus. States indicated by status registers may include, for example, the health of a bus, the presence of errors, the presence of IO devices at ports, and whether a transaction has been completed. The register read trap unit 412 performs status virtualization by returning an appropriate status of a VM that corresponds to the status register read.
[0045] The trap handler 410 includes a register write trap unit 413. The register write trap unit 413 traps register writes made by a VM. Registers may be written by a VM to effectuate actions to be performed by a host controller. The register write trap unit 413 manages the register writes to allow a single host controller on the physical machine to be shared by a plurality of VMs. According to an embodiment of the present invention, isochronous and asynchronous list addresses are recorded but not written into registers.
[0046] The USB module 400 includes an interrupt handler 420. The interrupt handler 420 manages interrupts made to a processor by a host controller on the physical machine. The interrupt handler 420 includes a USB interrupt unit 421. For interrupts generated by the host controller to indicate that work on a schedule has been completed, the USB interrupt unit 421 identifies which VM submitted the work and generates an interrupt to the identified VM.
[0047] The interrupt handler 420 includes a status interrupt unit 422. For interrupts generated by the host controller to indicate that a device has been attached to the physical machine, the status interrupt unit 422 determines which VM to assign the device to. According to an embodiment of the interrupt handler 420, the status interrupt unit 422 makes this determination based on a port number, a device type, a device serial number or other criteria. The status interrupt unit 422 generates an interrupt to the VM.
[0048] The interrupt handler 420 includes an error interrupt unit 423. According to an embodiment of the interrupt handler 420, for interrupts generated by the host controller to indicate that an error has occurred, the error interrupt unit 423 determines whether the error is a global error or a local error. If the interrupt was generated in response to a global error, the host controller is stopped. If the interrupt was generated in response to a local error caused by a TD, the error interrupt unit 423 may prompt the host controller to retry the TD or retire the TD.
[0049] It should be appreciated that the USB module 400 may include other components such as components for performing memory allocation and deallocation, and initiating asynchronous and isochronous schedules.
[0050] Figure 5a illustrates a graphical representation of exemplary asynchronous schedules generated by host controller drivers in VMs. Schedule 510 represents a first asynchronous schedule from a first VM. Schedule 510 includes a plurality of QHs 512-513. Each of the queue heads 512-513 has a chain of TDs 514-515 that may include one or more TDs that hangs from it. Schedule 510 includes a dummy QH (H) 511 that represents the beginning of the schedule. Schedule 520 represents a second asynchronous schedule from a second VM. Schedule 520 includes a plurality of QHs 522-524. Each of the QHs 522-524 has a chain of TDs 525-527 respectively that hangs from it. Schedule 520 includes a dummy QH (H) 521 that represents the beginning of the schedule.
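Before turning to the linking of Figure 5b, the following C sketch models the per-VM asynchronous schedules just described and performs the circular linking explained in the next paragraph: each VM's last QH is pointed at the next VM's first real QH, and the last VM's last QH wraps around to the first VM's dummy QH. The types and function names are illustrative assumptions; real EHCI queue heads are DMA-visible structures with horizontal link pointers, reduced here to a bare next field.
/* Simplified model of the asynchronous-schedule linking of Figure 5b. */
#include <stddef.h>
struct qh {
    struct qh *next;   /* horizontal link to the next queue head */
    /* transfer descriptor chain, endpoint fields, etc. elided */
};
/* Per-VM schedule: a list headed by a dummy QH, with a known last QH. */
struct vm_async_schedule {
    struct qh *dummy;  /* the dummy QH (H) marking the schedule start */
    struct qh *last;   /* the last QH in this VM's list */
};
/* Link n per-VM schedules into one circular active schedule: each VM's
 * last QH points at the next VM's first real QH (bypassing that VM's
 * dummy head), and the final VM's last QH wraps to the first VM's dummy. */
void link_async_schedules(struct vm_async_schedule *vms, int n)
{
    for (int i = 0; i < n; i++) {
        if (i == n - 1)
            vms[i].last->next = vms[0].dummy;           /* wrap to first dummy */
        else
            vms[i].last->next = vms[i + 1].dummy->next; /* first real QH of next VM */
    }
}
As the text notes, the same pointer adjustments can either be made in place in the VMs' memory or applied to a VMM-held copy of the schedules; only where the writes land differs.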
[0051] Figure 5b illustrates an example of how asynchronous schedules may be linked according to an embodiment of the present invention. According to an embodiment of the present invention, linking asynchronous schedules is achieved by pointing the last QH in each VM to the first QH of the next VM and pointing the last QH of the last VM to the first VM's dummy QH. Schedule 530 represents a linked asynchronous schedule. As shown, the last QH 513 from the first VM is pointed to the first QH 522 of the second VM. The last QH 524 in the second VM is pointed to the dummy QH 511 of the first VM. An asynchronous list address register (async list address) may be a register that includes the address of a next asynchronous queue head to be executed. The linking illustrated may be performed by the schedule trap unit 411 shown in Figure 4. In this embodiment, the schedules are linked by modifying pointers associated with the QHs stored in the VM. It should be appreciated that although only two VMs are shown, the example can be generalized to n VMs, where n can be any number.
[0052] Figure 5c illustrates an example of how asynchronous schedules may be copied and merged according to an embodiment of the present invention. According to an embodiment of the present invention, copying and merging of asynchronous schedules also involves pointing the last QH in each VM to the first QH of the next VM and pointing the last QH of the last VM to the first VM's dummy QH. However, instead of linking schedules in place in the VMs, a VMM makes a copy of the schedules and merges the copy. Schedule 540 represents a merged asynchronous schedule. As shown, the last QH 513 from the first VM is pointed to the first QH 522 of the second VM. The last QH 524 in the second VM is pointed to the dummy QH 511 of the first VM. An asynchronous list address register (async list address) may be a register that includes the address of a next asynchronous queue head to be executed. The copying and merging illustrated may be performed by the schedule trap unit 411 shown in Figure 4.
[0053] Figure 6a illustrates a graphical representation of exemplary isochronous schedules generated by host controller drivers in VMs. Schedule 610 represents a first isochronous schedule from a first VM. The schedule 610 includes a periodic frame list 611 that includes a list of time slots or frames. Each frame may have zero or more TDs scheduled for execution. TDs 621-622 correspond to a first frame 620 in the periodic frame list 611. TDs 631-632 correspond to a second frame 630 in the periodic frame list 611. Interrupt tree 640 includes a plurality of QHs 641-643 that correspond to polling rates. TDs 621-622 are associated with QH 642, which has a polling rate of 4 milliseconds, and TDs 631-632 are associated with QH 641, which has a polling rate of 8 milliseconds, as shown. Schedule 650 represents a second isochronous schedule from a second VM. The schedule 650 includes a periodic frame list 651 that includes a list of time slots or frames. Each frame may have zero or more TDs scheduled for execution. TDs 661-662 correspond to a first frame 660 in the periodic frame list 651. TDs 671-672 correspond to a second frame 670 in the periodic frame list 651. Interrupt tree 680 includes a plurality of QHs 681-683 that correspond to polling rates.
TDs 661-662 are associated with QH 682, which has a polling rate of 4 milliseconds, and TDs 671-672 are associated with QH 681, which has a polling rate of 8 milliseconds, as shown.
[0054] Figure 6b illustrates an example of how isochronous schedules may be linked according to an embodiment of the present invention. According to an embodiment of the present invention, linking isochronous schedules is achieved by forming a linked list of the isochronous TDs in each frame, end to end. If, for example, the host controller drivers are assumed to be constrained to only a select set of periods, the QHs for the same period may be respectively merged. As shown, TD 621 points to TD 622, TD 622 points to TD 661, and TD 661 points to TD 662. TD 631 points to TD 632, TD 632 points to TD 671, and TD 671 points to TD 672. QH 641 points to QH 681. QH 681 points to QH 642. QH 642 points to QH 682. QH 682 points to QH 643. QH 643 points to QH 683. A periodic frame list base address register (list base) includes a beginning address of a periodic frame list in a system memory. Contents of the periodic frame list base address register are combined with a frame index register to allow a host controller to walk through a periodic frame list in sequence. The linking illustrated may be performed by the schedule trap unit 411 shown in Figure 4. In this embodiment, the schedules are linked by modifying pointers associated with the TDs stored in the VM.
[0055] Figure 6c illustrates an example of how isochronous schedules may be copied and merged according to an embodiment of the present invention. According to an embodiment of the present invention, copying and merging of isochronous schedules also involves forming a linked list of the isochronous TDs in each frame, end to end, and merging the QHs for the same period. However, instead of linking schedules in place in the VMs, a VMM makes a copy of the schedules and merges the copy. As shown, TD 621 points to TD 622, TD 622 points to TD 661, and TD 661 points to TD 662. TD 631 points to TD 632, TD 632 points to TD 671, and TD 671 points to TD 672. QH 641 points to QH 681. QH 681 points to QH 642. QH 642 points to QH 682. QH 682 points to QH 643. QH 643 points to QH 683. A periodic frame list base address register (list base) includes a beginning address of a periodic frame list in a system memory. Contents of the periodic frame list base address register are combined with a frame index register to allow a host controller to walk through a periodic frame list in sequence. The copying and merging illustrated may be performed by the schedule trap unit 411 shown in Figure 4.
[0056] Figure 7 is a flow chart illustrating a method for generating an asynchronous schedule for a host controller according to an embodiment of the present invention. The procedure illustrated may be performed by a USB module such as the one shown in Figures 3 and 4. At 701, asynchronous schedules (AS) for virtual host controllers from a plurality of VMs are sorted and placed in an order. The asynchronous schedule from a first (initial) VM to be processed is designated as the first (initial) asynchronous schedule.
[0057] At 702, a last QH in an asynchronous schedule from a previous VM is pointed to the first QH in an asynchronous schedule from a next VM.
[0058] At 703, it is determined whether an asynchronous schedule from an additional VM exists to be processed.
If it is determined that an asynchronous schedule from an additional VM exists to be processed, control returns to 702, the next VM is designated as the previous VM, and the additional VM is designated as the next VM. If it is determined that an asynchronous schedule from an additional VM does not exist to be processed, control proceeds to 704.
[0059] At 704, the last QH in the asynchronous schedule from the final or last VM is pointed to a dummy QH in the asynchronous schedule from the initial VM.
[0060] Figure 8 is a flow chart illustrating a method for generating an isochronous schedule for a host controller according to an embodiment of the present invention. The procedure illustrated may be performed by a USB module such as the one shown in Figures 3 and 4. At 801, the TDs in each frame of a frame list are connected to TDs in corresponding frames of other frame lists. According to an embodiment of the present invention, the last TD in a chain of TDs in a frame from a frame list of a first VM points to a first TD in a chain of TDs in a corresponding frame from a frame list of a second VM.
[0061] At 802, QHs for the same period or polling rate from the frame lists of the VMs are merged.
[0062] Figures 7 and 8 are flow charts illustrating methods according to embodiments of the present invention. Some of the techniques illustrated in these figures may be performed sequentially, in parallel or in an order other than that which is described. It should be appreciated that not all of the techniques described are required to be performed, that additional techniques may be added, and that some of the illustrated techniques may be substituted with other techniques.
[0063] Embodiments of the present invention may be provided as a computer program product, or software, that may include an article of manufacture on a machine accessible or machine readable medium having instructions. The instructions on the machine accessible or machine readable medium may be used to program a computer system or other electronic device. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks or other types of media/machine-readable media suitable for storing or transmitting electronic instructions. The techniques described herein are not limited to any particular software configuration. They may find applicability in any computing or processing environment. The terms "machine accessible medium" or "machine readable medium" used herein shall include any medium that is capable of storing, encoding, or transmitting a sequence of instructions for execution by the machine and that cause the machine to perform any one of the methods described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, process, application, module, unit, logic, and so on) as taking an action or causing a result. Such expressions are merely a shorthand way of stating that the execution of the software by a processing system causes the processor to perform an action to produce a result.
[0064] In the foregoing specification, embodiments of the invention have been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the embodiments of the invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense.
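As a closing illustration of the isochronous method of Figure 8 described above, the following toy C sketch splices the TD chain in each frame of one VM's periodic frame list onto the corresponding frame of a merged list (step 801); merging QHs by period (step 802) is omitted. All names are hypothetical, and real EHCI isochronous TDs carry link pointers and transaction state that are reduced here to a single next field.
/* Toy model of the frame-by-frame isochronous merge of Figure 8, step 801. */
#include <stddef.h>
struct itd {
    struct itd *next;  /* link to the next isochronous TD in this frame */
};
#define FRAME_LIST_SIZE 1024   /* a typical EHCI periodic frame list size */
/* Append vm_frames[i]'s TD chain to the end of merged[i]'s chain, frame
 * by frame, so the merged list executes both VMs' TDs in each frame. */
void merge_iso_frame_lists(struct itd *merged[FRAME_LIST_SIZE],
                           struct itd *vm_frames[FRAME_LIST_SIZE])
{
    for (int i = 0; i < FRAME_LIST_SIZE; i++) {
        if (vm_frames[i] == NULL)
            continue;                  /* nothing scheduled this frame */
        if (merged[i] == NULL) {
            merged[i] = vm_frames[i];  /* first VM to use this frame */
            continue;
        }
        struct itd *tail = merged[i];
        while (tail->next != NULL)     /* walk to the end of the chain */
            tail = tail->next;
        tail->next = vm_frames[i];     /* splice the VM's chain on */
    }
}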
A system and method for controlling the direction of data flow in a memory system is provided. The system comprises memory devices, a memory controller, a buffering structure, and a data flow director. The memory controller sends data, such as read-data, write-data, address information and command information, to the memory devices and receives data from the memory devices. The buffering structure interconnects the memory devices and the memory controller. The buffering structure is adapted to operate in a bi-directional manner for the direction of data flow therethrough. The data flow director, which may reside in the buffering structure, the memory controller, the memory devices, or an external device, controls the direction of data flow through the buffering structure based on the data transmitted from the memory controller or the memory devices.
1. A memory system, comprising: at least one memory device having a plurality of storage cells to store data; a memory controller that sends first data to the at least one memory device and receives second data from the at least one memory device; a buffer interconnecting the at least one memory device and the memory controller, the buffer operating in a bi-directional manner for direction of data flow therethrough; and a data flow director to control the direction of data flow through the buffer based on the first or second data transmitted from the memory controller or the at least one memory device, wherein the first data is selected from a group of address information, data to be written to the at least one memory device, status information, and command information.
2. The memory system of claim 1, wherein the buffer has the functionality to either receive the first data from the memory controller and transmit said first data to the at least one memory device, or receive the second data from the at least one memory device and transmit said second data to the memory controller.
3. The memory system of claim 1, wherein the data flow director resides in the buffer, the data flow director further comprising an embedded decoder to decode the first data received from the memory controller and embedded logic to determine, based on the decoded first data, whether and when to change the direction of the data flow through the buffer.
4. The memory system of claim 1, wherein the data flow director resides in an external data flow controlling device, the data flow director including an embedded decoder to decode the first data received from the memory controller and embedded logic to determine, based on the decoded first data, whether and when to change the direction of the data flow through the buffer.
5. The memory system of claim 1, wherein the data flow director decodes the first data to be sent from the memory controller to the buffer and controls, based on the decoded first data, the direction of the data flow through the buffer utilizing an output enable control signal.
6. The memory system of claim 1, wherein the at least one memory device and the buffer are housed within a memory module.
7. The memory system of claim 1, wherein the buffer resides on a motherboard of a computer system and the at least one memory device is housed within a memory module.
8. A memory system, comprising: at least one memory device to store data; a memory controller that sends first data to the at least one memory device and receives second data from the at least one memory device; a buffer interconnecting the at least one memory device and the memory controller, the buffer operating in a bi-directional manner for direction of data flow therethrough, wherein the buffer includes at least one data buffer that buffers at least one of the first and second data and an address and command buffer that buffers address information and command information; and a data flow director to control the direction of data flow through the buffer based on the first or second data transmitted from the memory controller or the at least one memory device.
9.
A memory system, comprising: at least one memory device to store data; a memory controller that sends first data to the at least one memory device and receives second data from the at least one memory device; a buffer interconnecting the at least one memory device and the memory controller, the buffer operating in a bi-directional manner for direction of data flow therethrough; and a data flow director to control the direction of data flow through the buffer based on the first or second data transmitted from the memory controller or the at least one memory device, wherein the data flow director resides in the buffer, the data flow director further comprising an embedded decoder to decode the first data received from the memory controller and embedded logic to determine, based on the decoded first data, whether and when to change the direction of the data flow through the buffer, and wherein a default direction for the direction of data flow through the buffer is set in the direction from the memory controller to the at least one memory device, and when the decoder detects a request for a read, the embedded logic enters a mode where a prescribed amount of time is elapsed while the direction of the data flow through the buffer is changed to allow read data to be returned to the memory controller.
10. The memory system of claim 9, wherein the embedded logic is a finite state machine.
11. The memory system of claim 9, wherein the embedded logic includes a read only memory.
12. A memory system, comprising: at least one memory device to store data; a memory controller that sends first data to the at least one memory device and receives second data from the at least one memory device; a buffer interconnecting the at least one memory device and the memory controller, the buffer operating in a bi-directional manner for direction of data flow therethrough; and a data flow director to control the direction of data flow through the buffer based on the first or second data transmitted from the memory controller or the at least one memory device, wherein the data flow director resides in the buffer, the data flow director further comprising an embedded decoder to decode the first data received from the memory controller and embedded logic to determine, based on the decoded first data, whether and when to change the direction of the data flow through the buffer, and wherein a default direction for the direction of data flow through the buffer is set in the direction from the at least one memory device to the memory controller, and when the decoder detects a request for a write, the embedded logic enters a mode where a prescribed amount of time is elapsed while the direction of the data flow through the buffer is changed to allow the data to be sent to the at least one memory device for storage.
13. A memory system, comprising: at least one memory device having a plurality of storage cells to store data; a memory controller that sends first data to the at least one memory device and receives second data from the at least one memory device; a buffer interconnecting the at least one memory device and the memory controller, the buffer operating in a bi-directional manner for direction of data flow therethrough; and a data flow director to control the direction of data flow through the buffer based on the first or second data transmitted from the memory controller or the at least one memory device, wherein the data flow director resides in the memory controller.
14.
A memory system, comprising: at least one memory device to store data; a memory controller that sends first data to the at least one memory device and receives second data from the at least one memory device; a buffer interconnecting the at least one memory device and the memory controller, the buffer operating in a bi-directional manner for direction of data flow therethrough; and a data flow director to control the direction of data flow through the buffer based on the first or second data transmitted from the memory controller or the at least one memory device, wherein the data flow director resides in the memory device.
15. The memory system of claim 14, wherein the data flow director decodes the second data received from the buffer, and controls, based on the decoded second data, the direction of the data flow through the buffer utilizing an output enable control signal.
16. A buffering device interconnecting a memory controller and a memory device, comprising: at least one buffer to receive data from the memory controller or the memory device, wherein the at least one buffer includes an address and command buffer and a data buffer, the direction of data flow through the data buffer being controlled by the address and command buffer, wherein the address and command buffer includes an embedded decoder to decode data received from the memory controller or the memory device and embedded logic to determine, based on the decoded data, whether and when to change a direction of data flow through the data buffer, and wherein the at least one buffer is adapted to operate in a bi-directional manner for the direction of data flow therethrough, receiving data from the memory controller and transmitting said data to the memory device, as well as receiving data from the memory device and transmitting said data to the memory controller.
17. The buffering device of claim 16, wherein the data to be received and transmitted is selected from the group of data to be written to the memory device, data to be read from the memory device, address information, and command information.
18. A buffering device interconnecting a memory controller and a memory device, comprising: at least one buffer to receive data from the memory controller or the memory device; an embedded decoder to decode data received from the memory controller or the memory device; and embedded logic to determine, based on the decoded data, whether and when to change a direction of data flow through the buffer, wherein the buffer is adapted to operate in a bi-directional manner for the direction of data flow therethrough, receiving data from the memory controller and transmitting said data to the memory device, as well as receiving data from the memory device and transmitting said data to the memory controller, wherein a default direction for the direction of data flow through the buffer is set from the memory controller to the memory device, and when the decoder detects a request for a read, the embedded logic enters a mode where a prescribed amount of time is elapsed while the direction of the data flow through the buffer is changed to allow read data to be returned to the memory controller.
19.
A buffering device interconnecting a memory controller and a memory device, comprising: at least one buffer to receive data from the memory controller or the memory device; an embedded decoder to decode data received from the memory controller or the memory device; and embedded logic to determine, based on the decoded data, whether and when to change a direction of data flow through the buffer, wherein the buffer is adapted to operate in a bi-directional manner for the direction of data flow therethrough, receiving data from the memory controller and transmitting said data to the memory device, as well as receiving data from the memory device and transmitting said data to the memory controller, wherein a default direction for the direction of data flow through the buffer is set from the memory device to the memory controller, and when the decoder detects a request for a write, the embedded logic enters a mode where a prescribed amount of time is elapsed while the direction of the data flow through the buffer is changed to allow the data to be sent to the memory device for storage.
20. A method of operating a memory system including a memory controller, a memory device, and a buffer, the method comprising: transmitting data from the memory controller to the memory device via the buffer, or from the memory device to the memory controller via the buffer, the buffer being adapted to operate in a bi-directional manner for a direction of data flow therethrough; decoding the data; determining the direction of data flow through the buffer based on the decoded data; and changing the direction of data flow through the buffer to the determined direction when the determined direction differs from a default direction.
21. The method of claim 20, further comprising waiting for a delay period before changing the direction of data flow through the buffer to the determined direction when the determined direction differs from a default direction.
22. The method of claim 20, further comprising: determining a period for changing the direction of data flow through the buffer when the determined direction differs from the default direction; changing the direction of data flow for said period; and returning the direction of data flow back to the default direction after the period.
23. The method of claim 20, wherein the default direction for the direction of data flow through the buffer is set from the memory controller to the memory device, and when a request for a read is decoded, a prescribed amount of time is elapsed while the direction of the data flow through the buffer is changed to allow read data to be returned to the memory controller.
24. The method of claim 20, wherein a default direction for the direction of data flow through the buffer is set from the memory device to the memory controller, and when a request for a write is detected, a prescribed amount of time is elapsed while the direction of the data flow through the buffer is changed to allow the data to be sent to the memory device for storage.
25. The method of claim 20, wherein the data transmitted is selected from the group of data to be written to the memory device, data to be read from the memory device, address information, and command information.
BACKGROUND OF THE INVENTION1. Field of the InventionThe present invention generally relates to a memory system, and in particular, to a system and method for controlling the direction of data flow in a memory system having a buffering structure interconnecting a memory controller and memory devices. The memory devices may, for example, be dynamic random access memory (DRAM) devices.2. Related ArtA typical memory system includes a memory controller and memory devices, such as DRAMs, coupled thereto. In some systems, a processor performs memory controller functions. As used herein, the term memory controller includes such a processor. FIG. 1 illustrates a prior art memory system having memory devices 80-95 residing on memory modules. The memory modules are connected to a memory controller 50 via connectors 60, 70. In such a system, each component is required to operate with the same interface voltage and frequency. Therefore, the memory controller 50 is manufactured to operate with specific memory devices 80-95 meeting these parameters. Conversely, the memory devices 80-95 can be utilized only with a memory controller 50 having the same interface voltage and operating frequency.FIG. 2 shows the bi-directional nature of communication exchanges in a memory interface. The memory interface may be a data bus 52, which may represent address bus lines, command signal lines, and/or data bus lines. The data bus 52 of the memory interface is bi-directional since a memory system has to be able to write data and read data on the same pins connecting the memory controller 50 and the memory devices 80-95. The memory controller 50 includes a bi-directional internal input/output buffer. The memory devices 80-95 also have bi-directional internal input/output buffers. When the memory controller 50 is doing a read or a write, it has complete knowledge of the read and write. That is, the memory controller 50 knows when to turn off its driver and listen to its internal input buffer. Similarly, the memory devices 80-95 also have complete knowledge of when to turn off their drivers and listen to their internal buffers based on the commands that they receive and the function(s) to be performed. Because both the memory controller 50 and the memory devices 80-95 are able to control the direction of their own internal buffers automatically, the direction of data flow between the memory controller 50 and the memory devices 80-95 can be readily controlled. However, the memory devices 80-95 are limited to only those having the same interface voltage and operating frequency as that of the memory controller 50. The cost of requiring specifically designed memory devices 80-95 to match the memory controller 50, and vice versa, creates enormous development expenses and limits the interchangeability of various existing memory components.In other memory systems, solutions have evolved to provide connection to memory devices on a selective basis. For example, in a Double Data Rate Synchronous DRAM (DDR) system, field effect transistor (FET) switches located on the module isolate the DRAM from the main memory bus. This isolates the capacitive load. The FET switches are turned on to connect the DRAM to the memory bus only when the DRAM is being read or written. 
When the FET switches are to be turned on, the DRAM sends out a control signal. This method focuses on using a special kind of DRAM and does not deal with the flow direction of the data. The FETs as a whole merely act as a switch that connects the DRAM to the memory bus. Therefore, there is a need for a system and method to control the direction of data flow in a memory system that would not require each component to operate with the same interface voltage and frequency.BRIEF DESCRIPTION OF THE FIGURESFIG. 1 illustrates a prior art memory system;FIG. 2 shows the bi-directional nature of communication exchanges in a prior art memory interface;FIG. 3 illustrates a diagram of a memory system according to an embodiment of the present invention and in which embodiments of the present invention may function;FIG. 4 shows an illustrative example of a memory system according to an embodiment of the present invention;FIG. 5a depicts a buffering structure according to an embodiment of the present invention;FIG. 5b shows an illustrative example of mechanisms to change the direction of data flow in a buffer;FIG. 6 shows an illustrative example of a state diagram according to an embodiment of the present invention;FIG. 7 shows a memory system utilizing an external data flow controlling device according to an embodiment of the present invention;FIG. 8 shows a memory system with a data flow director residing in a memory controller according to an embodiment of the present invention;FIG. 9 shows a memory system with a data flow director residing in a memory device according to an embodiment of the present invention; andFIG. 10 illustrates processes for operating a memory system according to an embodiment of the present invention.DETAILED DESCRIPTIONFIG. 3 illustrates a diagram of a memory system according to an embodiment of the present invention and in which embodiments of the present invention may function. The memory system 100 comprises a memory controller 110, a buffer 120, and memory devices 130-145. The buffer 120 is an external buffer(s) or register(s) that has the functionality of reducing the impedance seen by the memory controller 110. The memory controller 110 is coupled to the buffer 120, which is further coupled to the memory devices 130-145, such as DRAM devices. Although the input/output connection lines 114 are represented as single lines to the buffer 120, and to the memory devices 130-145, each represented line 114 may in fact be a plurality of lines. The memory controller 110 may, for example, be a chipset central processing unit, and it is adapted to transmit different information (e.g., data, address information, command information) to the memory devices 130-145 via the buffer 120. The memory controller 110 is further adapted to receive data from the memory devices 130-145 via the buffer 120. The buffer 120 may comprise a single component or a number of specialized buffers or registers. The specialized buffers or registers may, for example, be data buffers 123, 124 for buffering data, and an address and command buffer 122 (ADDR/CMD Buffer) for buffering address information and command information transmitted from the memory controller 110 and/or status information transmitted from the memory devices 130-145. By placing the buffer 120 in between the memory controller 110 and memory devices 130-145, transfer of data and information between the memory controller 110 and memory devices 130-145 is facilitated. 
The electrical characteristics of the memory system 100 are also improved, and more aggressive scaling is made possible.FIG. 4 shows an illustrative example of a memory system according to an embodiment of the invention. In this example, the memory controller 110 resides on a motherboard 200. The memory devices 130-145, 170-185 reside on memory modules 150, 155. The memory modules 150, 155 are connected to the motherboard 200 through connectors 160, 165. The memory devices 130-145 reside on the first memory module 150, while the memory devices 170-185 reside on the second memory module 155. In other embodiments, the configuration of the memory devices 130-145, 170-185 on the memory modules 150, 155 may be different, and the memory controller 110 may control more or fewer memory devices than those shown in FIG. 4.In this embodiment, the buffers 120 and 125 reside on the memory modules 150 and 155, respectively, creating buffered modules in which the direction of data flow between the controller 110 and the memory devices 130-145, 170-185 is controlled. However, the buffers 120, 125, and the individual elements of the buffers 120, 125, such as data buffers 123, 124 and ADDR/CMD buffer 122, are not limited to the placement shown in FIG. 4. That is, they are not limited to placement on a memory module. The buffering of data and command/address can also be performed on the motherboard 200 or on external (discrete) buffers. In one embodiment, external (discrete) buffers are utilized to allow different voltages and frequencies to be used for the memory controller 110 and memory devices 130-145, 170-185.In carrying out instructions from a central processing unit (CPU) of a computer, information and data are constantly sent from the memory controller 110 to the memory devices, and vice versa. In one instance, the memory controller 110 may wish to write data to the memory devices, wherein the memory controller 110 sends address and command information for a write and the data to be written to the buffer 120. The buffer 120 receives the information and the data from the memory controller 110 and sends them to specific locations within the memory devices. In another instance, the memory controller 110 may wish to read data from the memory devices, wherein the memory controller 110 sends address and command information for a read to the buffer 120 and receives read-data from the memory device via the buffer 120. The buffer 120 receives the read-data from specific locations within the memory devices and sends the read-data to the memory controller 110. Thus, data must be able to flow through the buffer 120 in a bi-directional manner, from the memory controller 110 to the buffer 120 to the memory devices, and from the memory devices to the buffer 120 to the memory controller 110. In a first mode, the buffer 120 receives data from the memory controller 110 and sends data to the memory devices. In a second mode, the buffer 120 receives data from the memory devices and sends data to the memory controller 110.Various methods may be utilized to control the direction of data flow between the memory controller 110 and the memory devices 130-145, switching the direction of data flow from the first mode to the second mode or from the second mode to the first mode. A first method involves the buffer 120 determining the direction of data flow. A second method involves an external device indicating the direction of data flow through the buffer 120. 
A third method involves the memory controller 110 setting the direction of data flow through the buffer 120. A fourth method involves the memory devices 130-145, 170-185 setting the direction of data flow through the buffer 120.FIG. 5a depicts a buffering structure according to a preferred embodiment of the present invention. The buffering structure 120' includes buffers 120, 125 as shown in FIG. 4. In this embodiment, the buffering structure 120' determines the direction of data flow utilizing a decoder 190 and logic 195, which reside in the buffering structure 120'. The buffering structure 120' also includes data buffers 123', 124' and an ADDR/CMD buffer 122', wherein the decoder 190 and the logic 195 are embedded in the ADDR/CMD buffer 122'. Data such as data to be written to a memory device (written-data 112), address information, and/or command information 116 are sent from a memory controller (not shown). The written-data passes through the data buffers 123', 124'. The number of data buffers contained in the buffering structure 120' depends upon the specific application of the present invention; there could be only one data buffer or there could be multiple data buffers. The command information 116 and address information pass through the ADDR/CMD buffer 122' along with a chip select signal 118 that selects a particular memory device (not shown) to which the command and address information is directed. When the ADDR/CMD buffer 122' receives command information 116 and the chip select signal 118 from the memory controller, it determines whether the command information 116 is directed to memory devices served by the buffering structure 120' or directed to memory devices served by another buffering structure in the memory system. Such a determination may, for example, be made by analyzing the chip select signal 118 to see whether it selects a memory device or a memory module that is served by the buffering structure 120'. If the chip select signal 118 selects a memory device or a memory module served by the buffering structure 120', the chip select signal 118 is reset and the embedded decoder 190 in the ADDR/CMD buffer 122' decodes the command information 116, which may, for example, be a read command or a write command.According to a preferred embodiment of the present invention, the buffering structure 120' may, for example, default to driving toward the memory devices, i.e., from left to right in FIG. 5a. Thus, when a request to write data to the memory devices is received at the buffering structure 120', there is no need to switch the direction of data flow through the buffering structure 120'. However, when there is a request to read data from the memory devices, the direction of data flow through the buffering structure 120' needs to be switched. This way, data to be read, or read-data, can flow from the memory devices to the memory controller, i.e., from right to left in FIG. 5a. In this embodiment, the buffering structure 120' defaults to driving toward the memory devices. When the ADDR/CMD buffer 122' receives a read command and decodes it to determine that it is a read command, the embedded logic 195 in the ADDR/CMD buffer 122' sets a delay period and then drives the direction of data flow through the buffering structure 120', or just the data buffers 123', 124', in the opposite direction. 
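For readers who prefer a software model, the decode-and-select behavior just described can be illustrated with a short sketch. The following C fragment is a minimal sketch under stated assumptions: the command encodings, the structure fields, and the function name are illustrative inventions, not taken from the embodiment itself.

#include <stdbool.h>
#include <stdint.h>

#define CMD_READ  0x1  /* hypothetical encoding of a read command  */
#define CMD_WRITE 0x2  /* hypothetical encoding of a write command */

struct buffering_structure {
    uint8_t served_select;     /* chip select value this structure serves  */
    bool    drive_toward_dram; /* default direction: controller -> devices */
};

/* Returns true when the command selects a device served by this buffering
 * structure and decodes to a read, i.e. the embedded logic must set a delay
 * and then reverse the direction of data flow through the data buffers. */
static bool must_reverse(const struct buffering_structure *b,
                         uint8_t chip_select, uint8_t command)
{
    if (chip_select != b->served_select)
        return false;  /* directed at devices served by another structure */
    return b->drive_toward_dram && command == CMD_READ;
}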
The delay period may be needed because the read-data cannot be driven back from the memory devices until a pre-ordained time that has been stipulated by the memory system. The delay period affords time for signals, such as a chip select signal, address information, and command information, to propagate from the buffering structure 120' to the memory devices. The memory devices also need time to retrieve the read-data once the memory devices receive and decode the address information and the command information 116. When the memory devices see the read command from the buffering structure 120', the memory devices find the read-data and then drive the read-data back in the direction of the buffering structure 120'. At that point, the delay period comes to an end.The delay period represents the amount of time until the data is ready to be driven back from the memory devices, which in a memory system with DRAM devices is often referred to as the read latency or as CAS (column address strobe) latency. The delay period may be implemented in various ways, and is implemented by taking into consideration the capability of the memory devices responsible for the delay. The delay may be programmed as the number of clocks required between a read request and the availability of the read data. The delay value may be hardwired in the embedded logic 195. In one implementation, the buffering structure 120' counts the number of clocks it takes for the memory devices to start returning the data that the memory controller requests to the buffering structure 120'. The number of clocks may be obtained from a BIOS (Basic Input/Output System) or similar devices when the memory system boots. During start up, the BIOS obtains information about the memory devices, from the devices themselves or from an EPROM (Erasable Programmable Read Only Memory) residing on the memory module containing the memory devices. Based on the obtained information, the BIOS determines the delay/latency associated with the memory devices. The BIOS then conveys this information to the memory controller and/or the embedded logic 195 in the buffering structure 120'.In one implementation, when the delay period ends, and on the same clock that the memory devices start to drive the data to be read back, the embedded logic 195 causes a control signal 126 to be sent from the ADDR/CMD buffer 122' to the data buffers 123', 124'. This signal controls the data buffers 123', 124' to change their drive direction back toward the memory controller. FIG. 5b shows an illustrative example of a mechanism for the buffering structure 120' to change the direction of data flow therethrough. In this illustrative example, the drive direction is controlled using two drivers configured in the fashion illustrated in FIG. 5b. The drivers may reside in the data buffers 123', 124'. When the drive direction needs to be changed, the ADDR/CMD buffer 122' sends a control signal "high" to the two drivers in the data buffers 123', 124' through a link 126. A control signal "high" turns off the top driver and turns on the bottom driver, allowing the data to be driven toward the memory controller. On the other hand, when a "low" signal is sent to the drivers, the top driver is turned on and the bottom driver is turned off, keeping the drive direction in the default direction.In another implementation, the ADDR/CMD buffer 122' may also reverse the direction of data flow through itself. 
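The FIG. 5b driver pair lends itself to a very small software model. The sketch below assumes, purely for illustration, a boolean control line where "high" means reverse toward the memory controller; the type and field names are hypothetical.

#include <stdbool.h>

struct driver_pair {
    bool top_on;    /* drives toward the memory devices (default path)   */
    bool bottom_on; /* drives toward the memory controller (read return) */
};

/* Mirror of the control signal on link 126: "high" turns the top driver
 * off and the bottom driver on; "low" restores the default direction. */
static void apply_control(struct driver_pair *d, bool control_high)
{
    d->top_on    = !control_high;
    d->bottom_on =  control_high;
}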
Once the direction of data flow through the buffering structure 120' or the data buffers 123', 124' is driven in the opposite direction, the read-data, or any type of data that the memory controller may request from the memory devices, can flow readily from the memory devices to the memory controller. When the read-data is being driven in the opposite direction, the embedded logic 195 controls the data buffers 123', 124' to continue driving the read-data in the direction of the memory controller for the required amount of time it takes to return all the read-data for that read command.The amount of time for returning all the read-data (hereinafter referred to as the "returning data time") may be determined by various methods. In one implementation, the returning data time is calculated based on a read burst. This has traditionally been a programmable feature of a memory device such as a DRAM. During start up, the BIOS indicates to the DRAM how long a read burst is to last, i.e., how much data is to be transferred in a burst. The recent trend is toward a fixed burst length (e.g., eight bytes) because it is easier to implement for a DRAM controller at high operating speed. However, variable and/or programmable burst lengths are also applicable.Although the above embodiments describe the buffering structure 120' as having a default data flow direction towards the memory devices, another embodiment may have the data buffers 123', 124' default to driving towards the memory controller. Such a default direction facilitates a read because the direction of data flow through the data buffers need not be switched when reading data from the memory devices to the memory controller. In such an embodiment, instead of decoding for a read command, the embedded decoder 190 in the ADDR/CMD buffer 122' decodes for a write command. Whenever the embedded decoder 190 sees a write command, the embedded logic 195 implements a delay period for the write command to propagate to the memory devices. After the delay period has passed, the embedded logic 195 sends out a signal causing the data buffers 123', 124' to drive in the opposite direction. The data buffers 123', 124' continue to drive in the direction of the memory devices for the required amount of time it takes to write all the data to be written, or written-data, for that write command.In another embodiment, the decoder 190 and the embedded logic 195 may be located in the data buffers 123', 124' or elsewhere in the buffering structure 120'. In such an embodiment, the command information 116 is routed to the data buffers 123', 124', so that it can be decoded by the decoder 190.FIG. 6 shows an illustrative example of a state diagram according to an embodiment of the present invention. The different states defined by the embedded logic 195 may be represented by a state-machine. The state-machine starts at an "idle" state 10. In this "idle" state 10, the state-machine illustratively defaults to driving in the direction of the memory devices. The direction of data flow through a buffer stays in the direction of the "idle" state 10 until the decoder 190 decodes a command that requires the direction of data flow to be changed. Assuming that the decoder 190 decodes a read command with a chip select signal and communicates the decoded command to the embedded logic 195, the embedded logic 195 causes the state-machine to change from the "idle" state 10 to a "wait_0" state 20. 
Depending on how long the delay period is for the data to be read, or read-data, to be ready for driving in the opposite direction, the state-machine goes through the different wait states until a "wait_N" state 25 is reached. The number "N" may, for example, represent the number of clocks for the full delay period, and each wait state may represent one clock. At the end of the "wait_N" period, the state-machine reverses the direction of data flow through the buffer, toward the memory controller. After the "wait_N" state 25, and after the direction of data flow is changed, the state-machine changes to a "read_0" state 30. The state-machine again counts for a certain read period until the "read_N" state 35, allowing all the read-data to be transmitted from the buffer to the memory controller. The number of read periods "N" may, for example, represent the number of clocks needed for the memory controller to receive all the read-data. A minimal rendering of this state-machine in software appears following this passage.The implementation of the state-machine may vary from the basic version shown in FIG. 6, for example to handle multiple memory banks with outstanding reads. It may also be implemented in the opposite fashion, i.e., to default to returning read-data and switch to driving data to the memory devices only when a write command is detected by the decoder 190. In this implementation, when the state-machine sees a write command, it waits either a programmed amount or a fixed amount of time before it starts driving to the memory devices. The programmed amount of time is preferably based on the protocol and capabilities of the memory system. After waiting for some time, the state-machine goes into write states, where it writes data to the memory devices. The length of the time that the state-machine stays in the write states depends on the length of a write burst. Like a read burst, the write burst could be fixed, programmable, and/or variable.In another embodiment, instead of having a decoder and logic located in the buffer 120, the decoder and logic are located in an external device. FIG. 7 shows an external device 300 for controlling the direction of data flow through a buffering structure 120'' according to an embodiment of the invention. The external device 300 contains a decoder 190' and logic 195'. The external device 300 is coupled to the memory controller 110 and the buffering structure 120''. The memory controller 110 transmits command information to the external device 300, where the command information is decoded by the decoder 190'. The decoded command is fed through the logic 195'. Depending on the default direction of the data flow through the buffering structure 120'', or just the data buffers therein, and the kind of command, the logic 195' sends a control signal 126' to the buffering structure 120'' to change or maintain the direction of data flow. Similar to the structure shown in FIG. 5b, one implementation of the control signal is an output enable control. The control signal 126' is sent to an output enable pin 121 on the buffering structure 120''. Depending on whether the output enable pin is high or low, the direction of data flow is controlled.
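The FIG. 6 state-machine discussed above can be rendered compactly in C. The sketch below is one possible reading under stated assumptions: the wait count stands in for the read latency conveyed by the BIOS, the read count for the returning data time derived from the burst length, and all names are illustrative.

#include <stdbool.h>
#include <stdint.h>

enum fsm_state { ST_IDLE, ST_WAIT, ST_READ };

struct buffer_fsm {
    enum fsm_state state;
    uint32_t count;        /* clocks remaining in the WAIT or READ states */
    uint32_t read_latency; /* "wait_0".."wait_N": CAS/read latency clocks */
    uint32_t read_clocks;  /* "read_0".."read_N": returning data time     */
    bool     toward_controller;
};

/* Advance one clock; read_decoded is true on the clock a read command with
 * a matching chip select is decoded. Assumes both counts are at least 1. */
static void fsm_clock(struct buffer_fsm *f, bool read_decoded)
{
    switch (f->state) {
    case ST_IDLE:
        f->toward_controller = false;      /* default toward the devices  */
        if (read_decoded) {
            f->state = ST_WAIT;
            f->count = f->read_latency;
        }
        break;
    case ST_WAIT:
        if (--f->count == 0) {             /* delay elapsed: reverse flow */
            f->toward_controller = true;
            f->state = ST_READ;
            f->count = f->read_clocks;
        }
        break;
    case ST_READ:
        if (--f->count == 0)               /* all read-data returned      */
            f->state = ST_IDLE;
        break;
    }
}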
Depending on the implementation, a signal originates from either the memory controller 110 or the memory devices 130-145 and is sent to the buffer 120'', indicating the data flow direction for the buffer 120''. FIG. 8 shows a memory system with a data flow director residing in the memory controller 110. In this embodiment, a decoding device (not shown) within the memory controller 110 decodes a command from the memory controller 110. Based on the decoded command, the memory controller 110 then controls the direction of data flow through the buffer 120'' by sending out a signal directly to the output enable 121 on the buffer 120''. For example, a high signal on the output enable may represent that the direction of data flow is to be changed, while a low signal on the output enable may represent that no change is needed.FIG. 9 shows a memory system with a data flow director residing in the memory devices 130-145. In this embodiment, a decoding device (not shown) within the memory devices 130-145 decodes a command from the memory controller 110. Based on the decoded command, the memory devices 130-145 control the direction of data flow through data buffers in the buffer 120'' by sending out a signal directly to the output enable 121 on the buffer 120''. For example, a high signal on the output enable may represent that the direction of data flow is to be changed, while a low signal on the output enable may represent that no change is needed.FIG. 10 illustrates processes for operating a memory system according to an embodiment of the present invention. The memory system includes a memory controller, a buffer, and memory devices. The buffer is adapted to operate in a bi-directional manner for a direction of data flow therethrough. In block P600, data is transmitted from the memory controller to the memory devices via the buffer. The data may, for example, be written-data, address information, and/or command information. In other embodiments, data, such as status information and/or read-data, may be transmitted from the memory devices to the memory controller via the buffer. In block P610, the data is decoded. In block P620, the direction of data flow through the buffer is determined based on the decoded data. When the determined direction differs from a default direction, the direction of data flow through the buffer is changed. In block P630, assuming that the direction of data flow through the buffer is to be changed, the system waits for a delay period before the direction of data flow is changed. In block P640, a period for changing the direction of data flow through the buffer is determined. In block P650, the direction of data flow is changed for the determined period. In block P660, the direction of data flow is returned to the default direction after the determined period expires. A compact code sketch of these blocks appears at the end of this description.Embodiments of the invention and method as set forth above provide the ability to inexpensively, reliably and efficiently control the flow of data in a memory interface such as a buffering structure. The embodiments related to a buffer controlling the direction of data flow also provide the additional advantage of saving pins between a memory controller and the buffer or between a memory device and the buffer. 
While embodiments related to an external device controlling the direction of data flow require additional pins to directly operate the output enables on the buffer, they facilitate the control of data flow through the buffer.While the description above refers to particular embodiments of the present invention, it will be understood that many modifications may be made without departing from the spirit thereof. For example, instead of using a finite state-machine to implement the embedded logic, the embedded logic could be stored in a Read Only Memory (ROM), or the embedded logic could simply be timers that count clocks. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention. The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims, rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
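To make the FIG. 10 flow concrete, here is a compact C sketch of blocks P620 through P660 (blocks P600 and P610 happen upstream, when the data is transmitted and decoded). The helper routines are stubs invented for illustration; a real implementation would tie them to the buffer hardware and to the latency and burst parameters discussed earlier.

#include <stdint.h>

enum dir { TO_MEMORY, TO_CONTROLLER };
#define DEFAULT_DIRECTION TO_MEMORY  /* assumed default, per the text */

/* Stub helpers standing in for hardware behavior; values are examples. */
static enum dir direction_for(int cmd) { return cmd == 1 ? TO_CONTROLLER : TO_MEMORY; }
static uint32_t delay_clocks(int cmd)  { (void)cmd; return 3; } /* e.g. read latency */
static uint32_t hold_clocks(int cmd)   { (void)cmd; return 4; } /* e.g. burst return */
static void     set_direction(enum dir d) { (void)d; /* drive output enables */ }
static void     wait_clocks(uint32_t n)   { (void)n; /* count n clocks       */ }

/* Steer the data flow for an already-decoded command. */
static void steer_data_flow(int cmd)
{
    enum dir wanted = direction_for(cmd);   /* P620 */
    if (wanted != DEFAULT_DIRECTION) {
        wait_clocks(delay_clocks(cmd));     /* P630 */
        uint32_t hold = hold_clocks(cmd);   /* P640 */
        set_direction(wanted);              /* P650 */
        wait_clocks(hold);
        set_direction(DEFAULT_DIRECTION);   /* P660 */
    }
}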
Method and apparatus for performing a shift and XOR operation. In one embodiment, an apparatus includes execution resources to execute a first instruction. In response to the first instruction, said execution resources perform a shift and XOR on at least one value.
CLAIMS What is claimed is: 1. A processor comprising: logic to perform a shift and XOR instruction, wherein a first value is shifted by a shift amount and the shifted value is XOR'ed with a second value. 2. The processor of claim 1, wherein the first value is to be shifted left. 3. The processor of claim 1, wherein the first value is to be shifted right. 4. The processor of claim 1, wherein the first value is shifted logically. 5. The processor of claim 1, wherein the first value is shifted arithmetically. 6. The processor of claim 1, comprising a shifter and an XOR circuit. 7. The processor of claim 1, wherein the shift and XOR instruction includes a first field to store the second value. 8. The processor of claim 1, wherein the first value is a packed datatype. 9. A system comprising: a storage to store a first instruction to perform a shift and XOR operation; a processor to execute the first instruction, the processor including logic to perform a shift and XOR instruction, wherein a first value is shifted by a shift amount and the shifted value is XOR'ed with a second value. 10. The system of claim 9, wherein the first value is to be shifted left. 11. The system of claim 9, wherein the first value is to be shifted right. 12. The system of claim 9, wherein the first value is shifted logically. 13. The system of claim 9, wherein the first value is shifted arithmetically. 14. The system of claim 9, comprising a shifter and an XOR circuit. 15. The system of claim 9, wherein the shift and XOR instruction includes a first field to store the second value. 16. The system of claim 9, wherein the first value is a packed datatype. 17. A method comprising: performing a shift and XOR instruction, wherein a first value is shifted by a shift amount and the shifted value is XOR'ed with a second value. 18. The method of claim 17, wherein the first value is to be shifted left. 19. The method of claim 17, wherein the first value is to be shifted right. 20. The method of claim 17, wherein the first value is shifted logically. 21. The method of claim 17, wherein the first value is shifted arithmetically. 22. The method of claim 17, wherein the performing comprises using a shifter and an XOR circuit. 23. The method of claim 17, wherein the shift and XOR instruction includes a first field to store the second value. 24. The method of claim 17, wherein the first value is a packed datatype. 25. A machine-readable medium having stored thereon an instruction, which if executed by a machine causes the machine to perform a method comprising: shifting a first value by a shift amount; and XORing the shifted value with a second value. 26. The machine-readable medium of claim 25, wherein the first value is to be shifted left. 27. The machine-readable medium of claim 25, wherein the first value is to be shifted right. 28. The machine-readable medium of claim 25, wherein the first value is shifted logically. 29. The machine-readable medium of claim 25, wherein the first value is shifted arithmetically. 30. The machine-readable medium of claim 25, wherein the method comprises using a shifter and an XOR circuit. 31. The machine-readable medium of claim 25, wherein the shift and XOR instruction includes a first field to store the second value. 32. The machine-readable medium of claim 25, wherein the first value is a packed datatype. 33. A method comprising: performing an exclusive OR (XOR) operation between a first shifted value and a second bit-reflected value and storing the result in a first register; checking for a minimum number of leading zeros in the result. 34. The method of claim 33, wherein if the minimum number of leading zeros is in the result, the result is indicated as corresponding to a first chunk. 35. 
The method of claim 34, wherein the first shifted value is to be shifted left by one bit position. 36. The method of claim 34, wherein the first shifted value is to be shifted right by one bit position.
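Claims 33 through 36 describe a method built from the same primitive: XOR a shifted first value with a bit-reflected second value and test the result for a minimum number of leading zeros. The following C sketch is one hedged reading of that method; the 32-bit width, the left-by-one shift (per claim 35), and all names are illustrative assumptions rather than the claimed implementation.

#include <stdbool.h>
#include <stdint.h>

/* Reverse the bit order of a 32-bit value (the "bit reflected" second value). */
static uint32_t bit_reflect32(uint32_t v)
{
    uint32_t r = 0;
    for (int i = 0; i < 32; i++, v >>= 1)
        r = (r << 1) | (v & 1u);
    return r;
}

/* Count leading zeros without relying on compiler intrinsics. */
static int leading_zeros32(uint32_t v)
{
    int n = 0;
    for (uint32_t mask = 1u << 31; mask && !(v & mask); mask >>= 1)
        n++;
    return n;
}

/* Returns true when the XOR result has at least min_lz leading zeros,
 * i.e. the result corresponds to a first chunk (claim 34). */
static bool is_chunk_boundary(uint32_t first, uint32_t second, int min_lz)
{
    uint32_t result = (first << 1) ^ bit_reflect32(second); /* claims 33, 35 */
    return leading_zeros32(result) >= min_lz;
}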
METHOD AND APPARATUS FOR PERFORMING A SHIFT AND EXCLUSIVE OR OPERATION IN A SINGLE INSTRUCTION FIELD OF THE INVENTION The present disclosure pertains to the field of computer processing. More particularly, embodiments relate to an instruction to perform a shift and exclusive OR (XOR) operation. DESCRIPTION OF RELATED ART Single-instruction-multiple data (SIMD) instructions are useful in various applications for processing numerous data elements (packed data) in parallel. Performing operations, such as a shift operation and an exclusive OR (XOR) operation, in series can decrease performance. BRIEF DESCRIPTION OF THE FIGURES The present invention is illustrated by way of example and not limitation in the Figures of the accompanying drawings: Figure 1A is a block diagram of a computer system formed with a processor that includes execution units to execute an instruction for a shift and XOR operation in accordance with one embodiment of the present invention; Figure 1B is a block diagram of another exemplary computer system in accordance with an alternative embodiment of the present invention; Figure 1C is a block diagram of yet another exemplary computer system in accordance with another alternative embodiment of the present invention; Figure 2 is a block diagram of the micro-architecture for a processor of one embodiment that includes logic circuits to perform a shift and XOR operation in accordance with the present invention; Figure 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention; Figure 3B illustrates packed data-types in accordance with an alternative embodiment; Figure 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention; Figure 3D illustrates one embodiment of an operation encoding (opcode) format; Figure 3E illustrates an alternative operation encoding (opcode) format; Figure 3F illustrates yet another alternative operation encoding format; Figure 4 is a block diagram of one embodiment of logic to perform an instruction in accordance with the present invention; and Figure 5 is a flow diagram of operations to be performed in conjunction with one embodiment. DETAILED DESCRIPTION The following description describes embodiments of a technique to perform a shift and XOR operation within a processing apparatus, computer system, or software program. In the following description, numerous specific details such as processor types, micro-architectural conditions, events, enablement mechanisms, and the like are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that embodiments of the invention may be practiced without such specific details. Additionally, some well known structures, circuits, and the like have not been shown in detail to avoid unnecessarily obscuring embodiments of the present invention. Although the following embodiments are described with reference to a processor, other embodiments are applicable to other types of integrated circuits and logic devices. The same techniques and teachings of the present invention can easily be applied to other types of circuits or semiconductor devices that can benefit from higher pipeline throughput and improved performance. The teachings of the present invention are applicable to any processor or machine that performs data manipulations. 
However, embodiments of the present invention are not limited to processors or machines that perform 256 bit, 128 bit, 64 bit, 32 bit, or 16 bit data operations and can be applied to any processor and machine in which manipulation of packed data is needed. Although the below examples describe instruction handling and distribution in the context of execution units and logic circuits, other embodiments of the present invention can be accomplished by way of software stored on a tangible medium. In one embodiment, the methods of the present invention are embodied in machine-executable instructions. The instructions can be used to cause a general-purpose or special-purpose processor that is programmed with the instructions to perform the steps of the present invention. Embodiments of the present invention may be provided as a computer program product or software which may include a machine or computer-readable medium having stored thereon instructions which may be used to program a computer (or other electronic devices) to perform a process according to the present invention. Alternatively, the steps of the present invention might be performed by specific hardware components that contain hardwired logic for performing the steps, or by any combination of programmed computer components and custom hardware components. Such software can be stored within a memory in the system. Similarly, the code can be distributed via a network or by way of other computer readable media. Thus a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including but not limited to floppy diskettes, optical disks, Compact Disc Read-Only Memory (CD-ROMs), and magneto-optical disks, Read-Only Memory (ROMs), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), magnetic or optical cards, flash memory, a transmission over the Internet, electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.) or the like. Accordingly, the computer-readable medium includes any type of media/machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer). Moreover, the present invention may also be downloaded as a computer program product. As such, the program may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client). The transfer of the program may be by way of electrical, optical, acoustical, or other forms of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem, network connection or the like). A design may go through various stages, from creation to simulation to fabrication. Data representing a design may represent the design in a number of manners. First, as is useful in simulations, the hardware may be represented using a hardware description language or another functional description language. Additionally, a circuit level model with logic and/or transistor gates may be produced at some stages of the design process. Furthermore, most designs, at some stage, reach a level of data representing the physical placement of various devices in the hardware model. 
In the case where conventional semiconductor fabrication techniques are used, the data representing the hardware model may be the data specifying the presence or absence of various features on different mask layers for masks used to produce the integrated circuit. In any representation of the design, the data may be stored in any form of a machine readable medium. An optical or electrical wave modulated or otherwise generated to transmit such information, a memory, or a magnetic or optical storage such as a disc may be the machine readable medium. Any of these mediums may "carry" or "indicate" the design or software information. When an electrical carrier wave indicating or carrying the code or design is transmitted, to the extent that copying, buffering, or re-transmission of the electrical signal is performed, a new copy is made. Thus, a communication provider or a network provider may make copies of an article (a carrier wave) embodying techniques of the present invention. In modern processors, a number of different execution units are used to process and execute a variety of code and instructions. Not all instructions are created equal, as some are quicker to complete while others can take an enormous number of clock cycles. The faster the throughput of instructions, the better the overall performance of the processor. Thus it would be advantageous to have as many instructions execute as fast as possible. However, there are certain instructions that have greater complexity and require more in terms of execution time and processor resources. For example, there are floating point instructions, load/store operations, data moves, etc. As more and more computer systems are used in internet and multimedia applications, additional processor support has been introduced over time. For instance, Single Instruction, Multiple Data (SIMD) integer/floating point instructions and Streaming SIMD Extensions (SSE) are instructions that reduce the overall number of instructions required to execute a particular program task, which in turn can reduce the power consumption. These instructions can speed up software performance by operating on multiple data elements in parallel. As a result, performance gains can be achieved in a wide range of applications including video, speech, and image/photo processing. The implementation of SIMD instructions in microprocessors and similar types of logic circuits usually involves a number of issues. Furthermore, the complexity of SIMD operations often leads to a need for additional circuitry in order to correctly process and manipulate the data. Presently a SIMD shift and XOR instruction is not available. Without a SIMD shift and XOR instruction such as that provided by embodiments of the invention, a large number of instructions and data registers may be needed to accomplish the same results in applications such as audio/video/graphics compression, processing, and manipulation. Thus, at least one shift and XOR instruction in accordance with embodiments of the present invention can reduce code overhead and resource requirements. Embodiments of the present invention provide a way to implement a shift and XOR operation as an algorithm that makes use of SIMD related hardware. Presently, it is somewhat difficult and tedious to perform shift and XOR operations on data in a SIMD register. Some algorithms require more instructions to arrange data for arithmetic operations than the actual number of instructions to execute those operations. 
By implementing embodiments of a shift and XOR operation in accordance with embodiments of the present invention, the number of instructions needed to achieve shift and XOR processing can be drastically reduced. Embodiments of the present invention involve an instruction for implementing a shift and XOR operation. In one embodiment, a shift and XOR operation... A shift and XOR operation according to one embodiment as applied to data elements can be generically represented as: DEST1 SRC1 [SRC2]; In one embodiment, SRC1 stores a first operand having a plurality of data elements and SRC2 contains a value representing the value to be shifted by the shift and XOR instruction. In other embodiments, the shift and XOR value indicator may be stored in an immediate field. In the above flow, "DEST" and "SRC" are generic terms to represent the source and destination of the corresponding data or operation. In some embodiments, they may be implemented by registers, memory, or other storage areas having other names or functions than those depicted. For example, in one embodiment, DEST1 and DEST2 may be a first and second temporary storage area (e.g., "TEMP1" and "TEMP2" register), SRC1 and SRC3 may be first and second destination storage area (e.g., "DEST1" and "DEST2" register), and so forth. In other embodiments, two or more of the SRC and DEST storage areas may correspond to different data storage elements within the same storage area (e.g., a SIMD register). A brief sketch of these semantics appears at the end of this section. Figure 1A is a block diagram of an exemplary computer system formed with a processor that includes execution units to execute an instruction for a shift and XOR operation in accordance with one embodiment of the present invention. System 100 includes a component, such as a processor 102, to employ execution units including logic to perform algorithms for processing data, in accordance with the present invention, such as in the embodiment described herein. System 100 is representative of processing systems based on the PENTIUM® III, PENTIUM® 4, Xeon™, Itanium®, XScale™ and/or StrongARM™ microprocessors available from Intel Corporation of Santa Clara, California, although other systems (including PCs having other microprocessors, engineering workstations, set-top boxes and the like) may also be used. In one embodiment, sample system 100 may execute a version of the WINDOWS™ operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (UNIX and Linux for example), embedded software, and/or graphical user interfaces may also be used. Thus, embodiments of the present invention are not limited to any specific combination of hardware circuitry and software. Embodiments are not limited to computer systems. Alternative embodiments of the present invention can be used in other devices such as handheld devices and embedded applications. Some examples of handheld devices include cellular phones, Internet Protocol devices, digital cameras, personal digital assistants (PDAs), and handheld PCs. Embedded applications can include a micro controller, a digital signal processor (DSP), system on a chip, network computers (NetPC), set-top boxes, network hubs, wide area network (WAN) switches, or any other system that performs shift and XOR operations on operands. Furthermore, some architectures have been implemented to enable instructions to operate on several data simultaneously to improve the efficiency of multimedia applications. 
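As a minimal sketch of the operation's semantics described earlier in this section (and only a sketch: the lane width, the shift direction, and the function name are assumptions, and the actual instruction would be a single fused operation rather than a loop), the per-element behavior might look like this in C:

#include <stdint.h>

/* One lane-by-lane reading of the shift-and-XOR semantics: each element of
 * SRC1 is shifted left by a common shift amount and XOR'ed with the
 * corresponding element of SRC2, with the result written to DEST1. */
static void packed_shl_xor(uint32_t *dest1, const uint32_t *src1,
                           const uint32_t *src2, unsigned shift, int lanes)
{
    for (int i = 0; i < lanes; i++)
        dest1[i] = (src1[i] << shift) ^ src2[i];
}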
As the type and volume of data increases, computers and their processors have to be enhanced to manipulate data in more efficient methods. Figure 1A is a block diagram of a computer system 100 formed with a processor 102 that includes one or more execution units 108 to perform an algorithm to shift and XOR a number of data elements in accordance with one embodiment of the present invention. One embodiment may be described in the context of a single processor desktop or server system, but alternative embodiments can be included in a multiprocessor system. System 100 is an example of a hub architecture. The computer system 100 includes a processor 102 to process data signals. The processor 102 can be a complex instruction set computer (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor, for example. The processor 102 is coupled to a processor bus 110 that can transmit data signals between the processor 102 and other components in the system 100. The elements of system 100 perform their conventional functions that are well known to those familiar with the art. In one embodiment, the processor 102 includes a Level 1 (L1) internal cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. Alternatively, in another embodiment, the cache memory can reside external to the processor 102. Other embodiments can also include a combination of both internal and external caches depending on the particular implementation and needs. Register file 106 can store different types of data in various registers including integer registers, floating point registers, status registers, and an instruction pointer register. Execution unit 108, including logic to perform integer and floating point operations, also resides in the processor 102. The processor 102 also includes a microcode (ucode) ROM that stores microcode for certain macroinstructions. For this embodiment, execution unit 108 includes logic to handle a packed instruction set 109. In one embodiment, the packed instruction set 109 includes a packed shift and XOR instruction for performing a shift and XOR on a number of operands. By including the packed instruction set 109 in the instruction set of a general-purpose processor 102, along with associated circuitry to execute the instructions, the operations used by many multimedia applications may be performed using packed data in a general-purpose processor 102. Thus, many multimedia applications can be accelerated and executed more efficiently by using the full width of a processor's data bus for performing operations on packed data. This can eliminate the need to transfer smaller units of data across the processor's data bus to perform one or more operations one data element at a time. Alternate embodiments of an execution unit 108 can also be used in micro controllers, embedded processors, graphics devices, DSPs, and other types of logic circuits. System 100 includes a memory 120. Memory 120 can be a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, flash memory device, or other memory device. Memory 120 can store instructions and/or data represented by data signals that can be executed by the processor 102. A system logic chip 116 is coupled to the processor bus 110 and memory 120. 
The system logic chip 116 in the illustrated embodiment is a memory controller hub (MCH). The processor 102 can communicate to the MCH 116 via a processor bus 110. The MCH 116 provides a high bandwidth memory path 118 to memory 120 for instruction and data storage and for storage of graphics commands, data and textures. The MCH 116 is to direct data signals between the processor 102, memory 120, and other components in the system 100 and to bridge the data signals between processor bus 110, memory 120, and system I/O 122. In some embodiments, the system logic chip 116 can provide a graphics port for coupling to a graphics controller 112. The MCH 116 is coupled to memory 120 through a memory interface 118. The graphics card 112 is coupled to the MCH 116 through an Accelerated Graphics Port (AGP) interconnect 114. System 100 uses a proprietary hub interface bus 122 to couple the MCH 116 to the I/O controller hub (ICH) 130. The ICH 130 provides direct connections to some I/O devices via a local I/O bus. The local I/O bus is a high-speed I/O bus for connecting peripherals to the memory 120, chipset, and processor 102. Some examples are the audio controller, firmware hub (flash BIOS) 128, wireless transceiver 126, data storage 124, legacy I/O controller containing user input and keyboard interfaces, a serial expansion port such as Universal Serial Bus (USB), and a network controller 134. The data storage device 124 can comprise a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device. For another embodiment of a system, an execution unit to execute an algorithm with a shift and XOR instruction can be used with a system on a chip. One embodiment of a system on a chip comprises a processor and a memory. The memory for one such system is a flash memory. The flash memory can be located on the same die as the processor and other system components. Additionally, other logic blocks such as a memory controller or graphics controller can also be located on a system on a chip. Figure 1B illustrates a data processing system 140 which implements the principles of one embodiment of the present invention. It will be readily appreciated by one of skill in the art that the embodiments described herein can be used with alternative processing systems without departing from the scope of the invention. Computer system 140 comprises a processing core 159 capable of performing SIMD operations including a shift and XOR operation. For one embodiment, processing core 159 represents a processing unit of any type of architecture, including but not limited to a CISC, a RISC or a VLIW type architecture. Processing core 159 may also be suitable for manufacture in one or more process technologies and, by being represented on a machine readable medium in sufficient detail, may be suitable to facilitate said manufacture. Processing core 159 comprises an execution unit 142, a set of register file(s) 145, and a decoder 144. Processing core 159 also includes additional circuitry (not shown) which is not necessary to the understanding of the present invention. Execution unit 142 is used for executing instructions received by processing core 159. In addition to recognizing typical processor instructions, execution unit 142 can recognize instructions in packed instruction set 143 for performing operations on packed data formats. Packed instruction set 143 includes instructions for supporting shift and XOR operations, and may also include other packed instructions. 
Execution unit 142 is coupled to register file 145 by an internal bus. Register file 145 represents a storage area on processing core 159 for storing information, including data. As previously mentioned, it is understood that the storage area used for storing the packed data is not critical. Execution unit 142 is coupled to decoder 144. Decoder 144 is used for decoding instructions received by processing core 159 into control signals and/or microcode entry points. In response to these control signals and/or microcode entry points, execution unit 142 performs the appropriate operations. Processing core 159 is coupled with bus 141 for communicating with various other system devices, which may include but are not limited to, for example, synchronous dynamic random access memory (SDRAM) control 146, static random access memory (SRAM) control 147, burst flash memory interface 148, personal computer memory card international association (PCMCIA)/compact flash (CF) card control 149, liquid crystal display (LCD) control 150, direct memory access (DMA) controller 151, and alternative bus master interface 152. In one embodiment, data processing system 140 may also comprise an I/O bridge 154 for communicating with various I/O devices via an I/O bus 153. Such I/O devices may include but are not limited to, for example, universal asynchronous receiver/transmitter (UART) 155, universal serial bus (USB) 156, Bluetooth wireless UART 157 and I/O expansion interface 158. One embodiment of data processing system 140 provides for mobile, network and/or wireless communications and a processing core 159 capable of performing SIMD operations including a shift and XOR operation. Processing core 159 may be programmed with various audio, video, imaging and communications algorithms including discrete transformations such as a Walsh-Hadamard transform, a fast Fourier transform (FFT), a discrete cosine transform (DCT), and their respective inverse transforms; compression/decompression techniques such as color space transformation, video encode motion estimation or video decode motion compensation; and modulation/demodulation (MODEM) functions such as pulse coded modulation (PCM). Some embodiments of the invention may also be applied to graphics applications, such as three dimensional ("3D") modeling, rendering, object collision detection, 3D object transformation and lighting, etc. Figure 1C illustrates yet another alternative embodiment of a data processing system capable of performing SIMD shift and XOR operations. In accordance with one alternative embodiment, data processing system 160 may include a main processor 166, a SIMD coprocessor 161, a cache memory 167, and an input/output system 168. The input/output system 168 may optionally be coupled to a wireless interface 169. SIMD coprocessor 161 is capable of performing SIMD operations including shift and XOR operations. Processing core 170 may be suitable for manufacture in one or more process technologies and, by being represented on a machine readable medium in sufficient detail, may be suitable to facilitate the manufacture of all or part of data processing system 160 including processing core 170. For one embodiment, SIMD coprocessor 161 comprises an execution unit 162 and a set of register file(s) 164. One embodiment of main processor 166 comprises a decoder 165 to recognize instructions of instruction set 163 including SIMD shift and XOR calculation instructions for execution by execution unit 162. 
For alternative embodiments, SIMD coprocessor 161 also comprises at least part of decoder 165B to decode instructions of instruction set 163. Processing core 170 also includes additional circuitry (not shown) which is not necessary to the understanding of embodiments of the present invention.

In operation, the main processor 166 executes a stream of data processing instructions that control data processing operations of a general type including interactions with the cache memory 167 and the input/output system 168. Embedded within the stream of data processing instructions are SIMD coprocessor instructions. The decoder 165 of main processor 166 recognizes these SIMD coprocessor instructions as being of a type that should be executed by an attached SIMD coprocessor 161. Accordingly, the main processor 166 issues these SIMD coprocessor instructions (or control signals representing SIMD coprocessor instructions) on the coprocessor bus 166, from where they are received by any attached SIMD coprocessors. In this case, the SIMD coprocessor 161 will accept and execute any received SIMD coprocessor instructions intended for it.

Data may be received via wireless interface 169 for processing by the SIMD coprocessor instructions. For one example, voice communication may be received in the form of a digital signal, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples representative of the voice communications. For another example, compressed audio and/or video may be received in the form of a digital bit stream, which may be processed by the SIMD coprocessor instructions to regenerate digital audio samples and/or motion video frames. For one embodiment of processing core 170, main processor 166 and SIMD coprocessor 161 are integrated into a single processing core 170 comprising an execution unit 162, a set of register file(s) 164, and a decoder 165 to recognize instructions of instruction set 163 including SIMD shift and XOR instructions.

Figure 2 is a block diagram of the micro-architecture for a processor 200 that includes logic circuits to perform a shift and XOR instruction in accordance with one embodiment of the present invention. For one embodiment of the shift and XOR instruction, the instruction can shift a floating point mantissa value to the right by the amount indicated by the exponent, XOR the shifted value by a value, and produce the final result. In one embodiment the in-order front end 201 is the part of the processor 200 that fetches macro-instructions to be executed and prepares them to be used later in the processor pipeline. The front end 201 may include several units. In one embodiment, the instruction prefetcher 226 fetches macro-instructions from memory and feeds them to an instruction decoder 228, which in turn decodes them into primitives called micro-instructions or micro-operations (also called micro-ops or uops) that the machine can execute. In one embodiment, the trace cache 230 takes decoded uops and assembles them into program ordered sequences or traces in the uop queue 234 for execution. When the trace cache 230 encounters a complex macro-instruction, the microcode ROM 232 provides the uops needed to complete the operation. Many macro-instructions are converted into a single micro-op, whereas others need several micro-ops to complete the full operation. In one embodiment, if more than four micro-ops are needed to complete a macro-instruction, the decoder 228 accesses the microcode ROM 232 to complete the macro-instruction.
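One way to picture the arithmetic just described for the shift and XOR instruction, shifting a floating point mantissa right by the amount indicated by the exponent and XORing the result with a value, is the C sketch below. It is only an illustration under stated assumptions: the helper name, the clamping of the shift amount, and the use of IEEE-754 double precision field widths are not taken from the disclosure.

    #include <stdint.h>
    #include <string.h>

    /* Illustrative sketch only: shift a double's 52-bit mantissa right by
     * its unbiased exponent, then XOR with a mask. Field widths assume
     * IEEE-754 double precision; the clamp keeps the shift count in range. */
    static uint64_t shift_xor_mantissa(double x, uint64_t xor_mask)
    {
        uint64_t bits;
        memcpy(&bits, &x, sizeof bits);                     /* reinterpret bits safely */

        uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;      /* low 52 bits */
        int exponent = (int)((bits >> 52) & 0x7FF) - 1023;  /* unbiased exponent */

        if (exponent < 0)  exponent = 0;                    /* clamp for illustration */
        if (exponent > 63) exponent = 63;

        return (mantissa >> exponent) ^ xor_mask;
    }

In hardware, such an operation would be carried out as one or a few micro-ops rather than the explicit bit manipulation shown here; the sketch only fixes the intended input/output behavior.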
For one embodiment, a packed shift and XOR instruction can be decoded into a small number of micro-ops for processing at the instruction decoder 228. In another embodiment, an instruction for a packed shift and XOR algorithm can be stored within the microcode ROM 232 should a number of micro-ops be needed to accomplish the operation. The trace cache 230 refers to an entry point programmable logic array (PLA) to determine a correct micro-instruction pointer for reading the microcode sequences for the shift and XOR algorithm in the microcode ROM 232. After the microcode ROM 232 finishes sequencing micro-ops for the current macro-instruction, the front end 201 of the machine resumes fetching micro-ops from the trace cache 230. Some SIMD and other multimedia types of instructions are considered complex instructions. Most floating point related instructions are also complex instructions. As such, when the instruction decoder 228 encounters a complex macro-instruction, the microcode ROM 232 is accessed at the appropriate location to retrieve the microcode sequence for that macro-instruction. The various micro-ops needed for performing that macro-instruction are communicated to the out-of-order execution engine 203 for execution at the appropriate integer and floating point execution units.

The out-of-order execution engine 203 is where the micro-instructions are prepared for execution. The out-of-order execution logic has a number of buffers to smooth out and re-order the flow of micro-instructions to optimize performance as they go down the pipeline and get scheduled for execution. The allocator logic allocates the machine buffers and resources that each uop needs in order to execute. The register renaming logic renames logic registers onto entries in a register file. The allocator also allocates an entry for each uop in one of the two uop queues, one for memory operations and one for non-memory operations, in front of the instruction schedulers: memory scheduler, fast scheduler 202, slow/general floating point scheduler 204, and simple floating point scheduler 206. The uop schedulers 202, 204, 206 determine when a uop is ready to execute based on the readiness of their dependent input register operand sources and the availability of the execution resources the uops need to complete their operation. The fast scheduler 202 of this embodiment can schedule on each half of the main clock cycle while the other schedulers can only schedule once per main processor clock cycle. The schedulers arbitrate for the dispatch ports to schedule uops for execution.

Register files 208, 210 sit between the schedulers 202, 204, 206 and the execution units 212, 214, 216, 218, 220, 222, 224 in the execution block 211. There is a separate register file 208, 210 for integer and floating point operations, respectively. Each register file 208, 210 of this embodiment also includes a bypass network that can bypass or forward just-completed results that have not yet been written into the register file to new dependent uops. The integer register file 208 and the floating point register file 210 are also capable of communicating data with the other. For one embodiment, the integer register file 208 is split into two separate register files, one register file for the low order 32 bits of data and a second register file for the high order 32 bits of data. The floating point register file 210 of one embodiment has 128 bit wide entries because floating point instructions typically have operands from 64 to 128 bits in width.
The execution block 211 contains the execution units 212, 214, 216, 218, 220, 222, 224, where the instructions are actually executed. This section includes the register files 208, 210 that store the integer and floating point data operand values that the micro-instructions need to execute. The processor 200 of this embodiment comprises a number of execution units: address generation unit (AGU) 212, AGU 214, fast ALU 216, fast ALU 218, slow ALU 220, floating point ALU 222, and floating point move unit 224. For this embodiment, the floating point execution blocks 222, 224 execute floating point, MMX, SIMD, and SSE operations. The floating point ALU 222 of this embodiment includes a 64 bit by 64 bit floating point divider to execute divide, square root, and remainder micro-ops. For embodiments of the present invention, any act involving a floating point value occurs with the floating point hardware. For example, conversions between integer format and floating point format involve a floating point register file. Similarly, a floating point divide operation happens at a floating point divider. On the other hand, non-floating point numbers and integer types are handled with integer hardware resources. The simple, very frequent ALU operations go to the high-speed ALU execution units 216, 218. The fast ALUs 216, 218 of this embodiment can execute fast operations with an effective latency of half a clock cycle. For one embodiment, most complex integer operations go to the slow ALU 220, as the slow ALU 220 includes integer execution hardware for long latency types of operations, such as a multiplier, shifts, flag logic, and branch processing. Memory load/store operations are executed by the AGUs 212, 214. For this embodiment, the integer ALUs 216, 218, 220 are described in the context of performing integer operations on 64 bit data operands. In alternative embodiments, the ALUs 216, 218, 220 can be implemented to support a variety of data widths including 16, 32, 128, 256 bits, etc. Similarly, the floating point units 222, 224 can be implemented to support a range of operands having bits of various widths. For one embodiment, the floating point units 222, 224 can operate on 128 bits wide packed data operands in conjunction with SIMD and multimedia instructions.

The term "registers" is used herein to refer to the on-board processor storage locations that are used as part of macro-instructions to identify operands. In other words, the registers referred to herein are those that are visible from the outside of the processor (from a programmer's perspective). However, the registers of an embodiment should not be limited in meaning to a particular type of circuit. Rather, a register of an embodiment need only be capable of storing and providing data, and performing the functions described herein. The registers described herein can be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, combinations of dedicated and dynamically allocated physical registers, etc. In one embodiment, integer registers store thirty-two bit integer data. A register file of one embodiment also contains sixteen XMM and general purpose registers (e.g., the "EM64T" additions), and eight multimedia SIMD registers for packed data.
For the discussions below, the registers are understood to be data registers designed to hold packed data, such as 64 bits wide MMX™ registers (also referred to as 'mm' registers in some instances) in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. These MMX registers, available in both integer and floating point forms, can operate with packed data elements that accompany SIMD and SSE instructions. Similarly, 128 bits wide XMM registers relating to SSE2, SSE3, SSE4, or beyond (referred to generically as "SSEx") technology can also be used to hold such packed data operands. In this embodiment, in storing packed data and integer data, the registers do not need to differentiate between the two data types. In one embodiment, other registers or combinations of registers may be used to store 256 bits or more of data. In the examples of the following figures, a number of data operands are described.

Figure 3A illustrates various packed data type representations in multimedia registers according to one embodiment of the present invention. Fig. 3A illustrates data types for a packed byte 310, a packed word 320, and a packed doubleword (dword) 330 for 128 bits wide operands. The packed byte format 310 of this example is 128 bits long and contains sixteen packed byte data elements. A byte is defined here as 8 bits of data. Information for each byte data element is stored in bit 7 through bit 0 for byte 0, bit 15 through bit 8 for byte 1, bit 23 through bit 16 for byte 2, and finally bit 120 through bit 127 for byte 15. Thus, all available bits are used in the register. This storage arrangement increases the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in parallel.

Generally, a data element is an individual piece of data that is stored in a single register or memory location with other data elements of the same length. In packed data sequences relating to SSEx technology, the number of data elements stored in an XMM register is 128 bits divided by the length in bits of an individual data element. Similarly, in packed data sequences relating to MMX and SSE technology, the number of data elements stored in an MMX register is 64 bits divided by the length in bits of an individual data element. Although the data types illustrated in Fig. 3A are 128 bits long, embodiments of the present invention can also operate with 64 bit wide or other sized operands. The packed word format 320 of this example is 128 bits long and contains eight packed word data elements. Each packed word contains sixteen bits of information. The packed doubleword format 330 of Fig. 3A is 128 bits long and contains four packed doubleword data elements. Each packed doubleword data element contains thirty-two bits of information. A packed quadword is 128 bits long and contains two packed quadword data elements.

Figure 3B illustrates alternative in-register data storage formats. Each packed data can include more than one independent data element. Three packed data formats are illustrated: packed half 341, packed single 342, and packed double 343. One embodiment of packed half 341, packed single 342, and packed double 343 contains fixed-point data elements. For an alternative embodiment, one or more of packed half 341, packed single 342, and packed double 343 may contain floating-point data elements.
One alternative embodiment of packed half 341 is one hundred twenty-eight bits long containing eight 16-bit data elements. One embodiment of packed single 342 is one hundred twenty-eight bits long and contains four 32-bit data elements. One embodiment of packed double 343 is one hundred twenty-eight bits long and contains two 64-bit data elements. It will be appreciated that such packed data formats may be further extended to other register lengths, for example, to 96 bits, 160 bits, 192 bits, 224 bits, 256 bits or more.

Figure 3C illustrates various signed and unsigned packed data type representations in multimedia registers according to one embodiment of the present invention. Unsigned packed byte representation 344 illustrates the storage of an unsigned packed byte in a SIMD register. Information for each byte data element is stored in bit seven through bit zero for byte zero, bit fifteen through bit eight for byte one, bit twenty-three through bit sixteen for byte two, and finally bit one hundred twenty through bit one hundred twenty-seven for byte fifteen. Thus, all available bits are used in the register. This storage arrangement can increase the storage efficiency of the processor. As well, with sixteen data elements accessed, one operation can now be performed on sixteen data elements in a parallel fashion. Signed packed byte representation 345 illustrates the storage of a signed packed byte. Note that the eighth bit of every byte data element is the sign indicator. Unsigned packed word representation 346 illustrates how word seven through word zero are stored in a SIMD register. Signed packed word representation 347 is similar to the unsigned packed word in-register representation 346. Note that the sixteenth bit of each word data element is the sign indicator. Unsigned packed doubleword representation 348 shows how doubleword data elements are stored. Signed packed doubleword representation 349 is similar to unsigned packed doubleword in-register representation 348. Note that the necessary sign bit is the thirty-second bit of each doubleword data element.

Figure 3D is a depiction of one embodiment of an operation encoding (opcode) format 360, having thirty-two or more bits, and register/memory operand addressing modes corresponding with a type of opcode format described in the "IA-32 Intel Architecture Software Developer's Manual Volume 2: Instruction Set Reference," which is available from Intel Corporation, Santa Clara, CA on the world-wide-web (www) at intel.com/design/litcentr. In one embodiment, a shift and XOR operation may be encoded by one or more of fields 361 and 362. Up to two operand locations per instruction may be identified, including up to two source operand identifiers 364 and 365. For one embodiment of the shift and XOR instruction, destination operand identifier 366 is the same as source operand identifier 364, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 366 is the same as source operand identifier 365, whereas in other embodiments they are different. In one embodiment of a shift and XOR instruction, one of the source operands identified by source operand identifiers 364 and 365 is overwritten by the results of the shift and XOR operations, whereas in other embodiments identifier 364 corresponds to a source register element and identifier 365 corresponds to a destination register element.
For one embodiment of the shift and XOR instruction, operand identifiers 364 and 365 may be used to identify 32-bit or 64-bit source and destination operands.

Figure 3E is a depiction of another alternative operation encoding (opcode) format 370, having forty or more bits. Opcode format 370 corresponds with opcode format 360 and comprises an optional prefix byte 378. The type of shift and XOR operation may be encoded by one or more of fields 378, 371, and 372. Up to two operand locations per instruction may be identified by source operand identifiers 374 and 375 and by prefix byte 378. For one embodiment of the shift and XOR instruction, prefix byte 378 may be used to identify 32-bit or 64-bit source and destination operands. For one embodiment of the shift and XOR instruction, destination operand identifier 376 is the same as source operand identifier 374, whereas in other embodiments they are different. For an alternative embodiment, destination operand identifier 376 is the same as source operand identifier 375, whereas in other embodiments they are different. In one embodiment, one of the operands identified by operand identifiers 374 and 375 is overwritten by the results of the shift and XOR operations, whereas in other embodiments the result of the shift and XOR of the operands identified by identifiers 374 and 375 is written to another data element in another register. Opcode formats 360 and 370 allow register to register, memory to register, register by memory, register by register, register by immediate, and register to memory addressing specified in part by MOD fields 363 and 373 and by optional scale-index-base and displacement bytes.

Turning next to Figure 3F, in some alternative embodiments, 64 bit single instruction multiple data (SIMD) arithmetic operations may be performed through a coprocessor data processing (CDP) instruction. Operation encoding (opcode) format 380 depicts one such CDP instruction having CDP opcode fields 382 and 389. The type of CDP instruction, for alternative embodiments of shift and XOR operations, may be encoded by one or more of fields 383, 384, 387, and 388. Up to three operand locations per instruction may be identified, including up to two source operand identifiers 385 and 390 and one destination operand identifier 386. One embodiment of the coprocessor can operate on 8, 16, 32, and 64 bit values. For one embodiment, the shift and XOR operation is performed on floating point data elements. In some embodiments, a shift and XOR instruction may be executed conditionally, using selection field 381. For some shift and XOR instructions, source data sizes may be encoded by field 383. In some embodiments of the shift and XOR instruction, zero (Z), negative (N), carry (C), and overflow (V) detection can be done on SIMD fields. For some instructions, the type of saturation may be encoded by field 384.

Figure 4 is a block diagram of one embodiment of logic to perform a shift and XOR operation on packed data operands in accordance with the present invention. Embodiments of the present invention can be implemented to function with various types of operands such as those described above. For simplicity, the following discussions and examples below are in the context of a shift and XOR instruction to process data elements. In one embodiment, a first operand 401 is shifted by shifter 410 by an amount specified by input 405. In one embodiment it is a right shift.
However, in other embodiments the shifter performs a left shift operation. In some embodiments the operand is a scalar value, whereas in other embodiments it is a packed data value having a number of different possible data sizes and types (e.g., floating point, integer). In one embodiment, the shift count 405 is a packed (or "vector") value, each element of which corresponds to an element of a packed operand to be shifted by the corresponding shift count element. In other embodiments, the shift count applies to all elements of the first data operand. Furthermore, in some embodiments, the shift count is specified by a field in the instruction, such as an immediate, r/m, or other field. In other embodiments, the shift count is specified by a register indicated by the instruction. The shifted operand is then XOR'ed with a value 430 by logic 420, and the XOR'ed result is stored in a destination storage location (e.g., register) 425. In one embodiment, the XOR value 430 is a packed (or "vector") value, each element of which corresponds to an element of a packed operand to be XOR'ed with the corresponding XOR element. In other embodiments, the XOR value 430 applies to all elements of the first data operand. Furthermore, in some embodiments, the XOR value is specified by a field in the instruction, such as an immediate, r/m, or other field. In other embodiments, the XOR value is specified by a register indicated by the instruction.

Figure 5 illustrates the operation of a shift and XOR instruction according to one embodiment of the present invention. At operation 501, if a shift and XOR instruction is received, a first operand is shifted by a shift count at operation 505. In one embodiment it is a right shift. However, in other embodiments the shifter performs a left shift operation. In some embodiments the operand is a scalar value, whereas in other embodiments it is a packed data value having a number of different possible data sizes and types (e.g., floating point, integer). In one embodiment, the shift count 405 is a packed (or "vector") value, each element of which corresponds to an element of a packed operand to be shifted by the corresponding shift count element. In other embodiments, the shift count applies to all elements of the first data operand. Furthermore, in some embodiments, the shift count is specified by a field in the instruction, such as an immediate, r/m, or other field. In other embodiments, the shift count is specified by a register indicated by the instruction. At operation 510, the shifted value is XOR'ed with an XOR value. In one embodiment, the XOR value 430 is a packed (or "vector") value, each element of which corresponds to an element of a packed operand to be XOR'ed with the corresponding XOR element. In other embodiments, the XOR value 430 applies to all elements of the first data operand. Furthermore, in some embodiments, the XOR value is specified by a field in the instruction, such as an immediate, r/m, or other field. In other embodiments, the XOR value is specified by a register indicated by the instruction. At operation 515, the shifted and XOR'ed value is stored in a location. In one embodiment, the location is a scalar register. In another embodiment, the location is a packed data register. In another embodiment, the destination location is also used as a source location, such as a packed data register specified by the instruction.
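The sequence of operations 501 through 515 can be emulated in plain C. The sketch below is illustrative only: it assumes 64-bit unsigned operands, a right shift, and, for the packed case, a per-element shift count and XOR element, each of which is merely one of the embodiments described above; the function names are not taken from the disclosure.

    #include <stdint.h>
    #include <stddef.h>

    /* Scalar emulation of operations 501-515: shift, then XOR, then store. */
    static uint64_t shift_xor(uint64_t src, unsigned count, uint64_t xor_value)
    {
        return (src >> count) ^ xor_value;   /* right shift per one embodiment */
    }

    /* Packed ("vector") emulation: each element is shifted by its own count
     * (assumed < 64) and XOR'ed with its own element, mirroring the
     * per-element variants of shift count 405 and XOR value 430 above. */
    static void packed_shift_xor(uint64_t *dst, const uint64_t *src,
                                 const unsigned *counts, const uint64_t *xors,
                                 size_t nelems)
    {
        for (size_t i = 0; i < nelems; i++)
            dst[i] = (src[i] >> counts[i]) ^ xors[i];
    }

Calling packed_shift_xor with dst equal to src models the embodiment in which a source operand is overwritten by the result, while a separate dst models a distinct destination location.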
In other embodiments, the destination location is a different location than the source locations storing the initial operand or other values, such as the shift count or the XOR value.

In one embodiment, the shift and XOR instruction is useful for performing data de-duplication in various computer applications. Data de-duplication attempts to find common blocks of data between files in order to optimize disk storage and/or network bandwidth. In one embodiment, a shift and XOR instruction is useful for improving performance in data de-duplication operations, such as finding chunk boundaries using a rolling hash, a hash digest (e.g., SHA1 or MD5), and compression of unique chunks (using fast Lempel-Ziv schemes). For example, one data de-duplication algorithm can be illustrated by the following pseudo-code, written here in C form with the trailing-zero test made explicit for a 32-bit unsigned rolling hash v:

    while (p < max) {
        v = (v >> 1) ^ scramble[(unsigned char)*p];
        if ((v & ((1u << z) - 1)) == 0) {  /* v has at least z trailing zeros */
            ret = 1;
            break;
        }
        p++;
    }

In the above algorithm, the scramble table is a 256-entry array of random 32-bit constants and v is the rolling hash that holds a hash value of the past 32 bytes of the data. When a chunk boundary is found, the algorithm returns with ret=1 and the position, p, denotes the boundary of the chunk. The value z can be a constant such as 12-15 that results in good chunk detection and can be application specific. In one embodiment, the shift and XOR instruction can help the above algorithm operate at a rate of about 2 cycles/byte. In other embodiments, the shift and XOR instruction helps the algorithm to perform even faster or slower, depending on the use. At least one embodiment in which the shift and XOR instruction is used can be illustrated by the following pseudo-code, again with the boundary test, here on leading zeros, made explicit:

    while (p < max) {
        v = (v << 1) ^ brefl_scramble[(unsigned char)*p];
        if ((v >> (32 - z)) == 0) {  /* v has at least z leading zeros */
            ret = 1;
            break;
        }
        p++;
    }

In the above algorithm, each entry of the brefl_scramble array contains the bit-reflected version of the corresponding entry in the original scramble array. In one embodiment, the above algorithm shifts v left instead of right, and v contains a bit-reflected version of the rolling hash. In one embodiment, the check for a chunk boundary is performed by checking for a minimum number of leading zeros. In other embodiments, the shift and XOR instruction may be used in other useful computer operations and algorithms. Furthermore, embodiments help to improve the performance of many programs that use shift and XOR operations extensively.

Thus, techniques for performing a shift and XOR instruction are disclosed. While certain exemplary embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of and not restrictive on the broad invention, and that this invention not be limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those ordinarily skilled in the art upon studying this disclosure. In an area of technology such as this, where growth is fast and further advancements are not easily foreseen, the disclosed embodiments may be readily modifiable in arrangement and detail as facilitated by enabling technological advancements without departing from the principles of the present disclosure or the scope of the accompanying claims.
An apparatus may include an output pin, an output driver circuit communicatively coupled to the output pin and configured to output a rising signal or falling signal on the output pin, and a programmable current source communicatively coupled to the output driver circuit. The programmable current source may be configured to drive a speed of a rising signal or of a falling signal on the output pin and, based upon adjustment data, increase or decrease current to the output driver circuit to change the speed of the rising signal or of the falling signal on the output pin.
CLAIMS

We claim:

1. An apparatus, comprising:
a first output pin;
a first output driver circuit communicatively coupled to the first output pin and configured to output a rising signal or falling signal on the first output pin; and
a first programmable current source communicatively coupled to the first output driver circuit and configured to:
drive a speed of a rising signal or of a falling signal on the first output pin; and
based upon first adjustment data, increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin.

2. The apparatus of Claim 1, further comprising:
a second output pin;
a second output driver circuit communicatively coupled to the second output pin and configured to output a rising signal or falling signal on the second output pin; and
a second programmable current source communicatively coupled to the second output driver circuit and configured to:
drive a speed of a rising signal or of a falling signal on the second output pin; and
based upon second adjustment data, increase or decrease current to the second output driver circuit to change the speed of the rising signal or of the falling signal on the second output pin.

3. The apparatus of Claim 2, wherein:
the first output pin is a DM pin of a universal serial bus (USB) connection; and
the second output pin is a DP pin of the USB connection.

4. The apparatus of any of Claims 1-3, further comprising a second programmable current source communicatively coupled to the first output driver circuit and configured to:
drive a speed of the rising signal on the first output pin; and
based upon second adjustment data, increase or decrease current to the first output driver circuit to change a speed of the rising signal;
wherein the first programmable current source is further configured to drive the speed of the falling signal on the first output pin.

5. The apparatus of any of Claims 1-4, wherein the first programmable current source is configured to use the first adjustment data to increase or decrease current to the first output driver circuit in order to more closely match a speed of a rising signal or of a falling signal on a second output pin.

6. The apparatus of any of Claims 1-5, wherein the first adjustment data includes information of a relative increase or decrease of the speed of a rising signal or of a falling signal.

7. The apparatus of any of Claims 1-6, wherein the first programmable current source is configured to increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin in response to a change in a load on the first output pin.

8. A system, comprising:
a first output pin;
a second output pin;
a first output driver circuit communicatively coupled to the first output pin and configured to output a rising signal or falling signal on the first output pin; and
a first programmable current source communicatively coupled to the first output driver circuit and configured to:
drive a speed of a rising signal or of a falling signal on the first output pin; and
based upon first adjustment data, increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin.

9.
The system of Claim 8, further comprising:
a second output driver circuit communicatively coupled to the second output pin and configured to output a rising signal or falling signal on the second output pin; and
a second programmable current source communicatively coupled to the second output driver circuit and configured to:
drive a speed of a rising signal or of a falling signal on the second output pin; and
based upon second adjustment data, increase or decrease current to the second output driver circuit to change the speed of the rising signal or of the falling signal on the second output pin.

10. The system of Claim 9, wherein:
the first output pin is a DM pin of a universal serial bus (USB) connection; and
the second output pin is a DP pin of the USB connection.

11. The system of any of Claims 8-10, further comprising a second programmable current source communicatively coupled to the first output driver circuit and configured to:
drive a speed of the rising signal on the first output pin; and
based upon second adjustment data, increase or decrease current to the first output driver circuit to change a speed of the rising signal;
wherein the first programmable current source is further configured to drive the speed of the falling signal on the first output pin.

12. The system of any of Claims 8-11, wherein the first programmable current source is configured to use the first adjustment data to increase or decrease current to the first output driver circuit in order to more closely match a speed of a rising signal or of a falling signal on the second output pin.

13. The system of any of Claims 8-12, wherein the first adjustment data includes information of a relative increase or decrease of the speed of a rising signal or of a falling signal.

14. The system of any of Claims 8-13, wherein the first programmable current source is configured to increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin in response to a change in a load on the first output pin.

15. A method of operating a system for matching slew rates of a connection, the connection comprising a first output pin and a second output pin, the method comprising:
providing current from a first programmable current source to output a rising signal or falling signal on the first output pin;
driving a speed of the rising signal or of the falling signal on the first output pin; and
based upon first adjustment data, increasing or decreasing current to change the speed of the rising signal or of the falling signal on the first output pin.

16. The method of Claim 15, further comprising:
providing current from a second programmable current source to output a rising signal or falling signal on the second output pin;
driving a speed of a rising signal or of a falling signal on the second output pin; and
based upon second adjustment data, increasing or decreasing current to change the speed of the rising signal or of the falling signal on the second output pin.

17. The method of Claim 16, wherein:
the first output pin is a DM pin of a universal serial bus (USB) connection; and
the second output pin is a DP pin of the USB connection.

18.
The method of any of Claims 15-17, further comprising:
providing current from a second programmable current source to output a rising signal or falling signal on the first output pin; and
based upon second adjustment data, increasing or decreasing current to change a speed of the rising signal;
wherein the first programmable current source is further configured to drive the speed of the falling signal on the first output pin.

19. The method of any of Claims 15-18, further comprising using the first adjustment data to increase or decrease current in order to more closely match a speed of a rising signal or of a falling signal on the second output pin.

20. The method of any of Claims 15-19, further comprising increasing or decreasing current to change the speed of the rising signal or of the falling signal on the first output pin in response to a change in a load on the first output pin.
RISE AND FALL TIME MISMATCH ADJUSTMENT CIRCUIT FOR USB-ON-THE-GO MODULES

FIELD OF THE INVENTION

The present disclosure relates to serial communications and, more particularly, to a rise and fall time mismatch adjustment circuit for universal serial bus (USB) on-the-go (OTG) modules.

BACKGROUND

A physical USB connector or interface may include four shielded wires terminated by pins. Pin 1, which is the VBUS, is used to power any connected peripheral by supplying a voltage such as a +5V voltage from the USB host. Pin 2 is the negative data terminal denoted as D- (DM), while Pin 3 is the positive data terminal denoted as D+ (DP). Pin 4 is the ground connection (GND).

USB OTG enables a USB host, such as a head unit, computer, or other suitable electronic device, to allow other USB devices, such as USB flash drives, digital cameras, mice or keyboards, to be attached to it. Use of USB OTG allows those devices to switch back and forth between the roles of host and device. For instance, a mobile phone may read from removable media as the host device, but present itself as a USB Mass Storage Device when connected to a host computer. USB OTG allows a USB element to perform both master and slave roles. Whenever two USB devices are connected and one of them is a USB OTG device, they establish a communication link. The device controlling the link is called the master or host, while the other is called the slave or peripheral.

USB OTG defines two roles for devices: OTG A-device and OTG B-device, specifying which side supplies power to the link, and which initially is the host. The OTG A-device is a power supplier, and an OTG B-device is a power consumer. In the default link configuration, the A-device acts as a USB host with the B-device acting as a USB peripheral. The host and peripheral modes may be exchanged later by using a host negotiation protocol (HNP). The initial role of each device is defined by which mini plug a user inserts into its receptacle.

Standard USB uses a master/slave architecture; a host acts as the master device for the entire bus, and a USB device acts as a slave. If implementing standard USB, devices must assume one role or the other, with computers generally set up as hosts, while (for example) printers normally function as slaves. In the absence of USB OTG, cell phones often implemented slave functionality to allow easy transfer of data to and from computers. Such phones, as slaves, could not readily be connected to printers as they also implemented the slave role.

When a device is plugged into the USB bus, the master device, or host, sets up communications with the device and handles service provisioning (the host's software enables or does the needed data-handling such as file managing or other desired kind of data communication or function). That allows the devices to be greatly simplified compared to the host; for example, a mouse contains very little logic and relies on the host to do almost all of the work. The host controls all data transfers over the bus, with the devices capable only of signaling (when polled) that they require attention. To transfer data between two devices, for example from a phone to a printer, the host first reads the data from one device, then writes it to the other.

While the master-slave arrangement works for some devices, many devices can act either as master or as slave depending on what else shares the bus.
For instance, a computer printer is normally a slave device, but when a USB flash drive containing images is plugged into the printer's USB port with no computer present (or at least turned off), it would be useful for the printer to take on the role of host, allowing it to communicate with the flash drive directly and to print images from it.

USB OTG recognizes that a device can perform both master and slave roles, and so subtly changes the terminology. With OTG, a device can be either a host when acting as a link master, or a "peripheral" when acting as a link slave. The choice between host and peripheral roles is handled entirely by which end of the cable the device is connected to. The device connected to the "A" end of the cable at start-up, known as the "A-device", acts as the default host, while the "B" end acts as the default peripheral, known as the "B-device".

After initial startup, setup for the bus operates as it does with the normal USB standard, with the A-device setting up the B-device and managing all communications. However, when the same A-device is plugged into another USB system or a dedicated host becomes available, it can become a slave.

SUMMARY

Embodiments of the present disclosure include an apparatus. The apparatus may include a first output pin, a first output driver circuit communicatively coupled to the first output pin and configured to output a rising signal or falling signal on the first output pin, and a first programmable current source communicatively coupled to the first output driver circuit. The first programmable current source may be configured to drive a speed of a rising signal or of a falling signal on the first output pin and, based upon first adjustment data, increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin. In combination with any of the above embodiments, the apparatus may further include a second output pin, a second output driver circuit communicatively coupled to the second output pin and configured to output a rising signal or falling signal on the second output pin, and a second programmable current source communicatively coupled to the second output driver circuit. The second programmable current source may be configured to drive a speed of a rising signal or of a falling signal on the second output pin, and, based upon second adjustment data, increase or decrease current to the second output driver circuit to change the speed of the rising signal or of the falling signal on the second output pin. In combination with any of the above embodiments, the first output pin may be a DM pin of a USB connection. In combination with any of the above embodiments, the second output pin may be a DP pin of the USB connection. In combination with any of the above embodiments, the second programmable current source may be communicatively coupled to the first output driver circuit and configured to drive a speed of the rising signal on the first output pin and, based upon second adjustment data, increase or decrease current to the first output driver circuit to change a speed of the rising signal. In combination with any of the above embodiments, the first programmable current source may be further configured to drive the speed of the falling signal on the first output pin.
In combination with any of the above embodiments, the first programmable current source may be configured to use the first adjustment data to increase or decrease current to the first output driver circuit in order to more closely match a speed of a rising signal or of a falling signal on a second output pin. In combination with any of the above embodiments, the first adjustment data may include information of a relative increase or decrease of the speed of a rising signal or of a falling signal. In combination with any of the above embodiments, the first programmable current source may be configured to increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin in response to a change in a load on the first output pin.Embodiments of the present disclosure may include a system with a first output pin, a second output pin, a first output driver circuit communicatively coupled to the first output pin and configured to output a rising signal or falling signal on the first output pin, and a first programmable current source communicatively coupled to the first output driver circuit. The first programmable current source may be configured to drive a speed of a rising signal or of a falling signal on the first output pin and, based upon first adjustment data, increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin. In combination with any of the above embodiments, the system may include a second output driver circuit communicatively coupled to the second output pin and configured to output a rising signal or falling signal on the second output pin and a second programmable current source communicatively coupled to the second output driver circuit. The second programmable current source may be configured to drive a speed of a rising signal or of a falling signal on the second output pin and, based upon second adjustment data, increase or decrease current to the second output driver circuit to change the speed of the rising signal or of the falling signal on the second output pin. In combination with any of the above embodiments, the first output pin is a DM pin of a USB connection and the second output pin is a DP pin of the USB connection. In combination with any of the above embodiments, the second programmable current source may be communicatively coupled to the first output driver circuit and configured to drive a speed of the rising signal on the first output pin and, based upon second adjustment data, increase or decrease current to the first output driver circuit to change a speed of the rising signal. In combination with any of the above embodiments, the first programmable current source may be further configured to drive the speed of the falling signal on the first output pin. In combination with any of the above embodiments, the first programmable current source may be configured to use the first adjustment data to increase or decrease current to the first output driver circuit in order to more closely match a speed of a rising signal or of a falling signal on the second output pin. In combination with any of the above embodiments, the first adjustment data includes information of a relative increase or decrease of the speed of a rising signal or of a falling signal. 
In combination with any of the above embodiments, the first programmable current source may be configured to increase or decrease current to the first output driver circuit to change the speed of the rising signal or of the falling signal on the first output pin in response to a change in a load on the first output pin.

Embodiments of the present disclosure may include USB hosts, USB hubs, USB master devices, USB controllers, or electronic devices including any of the apparatuses or systems of the above embodiments.

Embodiments of the present disclosure may include methods performed by any of the embodiments of apparatuses, systems, USB hosts, USB hubs, USB master devices, USB controllers, or electronic devices described above.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGURE 1 is an illustration of a circuit configured to perform mismatch adjustment, according to embodiments of the present disclosure.

FIGURE 2 is an illustration of a circuit for implementing control of pins for a USB connection, according to embodiments of the present disclosure.

FIGURES 3A and 3B are a more detailed schematic of a circuit configured to perform mismatch adjustment, according to embodiments of the present disclosure.

FIGURES 4A and 4B are an illustration of another, more detailed schematic of a circuit configured to perform mismatch adjustment, according to embodiments of the present disclosure.

DETAILED DESCRIPTION

FIGURE 1 is an illustration of a circuit 100 configured to perform mismatch adjustment, according to embodiments of the present disclosure. In one embodiment, circuit 100 may be configured to perform mismatch adjustment for rise and fall times on pins, connectors, or other interfaces. In another embodiment, circuit 100 may be configured to perform mismatch adjustment for USB modules, circuits, devices, elements, pins, packages, or other entities. In yet another embodiment, circuit 100 may be configured to perform mismatch adjustment for USB OTG entities.

Circuit 100 may be applied for respective DP and DM pins on a given USB connection. Accordingly, multiple instances of circuit 100 may be included within a given USB entity. For example, an instance of circuit 100 may be used for each of the respective DP and DM pins on a USB connection, resulting in at least two instances of circuit 100 for a USB connection.

Circuit 100 may include a pin 106. Pin 106 may implement an instance of a DP or an instance of a DM, depending upon the use of circuit 100. Circuit 100 may include an output driver 108. Output driver 108 may be implemented in any suitable manner, such as by the serial or cascade connection of a PMOS and an NMOS transistor. Output driver 108 may include capacitors between the gate, source, and drain pins of the transistors. The capacitors may be a symbolic representation of the intrinsic capacitance of the transistors, rather than physical capacitors placed within circuit 100. The capacitance may be a relatively large value due to the large size of the MOS device. Pin 106 may be coupled to the intersection of the transistors in output driver 108. Circuit 100 may also be configured to implement programmable line termination for both low- and high-speed host applications, as well as tri-state capability. The configuration of circuit 100 for low-speed or high-speed operation may be determined by a unit current, shown below current bias source (I_ref_bias) 202 in FIGURE 2.
Current bias source 202 may be reduced by a certain factor to reduce or increase the rise and fall time.

In one embodiment, circuit 100 may be configured to drive the rise time and fall time of voltages or signals on pin 106 through a fall time current control circuit (FTCCC) 102 communicatively coupled to driver 108. FTCCC 102 may be connected to driver 108 through a gate of the NMOS component 116 of driver 108. The signal may be appropriately switched by a switch 110 depending upon the mode of operation of circuit 100. Switch 110 may be configured to connect the gate of the NMOS 116 of driver 108 to either FTCCC 102 or to ground. Switch 110 may be configured to connect the gate of the NMOS 116 of driver 108 to FTCCC 102 to turn on the NMOS 116. Switch 110 may be configured to connect the gate of the NMOS 116 of driver 108 to ground to turn off the NMOS 116.

In another embodiment, circuit 100 may be configured to drive the rise time and fall time of voltages or signals on pin 106 through a rise time current control circuit (RTCCC) 104 communicatively coupled to driver 108. RTCCC 104 may be connected to driver 108 through a gate of the PMOS component 114 of driver 108. The signal may be appropriately switched by a switch 112 depending upon the mode of operation of circuit 100. Switch 112 may be configured to connect the gate of the PMOS 114 of driver 108 to either RTCCC 104 or to VDD. Switch 112 may be configured to connect the gate of the PMOS 114 to VDD to turn the PMOS 114 off. Switch 112 may be configured to connect the gate of the PMOS 114 to RTCCC 104 to turn the PMOS 114 on.

The signals controlling switch 112 and switch 110 may be configured to apply respective ones of RTCCC 104/FTCCC 102, VDD, and ground to the gates of the respective MOS elements of driver 108. If switch 110 is connected to ground, switch 112 may be connected to RTCCC 104. If switch 112 is connected to VDD, switch 110 may be connected to FTCCC 102. When driver 108 is to be shut off, switch 110 may be connected to ground and switch 112 may be connected to VDD.

FTCCC 102 and RTCCC 104 may be implemented in any suitable manner. In one embodiment, FTCCC 102 and RTCCC 104 may be implemented as programmable current sources. In another embodiment, FTCCC 102 and RTCCC 104 may include any suitable range or resolution for adjustment of the current produced therefrom.

Circuit 100, or a system in which circuit 100 is implemented, may include any suitable mechanism for issuing control or adjustment commands, signals, or instructions to FTCCC 102 and RTCCC 104. Such signals may include signals for defining an output level, an adjustment to an output level (positive or negative), or a signal specifying any of these in absolute or relative terms. In one embodiment, FTCCC 102 and RTCCC 104 may include trimming bits to adjust the respective current output up or down. In a further embodiment, the trimming bits may specify a code, the value of which may represent a percentage by which the output current should be adjusted. In yet a further embodiment, the mapping of a code to a percentage adjustment might not be linear.

For example, FTCCC 102 and RTCCC 104 may include two trimming bits, though any suitable number of bits may be used. The trimming bits, given as [1:0], may thus allow representation of four adjustment levels. In one embodiment, the four adjustment levels may be linearly spaced between each other.
In another embodiment, the four adjustment levels might not be linearly spaced between each other.

In one embodiment, FTCCC 102 and RTCCC 104 may be independently adjusted from one another. FTCCC 102 and RTCCC 104 might retain their own separate trimming bits. However, the separate trimming bits for FTCCC 102 and RTCCC 104 might be stored together or adjacently. The trimming bits for FTCCC 102 and RTCCC 104 might be stored together with other information, such that the combined four bits are together a word or part of a word stored in, for example, a configuration register. As multiple pins may be needed to implement a USB connection, such as a DM and a DP pin, multiple instances of circuit 100 may exist, and thus multiple instances of the separate trimming bits for instances of FTCCC 102 and RTCCC 104. Each of these trimming bit pairs may be independently controlled. The various pairs of bits may be collected in an amalgamation and controlled, for example, in a single command, instruction, register value, or other instruction or control signal from a system, microcontroller, or other electronic device that is utilizing circuit 100.

In one embodiment, for a given USB connection with a DP pin and a DM pin (and thus two instances of circuit 100 and pin 106), two trimming bits may be provided to individually adjust the falling edge of the DP pin (DP fall); two trimming bits to adjust the rising edge of the DP pin (DP rise); two trimming bits to adjust the falling edge of the DM pin (DM fall); and two trimming bits to adjust the rising edge of the DM pin (DM rise). Thus, the USB connection rising and falling adjustment may be specified by a total of eight programmable bits. These may be specified, for example, in a configuration register, an electronic fuse, or other suitable mechanisms; one possible packing of these bits into a configuration byte is sketched after this discussion. Setting of the specification may be made by a command, instruction, or other signal.

In one embodiment, the values of the output level that are adjusted by the trimming bits may be specified according to a value of the output current of a respective one of FTCCC 102 and RTCCC 104. In another embodiment, the values of the output level that are adjusted by the trimming bits may be specified according to the effect of the output current on a respective one of FTCCC 102 and RTCCC 104. For example, a trimming bit code of "10" might be used to increase the current of FTCCC 102 by a specified percentage. However, in other implementations the trimming bit code of "10" might be used to increase the current of FTCCC 102 sufficiently to change the rise time of the output on pin 106 by a specified percentage.

In one embodiment, FTCCC 102 and RTCCC 104 may be implemented as programmable current sources that are centered on the [00] trim bit code, which can be adjusted in both directions for faster or slower rise and fall times across pin 106.

The standards promulgated by USB specifications include rise (Tr) and fall (Tf) times of the DM and DP pins. Various USB devices may be designed to meet these specifications by being designed to match tolerances of transistors and other elements within the USB device. Such designs are often made using simulations of theoretical models. However, in various USB devices, the final module implementation as physically produced may yield variances in area or power consumption. Furthermore, computer or theoretical models used during design may have been inaccurate. This may cause the USB device to fail to meet USB specifications.
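As noted above, one possible arrangement of the eight programmable bits is a single configuration byte. The sketch below shows such a packing in C; the field order, field names, and the absence of any particular register address are hypothetical choices for illustration, since the disclosure does not fix a layout.

    #include <stdint.h>

    /* Hypothetical layout of the eight trim bits in one configuration byte:
     * [7:6] DM rise, [5:4] DM fall, [3:2] DP rise, [1:0] DP fall. */
    enum {
        DP_FALL_SHIFT = 0,
        DP_RISE_SHIFT = 2,
        DM_FALL_SHIFT = 4,
        DM_RISE_SHIFT = 6
    };

    static uint8_t pack_trim(uint8_t dp_fall, uint8_t dp_rise,
                             uint8_t dm_fall, uint8_t dm_rise)
    {
        return (uint8_t)(((dm_rise & 3u) << DM_RISE_SHIFT) |
                         ((dm_fall & 3u) << DM_FALL_SHIFT) |
                         ((dp_rise & 3u) << DP_RISE_SHIFT) |
                         ((dp_fall & 3u) << DP_FALL_SHIFT));
    }

    static uint8_t unpack_trim(uint8_t reg, unsigned shift)
    {
        return (reg >> shift) & 3u;   /* extract one two-bit trim field */
    }

Packed this way, a microcontroller could update all four edges with a single register write, matching the single-command control of the amalgamated bit pairs described above.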
Even if a circuit implementing an internal part of a USB device meets all requirements through successful implementation of symmetrical layout, analog matching rules, or reduced parasitics, other portions external to the circuit, such as the bonding, package, application board, and the load on the drivers, may cause a mismatch in the rise and fall times of the DM and DP pins. Embodiments of the present disclosure may be used to apply a mismatch parameter for rise and fall times. The parameter may be used to center rise and fall times of the DM and DP pins. The adjustments may be used to compensate for inaccurate transistor models, mismatch of the device load to which the user will connect the USB cable, or other sources of variances in rise and fall times.

In one embodiment, the rise and fall time across a given pin may be individually varied by +15% to -5%. This range may be available for both DP and DM pins. The adjustment may be defined according to the trim bits. These adjustments may also be used in case the slope of the rise or fall time has to be made slower to avoid quiescent current during switching, which would otherwise have an impact on the electromagnetic characteristics of the USB device. The adjustment may be performed after manufacture and during product test. In such a case, the adjustment might be made once and set permanently or semi-permanently, such that the adjustment is fused into the USB device. Such an adjustment may be set by values, for example, in an electronic fuse or register. In another embodiment, the adjustment might be made upon a detection of a rise or fall time that is out of specification. Such an adjustment may be made after detecting the change due to environmental changes or a new load attached to the USB device. In such embodiments, the adjustment may be made dynamically. These adjustments may be made by software in a microcontroller or a system in which the control circuit resides.

As discussed above, embodiments of the present disclosure add control capability over the bias current used to charge and discharge the (effective) gate capacitor of output driver 108. The control over the bias current, implemented by using two trim bits, may be described by four targeted parameters: Trdp, Tfdp, Trdm, and Tfdm. Driver 108 may be turned on or off by charging or discharging the gate oxide of its transistors. Thus, by controlling the charge or discharge rate of the gate oxide, the slew rate for the rise or fall can be modified. The following table illustrates example trimming bit values and corresponding rise and fall time adjustments.

BITCODE <1:0>   dm fall   dm rise   dp fall   dp rise
00                0%        0%        0%        0%
01               +10%      -5%       +10%      -5%
10               -5%       -10%      -5%       -10%
11               +5%       +10%      +5%       +10%

TABLE 1

Thus, a "00" code may represent no change for the output. Each of these adjustments may be made with a tolerance of 20% of the percentages represented therein. USB specifications may require that the rise time and the fall time generally match. In particular, the rise time and fall time must be within 10% of each other. The rise time and the fall time may themselves be between a range of 5 ns to 20 ns. Thus, for example, if the rise time for the DM pin is 10 ns, the fall time for the DM pin must be between 9 ns and 11 ns. If the fall time is beyond this range, the circuit might be outside the USB specification requirements. Embodiments of the present disclosure may, given a detection that the fall time is 9 ns, adjust the fall time to be 10% slower.
Setting of the adjustment bits may be performed dynamically or statically. Dynamically, a system with USB circuits may measure the rise time and fall time for a given USB connection and issue updated adjustment bits to circuit 100. These may be issued by, for example, commands writing values to registers. The adjustment may be in response to, for example, an attachment of a USB element to the connection. The cable for the attachment may cause mismatched rise and fall times. Thus, in one embodiment a measurement of rise and fall time and a subsequent adjustment may be performed upon attachment of a USB element to a connection. The measurement may be made using any suitable combination of analog and digital circuitry. The adjustment to be selected may be calculated through, for example, analog and digital circuitry, a look-up table, or instructions for execution by a processor. Statically, a system with USB circuits may be measured during manufacture, test, or validation of the system. The adjustment values may be written once for the lifetime of the USB circuit. FIGURE 2 is an illustration of a circuit 200 for implementing control of a DM and DP pin for a USB connection, according to embodiments of the present disclosure. The circuit may include two instances of circuit 100 — 100A and 100B. Circuit 200 may include a current bias source 202 (I_ref_bias) for reference. Current bias source 202 may provide a constant reference current that is independent of temperature changes and may be used by other current sources that are adjustable. Circuit 100A may be applied to a DP pin 106A of the USB connection. In circuit 100A, a capacitor may be attached between DP pin 106A and ground. The switches of circuit 100A may be set so that RTCCC 104A may be applied to the gate of the top transistor of driver 108A. Current bias source 202 may be the original source current that is used to generate other current sources inside RTCCC 104/FTCCC 102. These other current sources within RTCCC 104/FTCCC 102 may be current mirrors (102B/C, 104A/C), while current bias source 202 is the source of current to such current mirrors. RTCCC 104A may be adjusted by trim bits such that its current may be increased, decreased, or maintained by a factor of A2. The value of (1-A2) may correspond to a value, for example, in Table 1. Furthermore, the value of the voltage between the top of driver 108A and the gate of the first transistor of driver 108A may be equal to (A2*Cgs), wherein Cgs is the capacitance between the source and gate of the first transistor of driver 108A. Circuit 100B may be applied to a DM pin 106B of the USB connection. In circuit 100B, a capacitor may be attached between DM pin 106B and ground. The switches of circuit 100B may be set so that FTCCC 102B may be applied to the gate of the bottom transistor of driver 108B. FTCCC 102B may be adjusted by trim bits such that its current may be increased, decreased, or maintained by a factor of A1. The value of (1-A1) may correspond to a value, for example, in Table 1. Furthermore, the value of the voltage between ground and the gate of the second transistor of driver 108B may be equal to (A1*Cgs), wherein Cgs is the capacitance between the source and gate of the second transistor of driver 108B.
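A first-order sketch of why trimming the mirror current moves the edge rate: the selected current source charges or discharges the driver's effective gate capacitance at a roughly constant current, so the edge time is approximately Cgs*dV/I, and scaling the current by a factor A scales the edge time by 1/A. The Python sketch below illustrates this relationship; all component values are illustrative assumptions, not values from the disclosure.

def edge_time_ns(cgs_pf: float, swing_v: float, bias_ua: float) -> float:
    """Constant-current gate (dis)charge time, t = C*V/I, in nanoseconds."""
    return 1000.0 * cgs_pf * swing_v / bias_ua

nominal = edge_time_ns(cgs_pf=2.0, swing_v=3.3, bias_ua=660.0)  # 10 ns
# Trimming the current source to I/1.1 (i.e., A = 1/1.1) slows the edge ~10%.
slower = edge_time_ns(cgs_pf=2.0, swing_v=3.3, bias_ua=660.0 / 1.1)
assert abs(nominal - 10.0) < 1e-9 and abs(slower / nominal - 1.1) < 1e-9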
FIGURES 3A and 3B are a more detailed illustration of a schematic implementing circuit 100, according to embodiments of the present disclosure. Fall time control current 102 (B/C), control bits associated with fall time control current 102, rise time control current 104 (A/C), control bits associated with rise time control current 104, driver 108 (A/B), and I_ref_bias 202 are shown. FIGURES 4A and 4B are an illustration of another, more detailed schematic for implementing circuit 100, according to embodiments of the present disclosure. DP pin 106A, DM pin 106B, and switches 110/112 are shown. As shown in FIGURES 4A and 4B, a bias current circuit 402 may implement a current source to implement, fully or in part, I_ref_bias 202. Bias current circuit 402 may provide a constant reference current independent of temperature changes or other interference. A manager circuit 404 (edgemgr) may be configured to adjust current sources as applied to drivers in implementations of circuit 100. Manager circuit 404 may read adjustment bits, or may receive adjustment bits in a command, and may cause the associated changes in rise or fall time through current skewing applied to the gates of transistors in implementations of driver 108. Manager circuit 404 may implement elements 102 and 104. A switching circuit 406 (predrv) may be configured to provide switching between the current sources and driver circuit 420. The switching may apply current sufficient to drive the rise and fall of signals on DM pin 418 and DP pin 416. Switching circuit 406 may implement elements 110 and 112. Match circuit 408 may be configured to charge a parasitic capacitance due to the bond pad and electrostatic discharge elements needed to implement circuit 100. Circuit 414 is a representation of circuit 100 as implemented in a suitable package or other semiconductor device. Inputs and outputs are shown, and circuit 414 may be implemented within a larger USB circuit and may control the rise and fall of signals across the DM and DP pins therein. The present disclosure has been described in terms of one or more embodiments, and it should be appreciated that many equivalents, alternatives, variations, and modifications, aside from those expressly stated, are possible and within the scope of the disclosure. While the present disclosure is susceptible to various modifications and alternative forms, specific example embodiments thereof have been shown in the drawings and are herein described in detail. It should be understood, however, that the description herein of specific example embodiments is not intended to limit the disclosure to the particular forms disclosed herein.
Some disclosed devices include an inertial sensor system, a proximity sensor system, an antenna system configured to transmit and receive radio signals and a control system. The control system may be configured for receiving inertial sensor data from the inertial sensor system and controlling the proximity sensor system and/or the antenna system based, at least in part, on the inertial sensor data. In some examples, the control system may be configured for controlling the proximity sensor system and/or the antenna system based, at least in part, on whether the inertial sensor data indicates that the device is being held, is being carried or is on a person's body (e.g., is in the person's pocket).
CLAIMS
1. An apparatus, comprising: an inertial sensor system including at least one inertial sensor; a proximity sensor system including at least one proximity sensor; an antenna system configured to transmit and receive radio signals; and a control system configured for: receiving inertial sensor data from the inertial sensor system; and controlling the proximity sensor system and the antenna system based, at least in part, on the inertial sensor data. 2. The apparatus of claim 1, wherein the control system is further configured for: determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; and deactivating the proximity sensor system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.3. The apparatus of claim 2, wherein the control system is further configured for lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body. 4. The apparatus of claim 2, wherein determining whether the apparatus is on the person’s body involves determining whether at least some of the apparatus is within a pocket of the person.5. The apparatus of claim 1, wherein the control system is further configured for: determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the apparatus is being held, is being carried or is on the person’s body; determining whether the proximity sensor signals indicate that a target object is proximate the apparatus; and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the apparatus.6. The apparatus of claim 1, wherein the proximity sensor system includes at least one radar sensor.7. The apparatus of claim 1, wherein the antenna system is configured to transmit at least some radio signals at frequencies of 6 gigahertz or more.8. The apparatus of claim 1, wherein the antenna system is configured to transmit beamformed radio signals.9. The apparatus of claim 1, wherein the control system is further configured for: determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold.10. The apparatus of claim 1, wherein the control system is further configured for: determining whether the inertial sensor data indicates micro-motions characteristic of human contact; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.11. 
The apparatus of claim 1, wherein the control system is further configured for: implementing, via the control system, a neural network trained to determine whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried, or is on the person’s body.12. The apparatus of claim 1, wherein the inertial sensor system includes at least one accelerometer or at least one gyroscope.13. The apparatus of claim 1, wherein the apparatus is a mobile device.14. A method of controlling a mobile device, comprising: receiving, by a control system of the mobile device, inertial sensor data from an inertial sensor system of the mobile device; determining, by the control system, whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body; and controlling, by the control system, a proximity sensor system and an antenna system of the mobile device based, at least in part, on whether the inertial sensor data indicates the mobile device is being held, is being carried or is on the person’s body.15. The method of claim 14, further comprising deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.16. The method of claim 15, further comprising lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.17. The method of claim 15, wherein determining whether the mobile device is on the person’s body involves determining whether at least some of the mobile device is within a pocket of the person.18. The method of claim 14, wherein the method further comprises: obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body; determining whether the proximity sensor signals indicate that a target object is proximate the mobile device; and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.19. The method of claim 14, wherein the method further comprises: determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold.20. The method of claim 14, wherein the method further comprises: determining whether the inertial sensor data indicates micro-motions characteristic of human contact; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.21. 
The method of claim 14, wherein the method further comprises: implementing a neural network trained to determine whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried, or is on the person’s body.22. One or more non-transitory media having software stored thereon, the software including instructions for implementing a method of controlling a mobile device, the method comprising: receiving, by a control system of the mobile device, inertial sensor data from an inertial sensor system of the mobile device; determining, by the control system, whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body; and controlling, by the control system, a proximity sensor system and an antenna system of the mobile device based, at least in part, on whether the inertial sensor data indicates the mobile device is being held, is being carried or is on the person’s body.23. The one or more non-transitory media of claim 22, wherein the method involves deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.24. The one or more non-transitory media of claim 22, wherein the method involves lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. 25. The one or more non-transitory media of claim 22, wherein determining whether the mobile device is on the person’s body involves determining whether at least some of the mobile device is within a pocket of the person.26. The one or more non-transitory media of claim 22, wherein the method involves: obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body; determining whether the proximity sensor signals indicate that a target object is proximate the mobile device; and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.27. An apparatus, comprising: an inertial sensor system including at least one inertial sensor; a proximity sensor system including at least one proximity sensor; an antenna system configured to transmit and receive radio signals; and control means for: receiving inertial sensor data from the inertial sensor system; and controlling the proximity sensor system and the antenna system based, at least in part, on the inertial sensor data.28. The apparatus of claim 27, wherein the control means includes means for: determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; and deactivating the proximity sensor system if the control means determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.29. 
The apparatus of claim 28, wherein the control means includes means for lowering a transmission power of the antenna system if the control means determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.
ENHANCED RADIO WAVE EXPOSURE MITIGATION USING A COMBINATION OF PROXIMITY & INERTIAL SENSOR DATA
PRIORITY CLAIM
[0001] This application claims priority to United States Patent Application No. 17/224,715, filed on April 7, 2021 and entitled, “ENHANCED RADIO WAVE EXPOSURE MITIGATION USING A COMBINATION OF PROXIMITY & INERTIAL SENSOR DATA,” which is hereby incorporated by reference.
TECHNICAL FIELD
[0002] This disclosure relates generally to devices and methods for controlling human exposure to radio frequencies used for cellular systems.
DESCRIPTION OF THE RELATED TECHNOLOGY
[0003] Fifth generation (5G) cellular systems use various high-frequency bands of the electromagnetic spectrum, including frequency bands in the millimeter wave (mmW) region, to exploit the availability of large bandwidth and thereby to achieve unprecedented data rates. Radio transmissions at or above 6 GHz need to comply with the Maximum Permissible Exposure (MPE) requirements of the Federal Communications Commission (FCC), which sets a limit of 1 mW/cm². Although existing methods for controlling human exposure to high-frequency bands of the electromagnetic spectrum have merit, it would be desirable to develop improved methods and devices.
SUMMARY
[0004] The systems, methods and devices of the disclosure each have several innovative aspects, no single one of which is solely responsible for the desirable attributes disclosed herein.[0005] One innovative aspect of the subject matter described in this disclosure may be implemented in an apparatus. The apparatus may include an inertial sensor system, a proximity sensor system, an antenna system and a control system that is configured for communication with the inertial sensor system, the proximity sensor system and the antenna system. The inertial sensor system may include one or more inertial sensors. The antenna system may be configured to transmit and receive radio signals. In some implementations, a mobile device may be, or may include, an apparatus as disclosed herein.[0006] The control system may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. According to some examples, the control system may be configured for receiving inertial sensor data from the inertial sensor system and for controlling the proximity sensor system and the antenna system based, at least in part, on the inertial sensor data.[0007] In some examples, the control system may be configured for determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body. In some such examples, the control system may be configured for deactivating the proximity sensor system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body. In some examples, the control system may be configured for lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body. In some instances, determining whether the apparatus is on the person’s body may involve determining whether at least some of the apparatus is within a pocket of the person.
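To make the concern of paragraph [0003] concrete, the following back-of-the-envelope Python sketch compares an idealized free-space power density, S = P*G/(4*pi*d^2), against the 1 mW/cm² MPE limit quoted above. The formula is a textbook far-field approximation and every number below is an illustrative assumption; actual MPE compliance assessment is considerably more involved.

import math

MPE_LIMIT_MW_PER_CM2 = 1.0  # FCC limit quoted in paragraph [0003]

def power_density_mw_per_cm2(tx_power_mw: float, linear_gain: float,
                             distance_cm: float) -> float:
    """Idealized free-space power density at a given distance."""
    return tx_power_mw * linear_gain / (4.0 * math.pi * distance_cm ** 2)

# A 200 mW beamformed transmission with ~12 dBi gain (linear ~16), 2 cm
# from the body, far exceeds the limit, so the power must be backed off.
s = power_density_mw_per_cm2(200.0, 16.0, 2.0)
assert s > MPE_LIMIT_MW_PER_CM2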
[0008] According to some examples, the control system may be configured for determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body and for obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the apparatus is being held, is being carried or is on the person’s body. In some such examples, the control system may be configured for determining whether the proximity sensor signals indicate that a target object is proximate the apparatus and for controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the apparatus.[0009] In some implementations, the inertial sensor system may include at least one accelerometer or at least one gyroscope. According to some implementations, the proximity sensor system may include at least one radar sensor. In some examples, the antenna system may be configured to transmit at least some radio signals at frequencies of 6 gigahertz or more. According to some implementations, the antenna system may be configured to transmit beamformed radio signals.[0010] In some examples, the control system may be configured for determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold and for deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold. According to some implementations, the control system may be configured for determining whether the inertial sensor data indicates micro-motions characteristic of human contact and for deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.[0011] According to some examples, the control system may be configured for implementing, via the control system, a neural network trained to determine whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body. In some such implementations, the control system may be configured for deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried, or is on the person’s body.[0012] Still other innovative aspects of the subject matter described in this disclosure can be implemented in a method of controlling a mobile device. The method may involve receiving, by a control system of the mobile device, inertial sensor data from an inertial sensor system of the mobile device. The method may involve determining, by the control system, whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body. 
The method may involve controlling, by the control system, a proximity sensor system and/or an antenna system of the mobile device based, at least in part, on whether the inertial sensor data indicates the mobile device is being held, is being carried or is on the person’s body.[0013] According to some examples, the method may involve deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. In some examples, the method may involve lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. According to some examples, determining whether the mobile device is on the person’s body may involve determining whether at least some of the mobile device is within a pocket of the person.[0014] In some examples, the method may involve obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body. According to some examples, the method may involve determining whether the proximity sensor signals indicate that a target object is proximate the mobile device. In some examples, the method may involve controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.[0015] According to some examples, the method may involve determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold. In some examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold.[0016] In some instances, the method may involve determining whether the inertial sensor data indicates micro-motions characteristic of human contact. According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.[0017] According to some examples, the method may involve implementing a neural network trained to determine whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried, or is on the person’s body.[0018] Some or all of the operations, functions and/or methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. 
Accordingly, some innovative aspects of the subject matter described in this disclosure can be implemented in a non-transitory medium having software stored thereon.[0019] For example, the software may include instructions for controlling one or more devices to perform a method. The method may involve receiving, by a control system of the mobile device, inertial sensor data from an inertial sensor system of the mobile device. The method may involve determining, by the control system, whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body. The method may involve controlling, by the control system, a proximity sensor system and an antenna system of the mobile device based, at least in part, on whether the inertial sensor data indicates the mobile device is being held, is being carried or is on the person’s body.[0020] According to some examples, the method may involve deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. In some examples, the method may involve lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. According to some examples, determining whether the mobile device is on the person’s body may involve determining whether at least some of the mobile device is within a pocket of the person.[0021] In some examples, the method may involve obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body. According to some examples, the method may involve determining whether the proximity sensor signals indicate that a target object is proximate the mobile device. In some examples, the method may involve controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.[0022] According to some examples, the method may involve determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold. In some examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold.[0023] In some instances, the method may involve determining whether the inertial sensor data indicates micro-motions characteristic of human contact. According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.[0024] According to some examples, the method may involve implementing a neural network trained to determine whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. 
According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried, or is on the person’s body.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale. Like reference numbers and designations in the various drawings generally indicate like elements.[0026] Figure 1 is a block diagram that shows example components of an apparatus according to some disclosed implementations.[0027] Figure 2 shows an example of a mobile device implementation of the apparatus of Figure 1.[0028] Figures 3A and 3B show further examples of the apparatus of Figure 1.[0029] Figure 4 is a flow diagram that shows blocks of a method according to one example.[0030] Figures 5, 6, 7 and 8 are graphs that show examples of inertial sensor data corresponding to various use cases.[0031] Figure 9 is a flow diagram that shows blocks of a method of controlling a mobile device according to one example.[0032] Figure 10 illustrates an example operating environment 1000 for proximity detection based on an electromagnetic field perturbation.
DETAILED DESCRIPTION
[0033] The following description is directed to certain implementations for the purposes of describing the innovative aspects of this disclosure. However, a person having ordinary skill in the art will readily recognize that the teachings herein may be applied in a multitude of different ways. The described implementations may be implemented in any device, apparatus, or system that includes a plurality of transmitter/receiver pairs such as those disclosed herein. 
In addition, it is contemplated that the described implementations may be included in or associated with a variety of electronic devices such as, but not limited to: mobile telephones, multimedia Internet enabled cellular telephones, mobile television receivers, wireless devices, smartphones, smart cards, wearable devices such as bracelets, armbands, wristbands, rings, headbands, patches, etc., Bluetooth® devices, personal data assistants (PDAs), wireless electronic mail receivers, hand-held or portable computers, netbooks, notebooks, smartbooks, tablets, printers, copiers, scanners, facsimile devices, global positioning system (GPS) receivers/navigators, cameras, digital media players (such as MP3 players), camcorders, game consoles, wrist watches, clocks, calculators, television monitors, flat panel displays, electronic reading devices (e.g., e-readers), mobile health devices, computer monitors, auto displays (including odometer and speedometer displays, etc.), cockpit controls and/or displays, camera view displays (such as the display of a rear view camera in a vehicle), electronic photographs, electronic billboards or signs, projectors, architectural structures, microwaves, refrigerators, stereo systems, cassette recorders or players, DVD players, CD players, VCRs, radios, portable memory chips, washers, dryers, washer/dryers, parking meters, packaging (such as in electromechanical systems (EMS) applications including microelectromechanical systems (MEMS) applications, as well as non-EMS applications), aesthetic structures (such as display of images on a piece of jewelry or clothing) and a variety of EMS devices. The teachings herein also may be used in applications such as, but not limited to, electronic switching devices, radio frequency filters, sensors, accelerometers, gyroscopes, motion-sensing devices, magnetometers, inertial components for consumer electronics, parts of consumer electronics products, steering wheels or other automobile parts, varactors, liquid crystal devices, electrophoretic devices, drive schemes, manufacturing processes and electronic test equipment. Thus, the teachings are not intended to be limited to the implementations depicted solely in the Figures, but instead have wide applicability as will be readily apparent to one having ordinary skill in the art.[0034] The Maximum Permissible Exposure (MPE) limit may be exceeded when, for example, a device such as a cellular telephone is transmitting at high transmission power. To overcome this potential hazard issue, some 5G transceivers developed by the present assignee include a proximity sensor to detect the presence of nearby targets. If a nearby target is detected, some such devices are configured to reduce their transmission power level.[0035] Such devices are generally capable of detecting nearby targets that are moving relative to the device that includes the proximity sensor(s). However, devices equipped with some such proximity sensors are not capable of determining when the cellular telephone is being held by a user or when the cellular telephone is on the user’s body (e.g., in the user’s pocket). These limitations could potentially result in human exposure above the MPE limit.[0036] Some disclosed devices include an inertial sensor system, a proximity sensor system, an antenna system configured to transmit and receive radio signals and a control system. 
The control system may be configured for receiving inertial sensor data from the inertial sensor system and determining whether the inertial sensor data indicates that the device is being held, is being carried or is on a person’s body (e.g., is in the person’s pocket). In some examples, the control system may be configured for lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body. According to some examples, the control system may be configured for deactivating the proximity sensor system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.[0037] Particular implementations of the subject matter described in this disclosure may be implemented to realize one or more of the following potential advantages. Some disclosed implementations can enhance user safety by detecting additional instances during which antenna system transmission powers may result in exposure above the MPE limit. Some disclosed implementations may also reduce power consumption by de-activating the proximity sensor system during times when the proximity sensor system may not be able to determine whether a user’s body part is near the device.[0038] Figure 1 is a block diagram that shows example components of an apparatus according to some disclosed implementations. In this example, the apparatus 101 includes an antenna system 102, an inertial sensor system 103, a proximity sensor system 105 and a control system 106. Some implementations of the apparatus 101 may include an interface system 104. In some examples, the apparatus 101 may include a memory 108, in addition to any memory that the control system 106 may include.[0039] Various examples of antenna systems 102 are disclosed herein. In some examples, the antenna system 102 may be implemented via antennas that are configured to transmit and/or receive millimeter wave (mmWave) signals. Some examples of the antenna system 102 may be configurable for use in 5G communication systems, e.g., as set forth in the 3rd Generation Partnership Project (3GPP) fifth generation new radio (5G NR) Releases 15 and 16. In some examples, the antenna system 102 may be configured to transmit at least some radio signals at frequencies of 6 gigahertz (GHz) or more. For example, some such antennas may be configured to transmit beamformed radio signals, e.g., according to instructions from the control system 106. Some disclosed antenna systems 102 may include microstrip antennas (a/k/a “patch” antennas), which can be printed directly onto a circuit board.[0040] Other implementations of the antenna system 102 may include one or more other suitable types of antennas and/or may be configurable for different purposes. For example, in some implementations the antenna system 102 may be configured as a proximity sensor system or an object position estimation system based on mmWave RADAR. Some such implementations may not include a separate proximity sensor system 105. In some such examples, the control system may be configured for obtaining, via a first transmitter/receiver pair of the antenna system 102, a first round-trip time for a first reflection from an object proximate the apparatus and for obtaining, via a second transmitter/receiver pair of the antenna system 102, a second round-trip time for a second reflection from the object. 
The control system may be configured for determining a position of the object based, at least in part, on the first round-trip time and the second round-trip time. In some implementations, the control system may be configured for determining a first ellipse based on the first round-trip time, for determining a second ellipse based on the second round-trip time and for determining an intersection of the first ellipse and the second ellipse. The position of the object may be based, at least in part, on the intersection of the first ellipse and the second ellipse. According to some such implementations, the control system may be configured for obtaining, via a third transmitter/receiver pair, a third round-trip time for a third reflection from the object and for determining the position of the object based, at least in part, on the first round-trip time, the second round-trip time and the third round-trip time. According to some examples, the control system may be configured for determining a first ellipsoid based on the first round-trip time, determining a second ellipsoid based on the second round-trip time, determining a third ellipsoid based on the third round-trip time and determining an intersection of the first ellipsoid, the second ellipsoid and the third ellipsoid. The position of the object may be based, at least in part, on the intersection of the first ellipsoid, the second ellipsoid and the third ellipsoid. Additional examples are described below with reference to Figure 10.
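Each transmitter/receiver pair's round-trip time constrains the reflecting object to an ellipse whose foci are that pair's transmitter and receiver, since (TX-to-object distance) + (object-to-RX distance) = c * (round-trip time); intersecting two such ellipses fixes the position, as described above. The Python sketch below recovers a 2-D position from two such measurements with a simple grid search; the geometry, units and solver are illustrative assumptions only, not the disclosed implementation.

import math

C_CM_PER_NS = 29.9792458  # speed of light in cm/ns

def roundtrip_ns(obj, tx, rx):
    """Round-trip time for a pulse travelling TX -> object -> RX."""
    return (math.dist(tx, obj) + math.dist(obj, rx)) / C_CM_PER_NS

def locate(pairs, x_range=(0.0, 30.0), y_range=(0.1, 30.0), step=0.1):
    """Grid-search the upper half-plane for the point whose TX->p->RX path
    lengths best match every measured round trip (least squares)."""
    best, best_err = None, float("inf")
    x = x_range[0]
    while x <= x_range[1]:
        y = y_range[0]
        while y <= y_range[1]:
            p = (x, y)
            err = sum((math.dist(tx, p) + math.dist(p, rx)
                       - C_CM_PER_NS * t) ** 2 for tx, rx, t in pairs)
            if err < best_err:
                best, best_err = p, err
            y += step
        x += step
    return best

# Two TX/RX pairs on the device edge observe an object at (10, 7) cm; the
# intersection of the two resulting ellipses recovers its position.
obj = (10.0, 7.0)
pairs = [(tx, rx, roundtrip_ns(obj, tx, rx))
         for tx, rx in (((0.0, 0.0), (4.0, 0.0)), ((20.0, 0.0), (16.0, 0.0)))]
estimate = locate(pairs)
assert math.dist(estimate, obj) < 0.2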
[0041] In some implementations, the inertial sensor system 103 may include one or more gyroscopes and one or more accelerometers. However, the inertial sensor system 103 may vary according to the particular implementation. Some or all of the sensors in the inertial sensor system 103 may be discrete components or integrated into one or more sensor packages located within a housing of the apparatus 101, depending on the particular implementation. In some implementations, the inertial sensor system 103 may include three linear accelerometers, each of which is configured to measure linear acceleration, velocity and/or displacement along a particular axis of an apparatus coordinate system. In some other implementations, the functions of multiple (e.g., three) linear accelerometers may be integrated into a single (e.g., three-axis) accelerometer. According to some implementations, the inertial sensor system 103 may include three gyroscopes, each of which is configured to measure angular acceleration, angular velocity and/or rotation about a particular axis of an apparatus coordinate system. In some other implementations, the functions of multiple (e.g., three) gyroscopes may be combined or integrated into a single (e.g., three-axis) gyroscope.[0042] The proximity sensor system 105 may include one or more sensors that are configurable to detect objects near the apparatus 101. In some examples, the proximity sensor system 105 may be configured to detect objects within a predetermined distance of the apparatus 101, e.g., within approximately 1 meter, within approximately 50 centimeters, within approximately 20 centimeters, etc. Other implementations of the proximity sensor system 105 may be configured to detect objects within larger or smaller distances of the apparatus 101, e.g., within 5 meters or within 15 centimeters. In some implementations, the proximity sensor system 105 may include one or more transmitters, one or more receivers, or one or more transceivers. According to some implementations, the proximity sensor system 105 may include one or more radio wave transmitters, one or more radio wave receivers, or one or more radio wave transceivers. In some implementations, the proximity sensor system 105 may include one or more acoustic wave transmitters (e.g., one or more ultrasonic transmitters), one or more acoustic wave receivers, or one or more acoustic wave transceivers. Alternatively, or additionally, the proximity sensor system 105 may include one or more other types of sensors, such as optical sensors.[0043] The control system 106 may include one or more general purpose single- or multi-chip processors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gates or transistor logic, discrete hardware components, or combinations thereof. The control system 106 also may include (and/or be configured for communication with) one or more memory devices, such as one or more random access memory (RAM) devices, read-only memory (ROM) devices, etc. Accordingly, the apparatus 101 may have a memory system that includes one or more memory devices, though the memory system 108 is shown as an optional element in Figure 1. The control system 106 may be capable of receiving and processing data from the antenna system 102, e.g., as described below. In some implementations, functionality of the control system 106 may be partitioned between one or more controllers or processors, such as a dedicated sensor controller and an applications processor of a mobile device.[0044] Some implementations of the apparatus 101 may include an interface system 104. In some examples, the interface system 104 may include a wireless interface system. In some implementations, the interface system 104 may include a user interface system, one or more network interfaces, one or more interfaces between the control system 106 and the optional memory system 108, one or more interfaces between the control system 106 and the antenna system 102, one or more interfaces between the control system 106 and the inertial sensor system 103, one or more interfaces between the control system 106 and the proximity sensor system 105 and/or one or more interfaces between the control system 106 and one or more external device interfaces (e.g., ports or applications processors).[0045] The interface system 104 may be configured to provide communication (which may include wired or wireless communication, such as electrical communication, radio communication, etc.) between components of the apparatus 101 and/or between the apparatus 101 and one or more other devices. In some such examples, the interface system 104 may be configured to provide communication between the control system 106 and the antenna system 102, between the control system 106 and the inertial sensor system 103, and between the control system 106 and the proximity sensor system 105. According to some such examples, a portion of the interface system 104 may couple at least one or more portions of the control system 106 to the antenna system 102, to the inertial sensor system 103 and to the proximity sensor system 105, e.g., via electrically conducting material. According to some examples, the interface system 104 may be configured to provide communication between the apparatus 101 and other devices and/or human beings. In some such examples, the interface system 104 may include one or more user interfaces. 
The interface system 104 may, in some examples, include one or more network interfaces and/or one or more external device interfaces (such as one or more universal serial bus (USB) interfaces).[0046] The apparatus 101 may be used in a variety of different contexts, some examples of which are disclosed herein. For example, in some implementations a mobile device may include at least a portion of the apparatus 101. The control system 106 may be configured for controlling the antenna system 102 for communication with one or more devices over a network, such as a cellular telephone network, a local area network and/or the Internet. Accordingly, the control system 106 may be configured for controlling the apparatus, including but not limited to the antenna system 102, to provide cellular telephone functionality.[0047] In some implementations, a wearable device may include at least a portion of the apparatus 101. The wearable device may, for example, be a bracelet, an armband, a wristband, a ring, a headband or a patch. In some implementations, the control system 106 may reside in more than one device. For example, a portion of the control system 106 may reside in a wearable device and another portion of the control system 106 may reside in another device, such as a mobile device (e.g., a smartphone or a tablet computer). The interface system 104 also may, in some such examples, reside in more than one device.[0048] Figure 2 shows an example of a mobile device implementation of the apparatus of Figure 1. As with other disclosed implementations, the types, numbers and arrangements of elements shown in Figure 2 are merely made by way of example. In this example, the apparatus 101 includes antenna system portions 102a and 102b, as well as other antenna system portions that are not visible in Figure 2. In some such examples, the apparatus 101 may include an antenna system portion on a side of the apparatus 101 that is opposite from the side 204 on which the antenna system portion 102a resides. In some such examples, the apparatus 101 may include an antenna system portion on a side of the apparatus 101 that is opposite from the side 206 on which the antenna system portion 102b resides. According to this example, each of the antenna system portions includes multiple antenna elements 202. The multiple antenna elements 202 may, for example, be configurable for beamforming.[0049] According to this implementation, the apparatus 101 includes a proximity sensor system 105 with at least proximity sensor system elements 205a and 205b. According to some implementations, the proximity sensor system 105 may include two or more radio wave transmitters, two or more radio wave receivers, or two or more radio wave transceivers. In some implementations, the proximity sensor system 105 may include two or more acoustic wave transmitters (e.g., one or more ultrasonic transmitters), two or more acoustic wave receivers, or two or more acoustic wave transceivers. In some implementations, the proximity sensor system 105 may also include a RADAR-based scheme where the time between the transmitted and reflected pulse is computed to estimate the distance between an object and the apparatus 101. Alternatively, or additionally, the proximity sensor system 105 may include two or more other types of sensors, such as optical sensors. In this example, the apparatus 101 includes an inertial sensor system 103 and a control system 106 that are disposed within the housing 210 and are therefore not visible in Figure 2.
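As a simple illustration of the pulse-based RADAR scheme just mentioned, the object distance is half the echo delay multiplied by the speed of light. The Python sketch below is an assumption-laden toy, not the disclosed implementation.

C_M_PER_S = 299_792_458.0  # speed of light

def radar_distance_m(echo_delay_s: float) -> float:
    """One-way distance implied by a transmit -> reflect -> receive delay."""
    return C_M_PER_S * echo_delay_s / 2.0

# An echo arriving about 1.33 ns after transmission implies a target roughly
# 20 cm away, within the example proximity thresholds mentioned earlier.
assert 0.19 < radar_distance_m(1.33e-9) < 0.21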
[0050] Figures 3A and 3B show further examples of the apparatus of Figure 1. In the examples of Figures 3A and 3B, the inertial sensor system 103 resides within the housing 210 and is therefore not visible from the exterior. Therefore, the inertial sensor system 103 is depicted via dashed lines.[0051] In some implementations, the inertial sensor system 103 shown in Figure 3A may include three linear accelerometers, each of which is configured to measure linear acceleration, velocity and/or displacement (which may be referred to herein as “displacement data” or “accelerometer data”) along a particular x, y or z axis of a Cartesian coordinate system. In some other implementations, the functions of the three linear accelerometers may be combined or integrated into a single three-axis accelerometer.[0052] As shown in Figure 3B, in some implementations the inertial sensor system 103 may include three gyroscopes, each of which is configured to measure angular acceleration, angular velocity and/or rotation (which may be referred to herein as “rotation data” or “gyroscope data”) about a particular axis of an apparatus coordinate system. In some examples, a first gyroscope may be configured to measure rotation data about the x axis, a second gyroscope may be configured to measure rotation data about the y axis and a third gyroscope may be configured to measure rotation data about the z axis. Such rotation data also can be expressed in terms of pitch, roll and yaw. In some other implementations, the functions of three gyroscopes may be combined or integrated into a single three-axis gyroscope.[0053] Figure 4 is a flow diagram that shows blocks of a method according to one example. The method 400 may, for example, be implemented at least in part by an apparatus such as the apparatus 101 that is shown in Figure 1 and described above (or one of the other examples disclosed herein), having an inertial sensor system, a proximity sensor system, an antenna system configured to transmit and receive radio signals, and a control system. As with other disclosed methods, the blocks of method 400 are not necessarily performed in the order shown in Figure 4. Moreover, alternative methods may include more or fewer blocks.[0054] According to this example, block 405 involves receiving inertial sensor data from the inertial sensor system. Block 405 may, for example, involve the control system 106 of Figure 1 receiving gyroscope data and/or accelerometer data from the inertial sensor system 103 of Figure 1.[0055] In this example, block 410 involves controlling the proximity sensor system and/or the antenna system based, at least in part, on the inertial sensor data. In some examples, block 410 may involve determining (e.g., by the control system) whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body. Some such examples may involve controlling the proximity sensor system and/or the antenna system based, at least in part, on whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.[0056] As noted above, the MPE limit may be exceeded when an apparatus, such as a cellular telephone configured for 5G communication, is transmitting at a high transmission power level. 
In some implementations of the apparatus 101, the control system 106 may be configured to detect the presence of nearby targets (e.g., targets that are within a threshold distance, such as 15 cm, 20 cm, etc.) according to proximity sensor data from the proximity sensor system 105. If a nearby target is detected, the control system 106 may be configured to reduce the transmission power level of the antenna system 102.[0057] Accordingly, some implementations of method 400 may involve determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body and obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the apparatus is being held, is being carried or is on the person’s body. Some such implementations may involve determining whether the proximity sensor signals indicate that a target object is proximate the apparatus (e.g., within a threshold distance, such as 15 cm, 20 cm, etc.) and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the apparatus.[0058] However, some such proximity sensors are not capable of determining, e.g., when the apparatus 101 is being held by a user, is in a user’s pocket, etc. In some disclosed implementations, the control system 106 may be configured to reduce power consumption by de-activating the proximity sensor system during times when the proximity sensor system may not be effective or necessary. According to some such examples, block 410 may involve deactivating the proximity sensor system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body. Alternatively, or additionally, in some examples block 410 may involve lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates the apparatus is being held, is being carried or is on a person’s body.
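Blocks 405 and 410, including the proximity-sensor fallback of paragraphs [0057] and [0058], may be summarized by the following Python control-flow sketch. The names, threshold value and return convention are illustrative assumptions made for this sketch.

from enum import Enum, auto

class TxPower(Enum):
    NORMAL = auto()
    REDUCED = auto()

PROXIMITY_THRESHOLD_CM = 20.0  # e.g., within the 15-20 cm examples above

def block_410(on_body: bool, read_proximity_cm):
    """One pass of the control logic; returns (tx power, proximity active)."""
    if on_body:
        # The inertial data already implies a person is near: deactivate the
        # proximity sensor to save power and back off the transmitter.
        return TxPower.REDUCED, False
    distance_cm = read_proximity_cm()  # proximity sensor stays active
    if distance_cm is not None and distance_cm <= PROXIMITY_THRESHOLD_CM:
        return TxPower.REDUCED, True
    return TxPower.NORMAL, True

assert block_410(True, lambda: None) == (TxPower.REDUCED, False)
assert block_410(False, lambda: 10.0) == (TxPower.REDUCED, True)
assert block_410(False, lambda: 80.0) == (TxPower.NORMAL, True)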
[0059] Figures 5, 6, 7 and 8 are graphs that show examples of inertial sensor data corresponding to various use cases. In Figures 5, 6, 7 and 8, the inertial sensor data are accelerometer data corresponding to linear accelerations along the x, y and z axes, as described above with reference to Figure 3A.
[0060] Figure 5 shows examples of inertial sensor data corresponding to a user picking up a cellular telephone and holding the cellular telephone to the user’s head. In the example shown in Figure 5, element 505 is a plot of linear acceleration along the x axis over a time interval, element 510 is a plot of linear acceleration along the y axis over the same time interval and element 515 is a plot of linear acceleration along the z axis over the same time interval. The largest acceleration values along all axes occur during the first three or four seconds, during which the cellular telephone is being picked up and positioned next to the user’s head.
[0061] However, even when the cellular telephone is being held next to the user’s head, there are still persistent, low-amplitude accelerations along all three axes. Both the higher-amplitude accelerations and the low-amplitude accelerations are examples of inertial sensor data indicating that the apparatus is being held. The persistent, low-amplitude accelerations are examples of what may be referred to herein as tremors or “micro-motions characteristic of human contact.” Accordingly, some disclosed methods may involve determining (e.g., by the control system 106) whether inertial sensor data indicates micro-motions characteristic of human contact. Some such methods may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates micro-motions characteristic of human contact.
[0062] Figure 6 shows examples of inertial sensor data corresponding to a user walking while holding a cellular telephone. In the example shown in Figure 6, element 605 is a plot of linear acceleration along the x axis over a time interval, element 610 is a plot of linear acceleration along the y axis over the same time interval and element 615 is a plot of linear acceleration along the z axis over the same time interval. The acceleration values corresponding to walking occur after the first three or four seconds and end at approximately 23 seconds. One may observe that the acceleration values corresponding to a user walking while holding a cellular telephone are significantly greater than the acceleration values corresponding to the user holding the cellular telephone to the user’s head. One may also observe that there are relatively larger accelerations along the x axis that correspond in time with relatively larger accelerations along the z axis, e.g., at approximately 10 seconds and at approximately 15 seconds.
[0063] Figure 7 shows examples of inertial sensor data corresponding to a seated user having a cellular telephone in the user’s shirt pocket. In the example shown in Figure 7, element 705 is a plot of linear acceleration along the x axis over a time interval, element 710 is a plot of linear acceleration along the y axis over the same time interval and element 715 is a plot of linear acceleration along the z axis over the same time interval. Figure 7 shows additional examples of persistent, low-amplitude accelerations that may be referred to herein as “micro-motions characteristic of human contact.” According to some examples, determining whether an apparatus is “on the person’s body” may involve determining whether at least some of the apparatus is within the person’s pocket.
[0064] Figure 8 shows examples of inertial sensor data corresponding to a cellular telephone at rest on a table. In the example shown in Figure 8, element 805 is a plot of linear acceleration along the x axis over a time interval, element 810 is a plot of linear acceleration along the y axis over the same time interval and element 815 is a plot of linear acceleration along the z axis over the same time interval. One may observe that the linear accelerations along the x and y axes are even lower-amplitude than the linear accelerations along the z axis. Figure 8 shows examples of accelerations that are even lower-amplitude than those referred to herein as “micro-motions characteristic of human contact.”
[0065] With the acceleration data of Figures 5-8 in mind, one may see that accelerations equal to or exceeding an acceleration threshold (such as a maximum acceleration of the “at rest” cellular telephone of Figure 8) may, in some instances, indicate the apparatus is being held, is being carried or is on a person’s body.
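Such a threshold test can be stated compactly. The sketch below is illustrative only: the ceiling value is an assumption chosen for the example (calibrated, per the preceding paragraph, from the maximum acceleration of an at-rest device), not a value taken from this disclosure.
```python
import numpy as np

GRAVITY = 9.81           # m/s^2
AT_REST_CEILING = 0.02   # assumed maximum |a| deviation for an at-rest device

def exceeds_acceleration_threshold(ax, ay, az) -> bool:
    """True if the acceleration magnitude deviates from gravity by more
    than the at-rest ceiling anywhere in the window.

    ax, ay, az: equal-length arrays of per-axis linear acceleration
    samples, like the plots of Figures 5-8.
    """
    a = np.sqrt(np.square(ax) + np.square(ay) + np.square(az))
    return bool(np.max(np.abs(a - GRAVITY)) >= AT_REST_CEILING)
```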
Accordingly, some disclosed methods may involve determining (e.g., by the control system) whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold. According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates accelerations equal to or exceeding the acceleration threshold. However, the method may involve activating the proximity sensor system, allowing the proximity sensor system to remain active and/or allowing a transmission power of the antenna system to be optimized for cellular communication if the control system determines that the inertial sensor data does not indicate accelerations equal to or exceeding the acceleration threshold.
[0066] Some disclosed methods may involve characterizing inertial sensor data, e.g., by applying some form of artificial intelligence to make correlations between types of inertial sensor data and different device use cases. Some disclosed methods may, for example, involve determining whether inertial sensor data indicates micro-motions characteristic of human contact.
[0067] For example, some disclosed methods may involve training a neural network to determine whether or not inertial sensor data indicates that an apparatus is being held, is being carried or is on a person’s body. In some such examples, the neural network may be trained by inputting sets of inertial sensor data, such as that shown in Figures 5-8, and indicating the use case corresponding to each of the sets of input inertial sensor data. Some disclosed methods may involve implementing (e.g., via the control system 106) a neural network trained to determine whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body. Some such methods may involve deactivating a proximity sensor system and/or lowering a transmission power of an antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried, or is on the person’s body.
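As a toy stand-in for the training procedure just described, the following sketch fits a single-neuron classifier (logistic regression) to labeled windows of accelerometer magnitudes. The data here are synthetic placeholders for captures like those of Figures 5-8, and the feature choice, window length and architecture are assumptions for illustration, not the network contemplated by this disclosure.
```python
import numpy as np

rng = np.random.default_rng(0)

def features(window: np.ndarray) -> np.ndarray:
    """Two summary features of a window of |acceleration| samples (m/s^2)."""
    return np.array([window.mean(), window.std()])

# Synthetic stand-ins for labeled captures: label 1 = held/carried/on-body
# (visible tremor), label 0 = at rest on a table.
held = [features(9.8 + 0.3 * rng.standard_normal(128)) for _ in range(200)]
rest = [features(9.8 + 0.02 * rng.standard_normal(128)) for _ in range(200)]
X = np.vstack(held + rest)
y = np.array([1.0] * 200 + [0.0] * 200)

# Normalize features, then fit the single neuron by plain gradient
# descent on the cross-entropy loss.
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xn @ w + b)))   # predicted probabilities
    w -= 0.5 * Xn.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

def on_body(window: np.ndarray) -> bool:
    """Classify a new window of |acceleration| samples."""
    z = ((features(window) - mu) / sigma) @ w + b
    return bool(z > 0.0)
```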
[0068] Figure 9 is a flow diagram that shows blocks of a method of controlling a mobile device according to one example. The method 900 may, for example, be implemented at least in part by an apparatus such as the apparatus 101 that is shown in Figure 1 and described above (or one of the other examples disclosed herein), having an inertial sensor system, a proximity sensor system, an antenna system configured to transmit and receive radio signals and a control system. As with other disclosed methods, the blocks of method 900 are not necessarily performed in the order shown in Figure 9. Moreover, alternative methods may include more or fewer blocks.
[0069] According to this example, block 905 involves receiving, by a control system of a mobile device, inertial sensor data from an inertial sensor system of the mobile device. Block 905 may, for example, involve the control system 106 of Figure 1 receiving gyroscope data and/or accelerometer data from the inertial sensor system 103 of Figure 1.
[0070] In this example, block 910 involves determining (e.g., by the control system) whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body. According to this example, block 915 involves controlling the proximity sensor system and/or the antenna system based, at least in part, on whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.
[0071] In some examples, method 900 may involve deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. Alternatively, or additionally, method 900 may involve lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body. In some instances, determining whether the mobile device is on the person’s body involves determining whether at least some of the mobile device is within the person’s pocket.
[0072] According to some examples, method 900 may involve obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body. Some such methods may involve determining whether the proximity sensor signals indicate that a target object is proximate the mobile device and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.
[0073] In some examples, method 900 may involve determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold. Some such methods may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates accelerations equal to or exceeding the acceleration threshold.
[0074] According to some examples, method 900 may involve determining whether the inertial sensor data indicates micro-motions characteristic of human contact. Some such methods may involve deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates micro-motions characteristic of human contact.
[0075] According to some examples, method 900 may involve implementing a neural network trained to determine whether the inertial sensor data indicates micro-motions characteristic of human contact. According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates micro-motions characteristic of human contact.
[0076] In some examples, method 900 may involve implementing a neural network trained to determine whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body. According to some such examples, the method may involve deactivating the proximity sensor system and/or lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried, or is on the person’s body.
[0077] Figure 10 illustrates an example operating environment 1000 for proximity detection based on an electromagnetic field perturbation.
In the example environment 1000, a hand 1014 of a user holds an implementation of the apparatus 101 that is described above with reference to Figure 1. In one aspect, the apparatus 101 communicates with the base station 1001 by transmitting an uplink signal 1002 (UL signal 1002) or receiving a downlink signal 1004 (DL signal 1004) via the antennas 1024. A user’s thumb, however, may represent a proximate object 1006 that may be exposed to radiation via the uplink signal 1002.
[0078] To detect whether the object 1006 exists or is within a detectable range, the apparatus 101 generates an electromagnetic (EM) field 1008 via at least one of the antennas 1024. The electromagnetic field 1008 can be generated by transmitting a predetermined proximity detection signal or the uplink signal 1002. In some cases, the proximity detection signal may be generated such that it includes a single frequency or tone or multiple frequencies or tones. For example, the proximity detection signal can include an orthogonal frequency-division multiplexing (OFDM) signal having multiple sub-carriers of different frequencies. As another example, the proximity detection signal can include a frequency-modulated continuous wave (FMCW) signal (e.g., a linear frequency-modulated (LFM) continuous wave signal or chirp signal, a triangular frequency-modulated continuous wave signal, a sawtooth frequency-modulated continuous wave signal, and so forth). As yet another example, the proximity detection signal can include a continuous-wave signal having a relatively constant frequency.
[0079] In Figure 10, a resulting amplitude of the electromagnetic field 1008 is represented with different shades of grey, where darker shades represent higher amplitudes and lighter shades represent lower amplitudes. If the object 1006 is proximate to another one of the antennas 1024, interactions of the object 1006 with the electromagnetic field 1008 produce one or more perturbations (e.g., disturbances or changes) in the electromagnetic field 1008, such as perturbation 1010. The perturbation 1010 represents a variation in a magnitude or phase of the electromagnetic field 1008 due to the object 1006 causing different constructive or destructive patterns to occur within the electromagnetic field 1008.
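To make the waveform options of paragraph [0078] and the perturbation comparison of paragraph [0079] concrete, the following sketch generates two candidate proximity detection signals at complex baseband and defines a simple perturbation metric against a free-space baseline. It is illustrative only; the sample rate, burst duration and swept bandwidth are assumed values, not parameters from this disclosure.
```python
import numpy as np

FS = 200e6   # simulation sample rate (Hz); assumed, must cover the bandwidth
T = 10e-6    # burst duration (s); assumed, microsecond-scale
B = 100e6    # LFM swept bandwidth (Hz); assumed for illustration

t = np.arange(0.0, T, 1.0 / FS)

# Continuous-wave option: a single tone at a constant baseband frequency.
f0 = 1e6
cw_tone = np.exp(2j * np.pi * f0 * t)

# LFM chirp option: instantaneous frequency sweeps linearly from -B/2 to
# +B/2 over the burst (the phase is the time integral of the frequency).
lfm_chirp = np.exp(1j * np.pi * (B / T) * t ** 2 - 1j * np.pi * B * t)

def perturbation_metric(rx: np.ndarray, baseline: np.ndarray) -> float:
    """Peak deviation of the sensed coupling from a stored free-space
    baseline; a large value suggests a nearby object disturbs the field."""
    return float(np.max(np.abs(rx - baseline)))
```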
[0080] In some implementations, the antennas 1024 may comprise at least two different antennas, at least two antenna elements 1012 of an antenna array 1016, at least two antenna elements 1012 associated with different antenna arrays 1016, or any combination thereof. As shown in Figure 10, the antennas 1024 correspond to at least two of the antenna elements 1012 within the antenna array 1016. The antenna array 1016 can include multiple antenna elements 1012-1 to 1012-N, where N represents a positive integer greater than one. In the depicted example, a first antenna element 1012-1 emits the electromagnetic field 1008 and the perturbation 1010 is sensed via a second antenna element 1012-2. The second antenna element 1012-2 may be co-located with respect to the first antenna element 1012-1 as part of the antenna array 1016 or otherwise proximate to the first antenna element 1012-1. In some cases, the second antenna element 1012-2 is adjacent to the first antenna element 1012-1 within a same antenna array 1016 (e.g., there are no antenna elements 1012 physically located between the first antenna element 1012-1 and the second antenna element 1012-2). A distance between the antenna elements 1012 in the antenna array 1016 can be based on frequencies that the wireless transceiver 1020 emits. For example, the antenna elements 1012 in the antenna array 1016 can be spaced by approximately half a wavelength from each other (e.g., by approximately a centimeter (cm) apart for frequencies around 30 GHz).
[0081] A response of the second antenna element 1012-2 to the electromagnetic field 1008 is affected by the object 1006 reflecting or absorbing the electromagnetic field 1008 and also by any mutual coupling or interference produced by the first antenna element 1012-1. In general, energy from the electromagnetic field 1008 induces a current in the second antenna element 1012-2, which is used to measure the perturbation 1010 or the resulting electromagnetic field 1008 that is disturbed by the object 1006. By sensing the perturbation 1010, a determination can be made as to whether the object 1006 is present or outside a detectable range (e.g., not present). The detectable range may be within approximately 40 cm from the antennas 1024, between 0 and 10 cm from the antennas 1024, and so forth. In general, the detectable range can vary based on transmission power or sensitivity of the wireless transceiver 1020. A duration for which the electromagnetic field 1008 is generated can also be based on the detectable range. Example durations can range from approximately one microsecond to several tens of microseconds.
[0082] In some cases, the detectable range can include ranges that are not readily measured using radar-based techniques. For example, the radar-based techniques can be limited to ranges that are farther than a minimum range, which is inversely proportional to the bandwidth of the FMCW signal (approximately c/(2B), where c is the speed of light and B is the bandwidth). Example minimum ranges include 4 cm or 2 cm for a FMCW signal having a bandwidth of 4 GHz or 8 GHz, respectively. Therefore, to detect closer distances using radar-based techniques, the wireless transceiver 1020 generates larger bandwidth signals at the expense of increased design complexity or increased cost of the wireless transceiver 1020. Using the described techniques, however, the range to the object 1006 can be measured at distances closer than these minimum ranges. In this way, the described techniques can be used to augment close-range detection even if radar-based techniques are used for far-range detection.
[0083] In some implementations, the wireless transceiver 1020 can generate the electromagnetic field 1008 via the first antenna element 1012-1 during a same time that the second antenna element 1012-2 is used to sense the electromagnetic field 1008. The antennas 1024 and/or elements thereof may be implemented using any type of antenna, including patch antennas, dipole antennas, bowtie antennas, or a combination thereof.
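The minimum-range figures quoted in paragraph [0082] follow directly from the c/(2B) relationship. A quick check, using only the bandwidths stated above:
```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_minimum_range_m(bandwidth_hz: float) -> float:
    """Approximate minimum (closest resolvable) range of an FMCW radar,
    c / (2 * B)."""
    return C / (2.0 * bandwidth_hz)

for b_hz in (4e9, 8e9):
    print(f"{b_hz / 1e9:.0f} GHz -> {100 * fmcw_minimum_range_m(b_hz):.1f} cm")
# Prints roughly 3.7 cm and 1.9 cm, consistent with the approximate
# 4 cm and 2 cm figures given above.
```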
[0084] Implementation examples are described in the following numbered clauses:
[0085] 1. An apparatus, comprising: an inertial sensor system including at least one inertial sensor; a proximity sensor system including at least one proximity sensor; an antenna system configured to transmit and receive radio signals; and a control system configured for: receiving inertial sensor data from the inertial sensor system; and controlling the proximity sensor system and the antenna system based, at least in part, on the inertial sensor data.
[0086] 2. The apparatus of clause 1, wherein the control system is further configured for: determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; and deactivating the proximity sensor system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.
[0087] 3. The apparatus of clause 2, wherein the control system is further configured for lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.
[0088] 4. The apparatus of clause 2, wherein determining whether the apparatus is on the person’s body involves determining whether at least some of the apparatus is within a pocket of the person.
[0089] 5. The apparatus of any of clauses 1-4, wherein the control system is further configured for: determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the apparatus is being held, is being carried or is on the person’s body; determining whether the proximity sensor signals indicate that a target object is proximate the apparatus; and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the apparatus.
[0090] 6. The apparatus of any of clauses 1-5, wherein the proximity sensor system includes at least one radar sensor.
[0091] 7. The apparatus of any of clauses 1-6, wherein the antenna system is configured to transmit at least some radio signals at frequencies of 6 gigahertz or more.
[0092] 8. The apparatus of any of clauses 1-7, wherein the antenna system is configured to transmit beamformed radio signals.
[0093] 9. The apparatus of any of clauses 1-8, wherein the control system is further configured for: determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold.
[0094] 10. The apparatus of any of clauses 1-9, wherein the control system is further configured for: determining whether the inertial sensor data indicates micro-motions characteristic of human contact; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.
[0095] 11. The apparatus of any of clauses 1-10, wherein the control system is further configured for: implementing, via the control system, a neural network trained to determine whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the apparatus is being held, is being carried, or is on the person’s body.
[0096] 12. The apparatus of any of clauses 1-11, wherein the inertial sensor system includes at least one accelerometer or at least one gyroscope.
[0097] 13. The apparatus of any of clauses 1-12, wherein the apparatus is a mobile device.
[0098] 14. A method of controlling a mobile device, comprising: receiving, by a control system of the mobile device, inertial sensor data from an inertial sensor system of the mobile device; determining, by the control system, whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body; and controlling, by the control system, a proximity sensor system and an antenna system of the mobile device based, at least in part, on whether the inertial sensor data indicates the mobile device is being held, is being carried or is on the person’s body.
[0099] 15. The method of clause 14, further comprising deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.
[0100] 16. The method of clause 15, further comprising lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.
[0101] 17. The method of clause 15 or clause 16, wherein determining whether the mobile device is on the person’s body involves determining whether at least some of the mobile device is within a pocket of the person.
[0102] 18. The method of any of clauses 14-17, wherein the method further comprises: obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body; determining whether the proximity sensor signals indicate that a target object is proximate the mobile device; and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.
[0103] 19. The method of any of clauses 14-18, wherein the method further comprises: determining whether the inertial sensor data indicates accelerations equal to or exceeding an acceleration threshold; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more accelerations equal to or exceeding the acceleration threshold.
[0104] 20. The method of any of clauses 14-19, wherein the method further comprises: determining whether the inertial sensor data indicates micro-motions characteristic of human contact; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates one or more micro-motions characteristic of human contact.
[0105] 21. The method of any of clauses 14-20, wherein the method further comprises: implementing a neural network trained to determine whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body; and deactivating the proximity sensor system and lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried, or is on the person’s body.
[0106] 22. One or more non-transitory media having software stored thereon, the software including instructions for implementing a method of controlling a mobile device, the method comprising: receiving, by a control system of the mobile device, inertial sensor data from an inertial sensor system of the mobile device; determining, by the control system, whether the inertial sensor data indicates that the mobile device is being held, is being carried or is on a person’s body; and controlling, by the control system, a proximity sensor system and an antenna system of the mobile device based, at least in part, on whether the inertial sensor data indicates the mobile device is being held, is being carried or is on the person’s body.
[0107] 23. The one or more non-transitory media of clause 22, wherein the method involves deactivating, by the control system, the proximity sensor system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.
[0108] 24. The one or more non-transitory media of clause 22 or clause 23, wherein the method involves lowering a transmission power of the antenna system if the control system determines that the inertial sensor data indicates that the mobile device is being held, is being carried or is on the person’s body.
[0109] 25. The one or more non-transitory media of any of clauses 22-24, wherein determining whether the mobile device is on the person’s body involves determining whether at least some of the mobile device is within a pocket of the person.
[0110] 26. The one or more non-transitory media of any of clauses 22-25, wherein the method involves: obtaining proximity sensor signals from the proximity sensor system if the control system determines that the inertial sensor data does not indicate that the mobile device is being held, is being carried or is on the person’s body; determining whether the proximity sensor signals indicate that a target object is proximate the mobile device; and controlling a transmission power of the antenna system according to whether the control system determines that the target object is proximate the mobile device.
[0111] 27. An apparatus, comprising: an inertial sensor system including at least one inertial sensor; a proximity sensor system including at least one proximity sensor; an antenna system configured to transmit and receive radio signals; and control means for: receiving inertial sensor data from the inertial sensor system; and controlling the proximity sensor system and the antenna system based, at least in part, on the inertial sensor data.
[0112] 28. The apparatus of clause 27, wherein the control means includes means for: determining whether the inertial sensor data indicates that the apparatus is being held, is being carried or is on a person’s body; and deactivating the proximity sensor system if the control means determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.
[0113] 29. The apparatus of clause 28, wherein the control means includes means for lowering a transmission power of the antenna system if the control means determines that the inertial sensor data indicates that the apparatus is being held, is being carried or is on the person’s body.
[0114] As used herein, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
[0115] The various illustrative logics, logical blocks, modules, circuits and algorithm processes described in connection with the implementations disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. The interchangeability of hardware and software has been described generally, in terms of functionality, and illustrated in the various illustrative components, blocks, modules, circuits and processes described above. Whether such functionality is implemented in hardware or software depends upon the particular application and design constraints imposed on the overall system.
[0116] The hardware and data processing apparatus used to implement the various illustrative logics, logical blocks, modules and circuits described in connection with the aspects disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor or any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some implementations, particular processes and methods may be performed by circuitry that is specific to a given function.
[0117] In one or more aspects, the functions described may be implemented in hardware, digital electronic circuitry, computer software, firmware, including the structures disclosed in this specification and their structural equivalents, or in any combination thereof. Implementations of the subject matter described in this specification also may be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a computer storage media for execution by, or to control the operation of, data processing apparatus.
[0118] If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium, such as a non-transitory medium.
The processes of a method or algorithm disclosed herein may be implemented in a processor-executable software module which may reside on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that may be enabled to transfer a computer program from one place to another. Storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, non-transitory media may include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer. Also, any connection may be properly termed a computer-readable medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and instructions on a machine-readable medium and computer-readable medium, which may be incorporated into a computer program product.
[0119] Various modifications to the implementations described in this disclosure may be readily apparent to those having ordinary skill in the art, and the generic principles defined herein may be applied to other implementations without departing from the spirit or scope of this disclosure. Thus, the disclosure is not intended to be limited to the implementations shown herein, but is to be accorded the widest scope consistent with the claims, the principles and the novel features disclosed herein. The word “exemplary” is used exclusively herein, if at all, to mean “serving as an example, instance, or illustration.” Any implementation described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other implementations.
[0120] Certain features that are described in this specification in the context of separate implementations also may be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation also may be implemented in multiple implementations separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination may in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
[0121] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems may generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims may be performed in a different order and still achieve desirable results.
[0122] It will be understood that, unless features in any of the particular described implementations are expressly identified as incompatible with one another, or the surrounding context implies that they are mutually exclusive and not readily combinable in a complementary and/or supportive sense, the totality of this disclosure contemplates and envisions that specific features of those complementary implementations may be selectively combined to provide one or more comprehensive, but slightly different, technical solutions. It will therefore be further appreciated that the above description has been given by way of example only and that modifications in detail may be made within the scope of this disclosure.
Electrostatic discharge (ESD) protection devices can protect electronic circuits. In the context of radio frequency (RF) circuits and the like, the insertion loss of conventional ESD protection devices can be undesirable. The parasitic capacitances at the nodes of the devices of an ESD protection device are not necessarily symmetrical with respect to the substrate. Disclosed are techniques that decrease the parasitic capacitances at signal nodes and thereby improve the insertion loss characteristics of ESD protection devices.
1. An apparatus comprising an integrated circuit, the apparatus comprising:
a first node (102) configured to carry a signal;
a second node (104) configured to carry a voltage reference;
a first voltage clamp (210) of the integrated circuit, the first voltage clamp having an anode and a cathode, wherein a first parasitic capacitance associated with the anode is less than a second parasitic capacitance associated with the cathode, wherein the anode is operatively coupled to the first node (102), and the cathode is operatively coupled to the second node, the first voltage clamp comprising at least a first rectifier; and
a second voltage clamp (220) of the integrated circuit, the second voltage clamp having an anode and a cathode, wherein a third parasitic capacitance associated with the cathode is less than a fourth parasitic capacitance associated with the anode, wherein the anode is operatively coupled to the second node, and the cathode is operatively coupled to the first node (102), the second voltage clamp comprising at least a second rectifier.
2. The apparatus of Claim 1, further comprising a termination resistance (530) operatively coupled between the first node and the second node.
3. The apparatus of Claim 1 or 2, wherein the first voltage clamp consists only of the first rectifier, wherein the second voltage clamp consists only of the second rectifier, and wherein the first rectifier and the second rectifier each consists of a diode.
4. The apparatus of Claim 1 or 2, wherein the first voltage clamp consists only of the first rectifier, and wherein the first rectifier consists of a diode.
5. The apparatus of Claim 1 or 2, wherein the first voltage clamp (408) comprises a first plurality of diodes (408(1) ... 408(n)) arranged in series, wherein each diode of the first plurality has an anode and a cathode, wherein the anode of each diode of the first plurality has less parasitic capacitance than its cathode, wherein the second voltage clamp (410) comprises a second plurality of diodes (410(1) ...
410(n)) arranged in series, wherein each diode of the second plurality has an anode and a cathode, wherein the cathode of each diode of the second plurality has less parasitic capacitance than its anode.
6. The apparatus of Claim 1 or 2, wherein the first voltage clamp comprises a first plurality of diodes arranged in series, wherein each diode of the first plurality has an anode and a cathode, wherein the anode of each diode of the first plurality has less parasitic capacitance than its cathode.
7. The apparatus of Claim 1 or 2, wherein the second voltage clamp comprises a second plurality of diodes arranged in series, wherein each diode of the second plurality has an anode and a cathode, wherein the cathode of each diode of the second plurality has less parasitic capacitance than its anode.
8. The apparatus of Claim 1 or 2, wherein the first and second voltage clamps each further comprises a thyristor.
9. A method of protecting a radio frequency (RF) circuit from electrostatic discharge (ESD), the method comprising:
carrying a signal with a first node;
carrying a voltage reference with a second node;
clamping a voltage of the first node with a first voltage clamp of the integrated circuit, the first voltage clamp having an anode and a cathode, wherein a first parasitic capacitance associated with the anode is less than a second parasitic capacitance associated with the cathode, wherein the anode is operatively coupled to the first node, and the cathode is operatively coupled to the second node, the first voltage clamp comprising at least a first rectifier; and
clamping the voltage of the first node with a second voltage clamp of the integrated circuit, the second voltage clamp having an anode and a cathode, wherein a third parasitic capacitance associated with the cathode is less than a fourth parasitic capacitance associated with the anode, wherein the anode is operatively coupled to the second node, and the cathode is operatively coupled to the first node, the second voltage clamp comprising at least a second rectifier.
10. The method of Claim 9, further comprising terminating a transmission line operatively coupled to the first node and the second node with a termination resistance.
11. The method of Claim 9 or 10, wherein the first voltage clamp consists only of the first rectifier, wherein the second voltage clamp consists only of the second rectifier, and wherein the first rectifier and the second rectifier each consists of a diode.
12. The method of Claim 9 or 10, wherein the first voltage clamp comprises a first plurality of diodes arranged in series, wherein each diode of the first plurality has an anode and a cathode, wherein the anode of each diode of the first plurality has less parasitic capacitance than its cathode, wherein the second voltage clamp comprises a second plurality of diodes arranged in series, wherein each diode of the second plurality has an anode and a cathode, wherein the cathode of each diode of the second plurality has less parasitic capacitance than its anode.
13. The method of Claim 9 or 10, wherein the first and second voltage clamps each further comprises a thyristor.
14. An apparatus comprising an integrated circuit, the apparatus comprising:
a first node configured to carry a signal;
a second node configured to carry a voltage reference;
a means for clamping a voltage of the first node with a first voltage clamp of the integrated circuit, the first voltage clamp having an anode and a cathode, wherein a first parasitic capacitance associated with the anode is less than a second parasitic
capacitance associated with the cathode, wherein the anode is operatively coupled to the first node, and the cathode is operatively coupled to the second node; and
a means for clamping the voltage of the first node with a second voltage clamp of the integrated circuit, the second voltage clamp having an anode and a cathode, wherein a third parasitic capacitance associated with the cathode is less than a fourth parasitic capacitance associated with the anode, wherein the anode is operatively coupled to the second node, and the cathode is operatively coupled to the first node.
15. The apparatus of Claim 14, further comprising a termination resistance operatively coupled between the first node and the second node.
BACKGROUND OF THE INVENTION
Field of the Invention
Embodiments of the invention relate to electronic systems, and more particularly, to transient electrical event protection circuits.
Description of the Related Technology
Certain electronic systems can be exposed to a transient electrical event, or an electrical signal of a relatively short duration having rapidly changing voltage and high power. Transient electrical events can include, for example, electrostatic discharge (ESD) events.
Transient electrical events can damage integrated circuits (ICs) of an electronic system due to overvoltage conditions and/or high levels of power dissipation over relatively small areas of the ICs. High power dissipation can increase IC temperature. ESD can lead to numerous problems, such as shallow junction damage, narrow metal damage, and surface charge accumulation.
SUMMARY OF THE INVENTION
According to a first aspect of the present invention there is provided an apparatus including an integrated circuit, wherein the apparatus includes: a first node configured to carry a signal; a second node configured to carry a voltage reference, the voltage reference being the ground reference for the signal of the first node; a first voltage clamp of the integrated circuit, the first voltage clamp having an anode and a cathode, wherein a first parasitic capacitance associated with the anode is less than a second parasitic capacitance associated with the cathode, wherein the anode is operatively coupled to the first node, and the cathode is operatively coupled to the second node, the first voltage clamp comprising at least a first rectifier; and a second voltage clamp of the integrated circuit, the second voltage clamp having an anode and a cathode, wherein a third parasitic capacitance associated with the cathode is less than a fourth parasitic capacitance associated with the anode, wherein the anode is operatively coupled to the second node, and the cathode is operatively coupled to the first node, the second voltage clamp comprising at least a second rectifier.
According to a second aspect of the present invention there is provided a method of protecting a radio frequency (RF) circuit from electrostatic discharge (ESD), wherein the method includes: carrying a signal with a first node; carrying a voltage reference with a second node; clamping a voltage of the first node with a first voltage clamp of the integrated circuit, the first voltage clamp having an anode and a cathode, wherein a first parasitic capacitance associated with the anode is less than a second parasitic capacitance associated with the cathode, wherein the anode is operatively coupled to the first node, and the cathode is operatively coupled to the second node, the first voltage clamp comprising at least a first rectifier; and clamping the voltage of the first node with a second voltage clamp of the integrated circuit, the second voltage clamp having an anode and a cathode, wherein a third parasitic capacitance associated with the cathode is less than a fourth parasitic capacitance associated with the anode, wherein the anode is operatively coupled to the second node, and the cathode is operatively coupled to the first node, the second voltage clamp comprising at least a second rectifier.
BRIEF DESCRIPTION OF THE DRAWINGS
These drawings and the associated description herein are provided to illustrate specific embodiments of the invention and are not intended to be limiting.
Figure 1 is a schematic block diagram of an electrostatic discharge (ESD) protection device implemented in
an integrated circuit.
Figure 2 is a schematic diagram of an embodiment of the ESD protection device.
Figure 3 is a schematic diagram of a model of an ESD protection device.
Figure 4 illustrates an embodiment where the ESD protection device is implemented using several diodes in series.
Figure 5 illustrates an embodiment where the ESD protection device is in parallel with an RF termination resistance.
Figure 6 is a cross-sectional view of an example of a physical layout for the high side protection circuitry of the ESD protection device of Figure 2.
Figure 7 is a cross-sectional view of an example of a physical layout for the low side protection circuitry of the ESD protection device of Figure 2.
Figure 8 illustrates a model of an embodiment where the ESD protection device is implemented using both diodes and thyristors.
Figure 9 is a cross-sectional view of the physical layout of the high side protection circuitry of the ESD protection device of Figure 8.
Figure 10 is a cross-sectional view of the physical layout of the low side protection circuitry of the ESD protection device of Figure 8.
Figure 11 is a plot of return loss ratio (RLR) versus frequency comparing a conventional device with an embodiment of the invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
The following detailed description of certain embodiments presents various descriptions of specific embodiments of the invention. However, the invention can be embodied in a multitude of different ways as defined and covered by the claims. In this description, reference is made to the drawings, where like reference numerals may indicate identical or functionally similar elements.
Terms such as above, below, over and so on as used herein refer to a device orientated as shown in the figures and should be construed accordingly. It should also be appreciated that because regions within a semiconductor device (such as a transistor) are defined by doping different parts of a semiconductor material with differing impurities or differing concentrations of impurities, discrete physical boundaries between different regions may not actually exist in the completed device; instead, regions may transition from one to another. Indeed, the higher dopant concentration regions of semiconductor devices are known as diffusion regions because the dopants tend to be at least partially defined by diffusion and thus by their very nature do not have sharp boundaries. Some boundaries as shown in the accompanying figures are of this type and are illustrated as abrupt structures merely for the assistance of the reader. In the embodiments described below, p-type regions can include a semiconductor material with a p-type dopant, such as boron. Further, n-type regions can include a semiconductor material with an n-type dopant, such as phosphorous. Further, gate dielectric can include insulators, such as high-k dielectric. Further, gates can include conductive regions with variable work functions, such as variable work-function metal or polysilicon. A skilled artisan will appreciate that various concentrations of dopants, conductive materials and insulating materials can be used in the regions described below.
Electronic circuit reliability is enhanced by providing protection devices to certain nodes of an IC, such as the IC's pins or pads. The protection devices can maintain the voltage level at the nodes within a predefined safe range by transitioning from a high-impedance state to a low-impedance state when the voltage of the transient signal reaches a trigger voltage.
Thereafter, the protection device can shunt at least a portion of the current associated with the transient signal to prevent the voltage of the transient signal from reaching a positive or negative failure voltage, which is one of the most common causes of IC damage.
Figure 1 is a block diagram showing an ESD protection device 106 incorporated in an integrated circuit 100 to protect the internal devices 108 from sudden spikes in current or voltage due to an electrostatic discharge event occurring between a signal node 102 and a ground or return path node 104. Introduction of an ESD protection device is desirable, provided the ESD circuitry does not significantly distort the input signal. An ideal ESD protection device would be entirely invisible to the internal circuitry and the input and, as such, would allow the entire range of input to pass through to the internal circuitry unaltered. However, in practice, ESD protection circuits contain parasitic elements that alter the input signals. Reducing these parasitic elements helps to avoid signal distortion.
An ESD protection device will introduce parasitic elements; most notably, junction and substrate capacitances. The junction capacitance is the result of a depletion region formed between oppositely doped implant regions and can be relatively small, such as on the order of 10 to 15 femtofarads (fF). A substrate capacitance is formed between some implanted regions and the substrate and can be many times larger than the junction capacitance. In certain embodiments, the ESD protection devices are arranged such that the signal encounters a junction capacitance first, and less of a substrate capacitance. The junction capacitance and the substrate capacitance then appear in series from the signal node, so the total effective capacitance is always smaller than the already small junction capacitance. This arrangement reduces signal loss to the substrate as compared to conventional approaches.
Figure 2 illustrates an embodiment of an ESD protection device that includes elements defined as voltage clamps, which prevent the voltage level on a signal node from reaching undesirable values. These voltage clamps of the ESD protection device are arranged such that a signal applied to a signal node 102 encounters junction capacitances, and less of the substrate capacitances. The ESD protection device includes a signal node 102 configured to carry a signal and a ground or signal return path node 104 configured to carry a voltage reference, for example, ground. The ESD protection device further includes a first voltage clamp 210 and a second voltage clamp 220. The first voltage clamp 210 has an anode and a cathode, wherein the anode is operatively coupled to the signal node 102, and the cathode is operatively coupled to the ground or signal return path node 104. The voltage clamp 210 provides protection against positive voltage transients. The first voltage clamp 210 may include at least one rectifier, for example, a diode 212. The diode 212 has a junction capacitance and a substrate capacitance, and the diode 212 is arranged such that the anode end has the junction capacitance.
The ESD protection device further includes a second voltage clamp 220. The second voltage clamp 220 provides protection against negative voltage transients. The second voltage clamp 220 has an anode and a cathode, wherein the anode is operatively coupled to the ground or signal return path node 104, and the cathode is operatively coupled to the signal node 102. The second voltage clamp 220 may include at least one rectifier, for example, a diode 222.
The diode 222 has a junction capacitance and a substrate capacitance and is arranged so that the cathode end has the junction capacitance. In alternative embodiments, the voltage clamps 210, 220 can include multiple diodes and/or thyristors, such as, but not limited to, 3 diodes and/or thyristors. Another name for a thyristor is a silicon-controlled rectifier (SCR). When multiple diodes and/or thyristors are used, the diodes and/or thyristors should be arranged such that the junction capacitances face towards the signal node 102 in the signal path and the substrate capacitances face away from the signal node 102 in the signal path.
Figure 3 is a schematic diagram of a model corresponding to the embodiment illustrated in Figure 2 implemented with single diodes 212, 222 for the first and second voltage clamps 210, 220. Examples of parasitic capacitances are shown. Parasitic capacitances not shown can also be present. In particular, the diodes 212, 222 can have asymmetric parasitic capacitances due to the substrate. Thus, while the diodes 212, 222 are nominally 2-terminal devices, the amount of capacitance seen at the anode and at the cathode can vary due to parasitic capacitances to the substrate. The first diode 212 can be modeled as an ideal diode 312, a junction capacitance 314, a substrate capacitance 316, a substrate resistance 318, and an oxide capacitance 319. The second diode 222 can be modeled as an ideal diode 322, a junction capacitance 324, a substrate capacitance 326, a substrate resistance 328, and an oxide capacitance 329.
Figure 4 illustrates an embodiment of an ESD protection device intended for relatively large radio frequency (RF) signals. To provide ESD protection for relatively large RF signals, voltage clamps 408, 410 can include multiple diodes or thyristors 408(1)-408(n), 410(1)-410(n) arranged in series to increase the forward voltage drop and arranged such that a signal at the signal node 102 encounters junction capacitances and less of the substrate capacitances. Substrate capacitances are formed away from the signal, for example, closer to the ground or signal return path node 104.
Figure 5 illustrates an embodiment where the ESD protection device includes a termination resistance 530 for the RF circuit for terminating a transmission line carrying the RF signal. The termination resistance 530 may be implemented by two resistors in parallel (not shown), one from the signal node 102 to ground 104 in the high side protection circuitry 210 and one from the signal node 102 to ground 104 in the low side protection circuitry 220. The value of the termination resistance can vary in a very broad range. A common value for the termination resistance 530 is about 50 ohms (Ω). Another common value is 75 ohms. Other values will be readily determined by one of ordinary skill in the art.
The impedance Z_C due to the parasitic capacitance is given by Equation 1:
Z_C = 1 / (jωC_ESD)    (Equation 1)
This impedance appears in parallel with the termination resistance 530. If parasitic capacitances are high, the impedance Z_C will be a relatively low value, and the overall impedance seen by the RF signal input will be less than the termination resistance. If the impedance Z_C is too low, the impedance matching of the termination resistance will be disturbed.
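For a sense of scale, the following short computation evaluates Equation 1 using a junction capacitance from the 10 to 15 fF range given earlier. It is a sketch under stated assumptions: the substrate capacitance, operating frequency and termination value are illustrative numbers chosen for the example, not values from this disclosure.
```python
import math

C_J = 12e-15     # junction capacitance (F), within the 10-15 fF range above
C_SUB = 100e-15  # substrate capacitance (F); assumed several times larger
R_TERM = 50.0    # termination resistance (ohms)
FREQ = 30e9      # RF frequency (Hz); assumed mmWave operating point

# Series combination seen from the signal node when the junction
# capacitance is encountered first; always smaller than C_J.
c_eff = C_J * C_SUB / (C_J + C_SUB)

# Magnitude of the shunt impedance Z_C = 1 / (j * omega * C).
z_c = 1.0 / (2.0 * math.pi * FREQ * c_eff)

print(f"C_eff = {c_eff * 1e15:.1f} fF (junction alone: {C_J * 1e15:.0f} fF)")
print(f"|Z_C| = {z_c:.0f} ohms in parallel with {R_TERM:.0f} ohm termination")
# With these values |Z_C| is roughly 500 ohms, large compared with 50 ohms,
# so the termination's impedance match is only mildly disturbed; a larger
# parasitic capacitance would pull |Z_C| down and degrade the match.
```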
Lower values of parasitic capacitances and higher values of Z_C are desirable to help keep the overall impedance close to the value of the termination resistance.
Figure 6 is a cross-sectional view of an example of a layout for an embodiment of the diode 212 (Figure 2) implemented in an integrated circuit with a p-type substrate, which can be a p-type epitaxial layer formed on a silicon substrate. An n-type substrate can alternatively be used (with the polarity of the diode, and all voltages and currents, reversed). The diode 212 can be fabricated using a base-collector junction ordinarily used for a PNP transistor of the integrated circuit fabrication process technology. A buried oxide layer (BOX layer) 602 can be used to provide isolation between different devices within the same region. Other methods of providing isolation might also be employed. The P-epi layer regions 606a, 606b correspond to the p-type substrate. An Nplug 610 connects an N buried layer (NBL) 604 to a contact 624, which is formed from metal layer 1 (MET1). The NBL 604 is a heavily doped region diffused relatively deeply into the substrate to help reduce the overall series resistance of the diode formed between a P+ region 616 and an N-epi layer region 612. Deep trench isolation regions 608a, 608b provide appropriate lateral isolation between the ESD protection device and surrounding regions. Deep trench isolation can encircle the ESD protection device or devices. Other means of providing lateral isolation, not limited to deep trench regions, may also be used. The deep trench isolation regions 608a, 608b may be formed from silicon oxide (SiO2) or any other customary type of isolation material, such as deep n-well, deep p-well, etc. The shallow trench isolation (STI) regions 614 provide isolation between different diffusion regions in the integrated circuit fabrication process technology.
With respect to Figure 6, the first and second P-epi layer regions 606a, 606b are disposed above a portion of the BOX layer 602 and adjacent to the deep trench isolation regions 608a, 608b. Deep trench isolation region 608a is above a portion of the BOX layer 602, to the right of the first P-epi layer region 606a, below a portion of the STI region 614a and to the left of a portion of the NBL 604 and the N-epi layer region 612. The NBL 604 is on top of a portion of the BOX layer 602, to the right of a portion of the deep trench isolation region 608a, below the N-epi layer region 612 and the Nplug 610, and to the left of a portion of the Nplug 610 and a portion of the deep trench isolation region 608b. The N-epi layer region 612 is on top of a portion of the NBL 604, to the right of a portion of the deep trench isolation region 608a, under a portion of the STI region 614a, under the P+ region 616, under a portion of the STI region 614b, and to the left of a portion of the Nplug 610.
The Nplug 610 is both above and to the right of a portion of the NBL 604, to the right of the N-epi layer region 612, under a portion of the STI region 614b, under the N+ region 626, under a portion of the STI region 614c, and to the left of a portion of the deep trench isolation region 608b. The contact 618 for the anode is above the P+ region 616 and electrically connects the P+ region 616 to the signal node 102. The N+ region 626 is above the Nplug 610, flanked by the STI regions 614b, 614c to the left and right, and below the cathode contact 624.
The cathode contact 624 is above the N+ region 626 and electrically connects the N+ region 626 to the ground or signal return path node 104. The anode of the diode 212 is formed by the P+ region 616. The cathode of the diode 212 is formed by the arrangement of regions 612, 604, 610, 626. With the arrangement explained in Figure 6, the signal node 102 is adjacent the junction capacitance formed between the P+ region 616 and the N-epi layer region 612. The substrate capacitance, across the NBL 604, is formed away from the signal node 102. The diode 212 is illustrated in the context of a p-type substrate 606. However, the teachings herein are applicable to other types of substrates. For example, the teachings herein are applicable to configurations using an n-type substrate in which the polarity of the illustrated active and well regions uses the opposite doping type.

Figure 7 is a cross-sectional view of an example of a layout for an embodiment of the diode 222 (Figure 2) implemented with a p-type substrate. An n-type substrate can alternatively be used (with the polarity of the diode reversed). The diode 222 can be fabricated using a base-collector junction ordinarily used for an NPN transistor of the integrated circuit fabrication process technology. A Pplug region 710 connects the P+ region 726 with the P buried layer (PBL) 704. The PBL 704 is a heavily doped region that is diffused relatively deeply into the substrate to help reduce the resistance of the diode 222 formed between the N+ region 716 and the P-epi layer region 706c. The P-epi layer regions 706a, 706b, 706c correspond to the p-type substrate.

With respect to Figure 7, the first and second P-epi layer regions 706a, 706b are disposed above a portion of the BOX layer 702 and adjacent to the deep trench isolation regions 708a, 708b. The deep trench isolation region 708a is above a portion of the BOX layer 702, to the right of the first P-epi layer region 706a, below a portion of the STI region 714a, and to the left of a portion of the PBL 704 and the P-epi layer region 706c. The PBL 704 is on top of a portion of the BOX layer 702, to the right of a portion of the deep trench isolation region 708a, below the P-epi layer region 706c and the Pplug 710, and to the left of a portion of the Pplug 710 and a portion of the deep trench isolation region 708b. The P-epi layer region 706c is on top of a portion of the PBL 704, to the right of a portion of the deep trench isolation region 708a, under a portion of the STI region 714a, under the N+ region 716, under a portion of the STI region 714b, and to the left of a portion of the Pplug 710.

The Pplug 710 is both above and to the right of a portion of the PBL 704, to the right of the P-epi layer region 706c, under a portion of the STI region 714b, under the P+ region 726, under a portion of the STI region 714c, and to the left of a portion of the deep trench isolation region 708b. A contact 718 for the cathode is above the N+ region 716 and electrically couples the N+ region 716 to the signal node 102. The P+ region 726 is above the Pplug 710, flanked by the STI regions 714b, 714c to the left and right, and below an anode contact 724. The anode contact 724 is above the P+ region 726 and electrically connects the P+ region 726 to the ground or signal return path node 104. The anode of the diode 222 is formed by the arrangement of regions 726, 710, 704, 706c. The cathode of the diode 222 is formed by the N+ region 716.
With the arrangement explained in Figure 7, the signal node 102 is adjacent the junction capacitance formed between the N+ region 716 and the P-epi layer region 706c. The substrate capacitance formed across the PBL 704 is formed away from the signal node 102.

Figure 8 illustrates a model of an embodiment for ESD protection in relatively large signal applications, such as from -3 V to +3 V. The model includes a first voltage clamp 810, a second voltage clamp 850, and parasitics. With relatively large signals, a relatively large number of diodes may be used to cover the voltage range. However, relatively large numbers of diodes can increase the size of the ESD protection device on an integrated circuit, so it can be advantageous to include thyristors. A thyristor can provide a larger voltage drop than a forward-biased diode for the same amount of chip area. The first voltage clamp 810 comprises a first diode 812, a thyristor 814, and a second diode 816 arranged in series, and the second voltage clamp 850 comprises a first diode 852, a thyristor 854, and a second diode 856 arranged in series. The anode of the first voltage clamp 810, which is also the anode of the first diode 812, is coupled to the signal node 102, and the cathode of the first voltage clamp 810, which is also the cathode of the second diode 816, is coupled to the signal return path node 104. The anode of the second voltage clamp 850, which is also the anode of the first diode 852, is coupled to the signal return path node 104, and the cathode of the second voltage clamp 850, which is also the cathode of the second diode 856, is coupled to the signal node 102.

The first diode 812 introduces a junction capacitance 813 and a substrate capacitance 818. The thyristor 814 introduces a junction capacitance 815. The second diode 816 introduces a junction capacitance 817 and a substrate capacitance 820. The first diode 852 introduces a junction capacitance 853 and a substrate capacitance 858. The thyristor 854 introduces a junction capacitance 855. The second diode 856 introduces a junction capacitance 857 and a substrate capacitance 860. As further discussed below, the diodes and thyristors of the first and second voltage clamps may be arranged so that the signal node 102 is adjacent the junction capacitances 813, 857. The substrate capacitances 818, 820, 860, and 858 may be positioned away from the signal node 102 to reduce signal loss.

Figure 9 illustrates an example of a cross-sectional view for the first voltage clamp 810 implemented with a p-type substrate. Figure 9 also illustrates symbols for the diodes 812, 816 and the thyristor 814 to illustrate the corresponding regions for the devices. In this view, the first voltage clamp 810 is above a BOX layer 900 and is flanked by P-epi layer regions 901a, 901b to the left and right. An NBL region 902 is formed above a portion of the BOX layer 900, to the right of a portion of a deep trench isolation region 903a, below an N-epi layer region 908 and Nplug regions 909, 910, and to the left of a portion of a deep trench isolation region 903b. The NBL 904 is above a portion of the BOX layer 900, to the right of a portion of the deep trench isolation region 903b, below the N-epi layer region 912, the Pplug 914, the N-epi layer region 916, the Pplug 918, and the N-epi layer region 920, and to the left of a portion of the deep trench isolation region 903c.
The NBL 906 is above a portion of the BOX layer 900, to the right of a portion of the deep trench isolation region 903c, below the Nplug 922, the N-epi layer region 924, and the Nplug 925, and to the left of a portion of the deep trench isolation region 903d.

In this view, the deep trench isolation region 903a is formed above the BOX layer 900, to the right of a portion of the P-epi layer region 901a, under the STI region 926, and to the left of a portion of the Nplug 909. The STI region 926 is formed above the deep trench isolation region 903a, to the right of the P-epi layer region 901a, and to the left of a portion of the Nplug 909 and the N+ region 928. The N-epi layer region 908 is above a portion of the NBL 902, flanked by the Nplug regions 909, 910, both under and to the right of the bottom and right lateral surfaces of the STI region 930, both under and to the left of the bottom and left lateral surfaces of the STI region 934, and below the P+ region 932.

The N+ region 928 is formed above the Nplug region 909 and is flanked by the STI regions 926, 930 to the left and right. The STI region 930 is both above and to the left of the N-epi layer region 908 on its bottom and right lateral surfaces, to the right of a portion of the Nplug 909 and the N+ region 928, and to the left of the P+ region 932. The P+ region 932 is above the N-epi layer region 908 and is flanked by the STI regions 930, 934 to the left and right. The STI region 934 is both above and to the right of the N-epi layer region 908 on its bottom and left lateral surfaces, to the right of the P+ region 932, and to the left of a portion of the Nplug 910 and the N+ region 936. The N+ region 936 is above the Nplug 910, to the right of the STI region 934, and to the left of the STI region 938. The STI region 938 is above the deep trench isolation region 903b, to the right of a portion of the Nplug 910 and the N+ region 936, and to the left of the N+ region 940. The deep trench isolation region 903b is above the BOX layer 900, to the right of the NBL 902 and a portion of the Nplug 910, under the STI region 938, and to the left of the NBL 904 and a portion of the N-epi layer region 912.

The Nplug 909 is above the NBL 902, to the right of a portion of the deep trench isolation region 903a and the STI region 926, under the N+ region 928, and to the left of a portion of the STI region 930 and the N-epi layer region 908. The Nplug 910 is above a portion of the NBL 902, to the right of the N-epi layer region 908 and the STI region 934, under the N+ region 936, and to the left of a portion of the deep trench isolation region 903b and the STI region 938. The N-epi layer region 912 is above a portion of the NBL 904, to the right of a portion of the deep trench isolation region 903b and the STI region 938, under the N+ region 940, and both under and to the left of the STI region 942. The N+ region 940 is above the N-epi layer region 912 and is flanked by the STI regions 938, 942 to the left and right. The STI region 942 is both above and to the right of the N-epi layer region 912 on its bottom and left lateral surfaces, is flanked to the left and right by the N+ region 940 and the P+ region 944, and is to the left of the Pplug 914.

The Pplug 914 is above a portion of the NBL 904, to the right of the N-epi layer region 912 and the STI region 942, below the P+ region 944, and to the left of the STI region 946 and the N-epi layer region 916. The P+ region 944 is above the Pplug 914 and flanked by the STI regions 942, 946 to the left and right.
The STI region 946 is both above and to the left of the N-epi layer region 916 on its bottom and right lateral surfaces, to the right of a portion of the Pplug 914 and the P+ region 944, and below a portion of a silicon oxide region 948. The N-epi layer region 916 is above a portion of the NBL 904, to the right of a portion of the Pplug 914, under and to the right of the right lateral surface of the STI region 946, below a portion of the silicon oxide regions 948, 952, below a P-SiGe: epi layer region 950, under and to the left of the left lateral surface of the STI region 954, and to the left of a portion of the Pplug 918. The STI region 954 is both above and to the right of the N-epi layer region 916 on its left lateral and bottom surfaces, and to the left of the P+ region 956 and a portion of the Pplug 918.

The Pplug 918 is above a portion of the NBL 904, to the right of a portion of the N-epi layer region 916 and the STI region 954, below the P+ region 956, and to the left of a portion of the STI region 958 and the N-epi layer region 920. The P+ region 956 is above the Pplug 918, to the right of a portion of the STI region 954, and to the left of a portion of the STI region 958. The STI region 958 is both above and to the left of the N-epi layer region 920 on its bottom and right lateral surfaces, to the right of a portion of the Pplug 918 and the P+ region 956, and to the left of the N+ region 960. The N+ region 960 is above the N-epi layer region 920, to the right of a portion of the STI region 958, and to the left of a portion of the STI region 962.

The N-epi layer region 920 is above a portion of the NBL 904, to the right of a portion of the Pplug 918, both under and to the right of the bottom and right lateral surfaces of the STI region 958, below the N+ region 960, and to the left of a portion of the STI region 962 and the deep trench isolation region 903c. The deep trench isolation region 903c is above a portion of the BOX layer 900, to the right of the NBL 904 and a portion of the N-epi layer region 920, under the STI region 962, and to the left of the NBL 906 and a portion of the Nplug 922. The STI region 962 is above the deep trench isolation region 903c and flanked by the N-epi layer region 920 and the N+ region 960 on the left and the Nplug 922 and the N+ region 964 on the right.

The N+ region 964 is above the Nplug 922, to the right of a portion of the STI region 962, and to the left of a portion of the STI region 966. The Nplug 922 is above the NBL region 906, to the right of a portion of the deep trench isolation region 903c and the STI region 962, below the N+ region 964, and to the left of a portion of the STI region 966. The N-epi layer region 924 is above a portion of the NBL 906, flanked by the Nplug regions 922, 925, both under and to the right of the bottom and right lateral surfaces of the STI region 966, both under and to the left of the bottom and left lateral surfaces of the STI region 970, and below the P+ region 968.

The STI region 966 is both above and to the left of the N-epi layer region 924 on its bottom and right lateral surfaces, to the right of a portion of the Nplug 922 and the N+ region 964, and to the left of the P+ region 968. The P+ region 968 is above a portion of the N-epi layer region 924 and flanked by the STI region 966 and the STI region 970.
The STI region 970 is both above and to the right of the N-epi layer region 924 on its bottom and left lateral surfaces, to the right of the P+ region 968, and to the left of the N+ region 972 and a portion of the Nplug 925. The N+ region 972 is above the Nplug 925 and is flanked by the STI region 970 and the STI region 974. The Nplug 925 is above a portion of the NBL 906, to the right of a portion of the N-epi layer region 924 and the STI region 970, under the N+ region 972, and to the left of a portion of the STI region 974 and the deep trench isolation region 903d. The STI region 974 is above the deep trench isolation region 903d, to the right of a portion of the Nplug 925 and the N+ region 972, and to the left of the P-epi layer region 901b. The deep trench isolation region 903d is formed above a portion of the BOX layer 900, to the right of the NBL 906 and the Nplug 925, under the STI region 974, and to the left of the P-epi layer region 901b.

In this view, the silicon oxide region 948 is above a portion of the STI region 946 and a portion of the N-epi layer region 916, and to the left of the P-SiGe: epi layer region 950 and the P-poly region 976. The P-poly regions 976, 980 and the N-poly region 978 are surrounded by the P-SiGe: epi layer region 950 on their bottom and lateral surfaces. The P-SiGe: epi layer region 950 is above a portion of the N-epi layer region 916, flanked by the silicon oxide regions 948, 952, and under and around the left and right lateral surfaces of the P-poly regions 976, 980 and the N-poly region 978. The silicon oxide region 952 is above a portion of the STI region 954 and a portion of the N-epi layer region 916, and to the right of the P-SiGe: epi layer region 950 and the P-poly region 980.

The signal node 102 is electrically connected to the P+ region 932. The ground or signal return path node 104 is electrically connected to the N+ regions 964, 972. The N+ region 928, the N+ region 936, the N+ region 940, the P+ region 944, the P+ region 956, and the N+ region 960 are electrically connected. The P+ region 968 and the N-poly region 978 are also electrically connected. The P-poly regions 976 and 980 may be electrically connected and left floating 984. The anode of the first diode 812 is formed by the P+ region 932. The cathode of the first diode 812 is formed by the arrangement of regions 908, 910, 936. The anode of the thyristor 814 is formed by the regions 944, 914. The cathode of the thyristor 814 is formed by the region 978. The anode of the second diode 816 is formed by the P+ region 968, and its cathode is formed by the regions 964, 922, 924, 925, and 972.

With the arrangement explained in Figure 9, the signal node 102 is adjacent the junction capacitance formed between the P+ region 932 and the N-epi layer region 908. The substrate capacitance 818 across the P-epi layer region 901a, the deep trench isolation region 903a, the NBL 902, and the Nplug 909 is formed away from the signal node 102. Likewise, the substrate capacitance 820 across the P-epi layer region 901b, the deep trench isolation region 903d, the NBL 906, and the Nplug 925 is formed away from the signal node 102.

Figure 10 illustrates an example of a cross-sectional view for the second voltage clamp 850 implemented with a p-type substrate. Figure 10 also illustrates symbols for the diodes 852, 856 and the thyristor 854 to illustrate the corresponding regions for the devices. In this view, the second voltage clamp 850 is formed above the BOX layer 1000, flanked by the P-epi layer regions 1001a and 1001b to the left and right.
The PBL region 1002 is formed above a portion of the BOX layer 1000, to the right of a portion of the deep trench isolation region 1003a, below the P-epi layer region 1008 and the Pplug regions 1009, 1010, and to the left of a portion of the deep trench isolation region 1003b. The NBL 1004 is above a portion of the BOX layer 1000, to the right of a portion of the deep trench isolation region 1003b, below the N-epi layer region 1012, the Pplug 1014, the N-epi layer region 1016, the Pplug 1018, and the N-epi layer region 1020, and to the left of a portion of the deep trench isolation region 1003c. The PBL 1006 is above a portion of the BOX layer 1000, to the right of a portion of the deep trench isolation region 1003c, below the Pplug 1022, the P-epi layer region 1024, and the Pplug 1025, and to the left of a portion of the deep trench isolation region 1003d.

In this view, the deep trench isolation region 1003a is formed above the BOX layer 1000, to the right of a portion of the P-epi layer region 1001a, under the STI region 1026, and to the left of a portion of the Pplug 1009. The STI region 1026 is formed above the deep trench isolation region 1003a, to the right of the P-epi layer region 1001a, and to the left of a portion of the Pplug 1009 and the P+ region 1028. The P-epi layer region 1008 is above a portion of the PBL 1002, flanked by the Pplug regions 1009, 1010, both under and to the right of the bottom and right lateral surfaces of the STI region 1030, both under and to the left of the bottom and left lateral surfaces of the STI region 1034, and below the N+ region 1032.

The P+ region 1028 is formed above the Pplug 1009 and is flanked by the STI regions 1026, 1030 to the left and right. The STI region 1030 is both above and to the left of the P-epi layer region 1008 on its bottom and right lateral surfaces, to the right of a portion of the Pplug 1009 and the P+ region 1028, and to the left of the N+ region 1032. The N+ region 1032 is above the P-epi layer region 1008 and flanked by the STI regions 1030, 1034 to the left and right. The STI region 1034 is both above and to the right of the P-epi layer region 1008 on its bottom and left lateral surfaces, to the right of the N+ region 1032, and to the left of a portion of the Pplug 1010 and the P+ region 1036. The P+ region 1036 is above the Pplug 1010, to the right of the STI region 1034, and to the left of the STI region 1038. The STI region 1038 is above the deep trench isolation region 1003b, to the right of a portion of the Pplug 1010 and the P+ region 1036, and to the left of the N+ region 1040. The deep trench isolation region 1003b is above the BOX layer 1000, to the right of the PBL 1002 and a portion of the Pplug 1010, under the STI region 1038, and to the left of the NBL 1004 and a portion of the N-epi layer region 1012.

The Pplug 1009 is above the PBL 1002, to the right of a portion of the deep trench isolation region 1003a and the STI region 1026, under the P+ region 1028, and to the left of a portion of the STI region 1030 and the P-epi layer region 1008. The Pplug 1010 is above a portion of the PBL 1002, to the right of the P-epi layer region 1008 and the STI region 1034, under the P+ region 1036, and to the left of a portion of the deep trench isolation region 1003b and the STI region 1038. The N-epi layer region 1012 is above a portion of the NBL 1004, to the right of a portion of the deep trench isolation region 1003b and the STI region 1038, under the N+ region 1040, and both under and to the left of the STI region 1042.
The N+ region 1040 is above the N-epi layer region 1012 and flanked by the STI regions 1038, 1042 to the left and right. The STI region 1042 is both above and to the right of the N-epi layer region 1012 on its bottom and left lateral surfaces, is flanked to the left and right by the N+ region 1040 and the P+ region 1044, and is to the left of the Pplug 1014.

The Pplug 1014 is above a portion of the NBL 1004, to the right of the N-epi layer region 1012 and the STI region 1042, below the P+ region 1044, and to the left of the STI region 1046 and the N-epi layer region 1016. The P+ region 1044 is above the Pplug 1014 and flanked by the STI regions 1042, 1046 to the left and right. The STI region 1046 is both above and to the left of the N-epi layer region 1016 on its bottom and right lateral surfaces, to the right of a portion of the Pplug 1014 and the P+ region 1044, and below a portion of the silicon oxide region 1048.

The N-epi layer region 1016 is above a portion of the NBL 1004, to the right of a portion of the Pplug 1014, under and to the right of the right lateral surface of the STI region 1046, below a portion of the silicon oxide regions 1048, 1052, below the P-SiGe: epi layer region 1050, under and to the left of the left lateral surface of the STI region 1054, and to the left of a portion of the Pplug 1018. The STI region 1054 is both above and to the right of the N-epi layer region 1016 on its left lateral and bottom surfaces, and to the left of the P+ region 1056 and a portion of the Pplug 1018.

The Pplug 1018 is above a portion of the NBL 1004, to the right of a portion of the N-epi layer region 1016 and the STI region 1054, below the P+ region 1056, and to the left of a portion of the STI region 1058 and the N-epi layer region 1020. The P+ region 1056 is above the Pplug 1018, to the right of a portion of the STI region 1054, and to the left of a portion of the STI region 1058. The STI region 1058 is both above and to the left of the N-epi layer region 1020 on its bottom and right lateral surfaces, to the right of a portion of the Pplug 1018 and the P+ region 1056, and to the left of the N+ region 1060. The N+ region 1060 is above the N-epi layer region 1020, to the right of a portion of the STI region 1058, and to the left of a portion of the STI region 1062.

The N-epi layer region 1020 is above a portion of the NBL 1004, to the right of a portion of the Pplug 1018, both under and to the right of the bottom and right lateral surfaces of the STI region 1058, below the N+ region 1060, and to the left of a portion of the STI region 1062 and the deep trench isolation region 1003c. The deep trench isolation region 1003c is above a portion of the BOX layer 1000, to the right of the NBL 1004 and a portion of the N-epi layer region 1020, under the STI region 1062, and to the left of the PBL 1006 and a portion of the Pplug 1022. The STI region 1062 is above the deep trench isolation region 1003c and flanked by the N-epi layer region 1020 and the N+ region 1060 on the left and the Pplug 1022 and the P+ region 1064 on the right.

The P+ region 1064 is above the Pplug 1022, to the right of a portion of the STI region 1062, and to the left of a portion of the STI region 1066. The Pplug 1022 is above the PBL 1006, to the right of a portion of the deep trench isolation region 1003c and the STI region 1062, below the P+ region 1064, and to the left of a portion of the STI region 1066.
The P-epi layer region 1024 is above a portion of the PBL 1006, flanked by the Pplug regions 1022, 1025, both under and to the right of the bottom and right lateral surfaces of the STI region 1066, both under and to the left of the bottom and left lateral surfaces of the STI region 1070, and below the N+ region 1068.

The STI region 1066 is both above and to the left of the P-epi layer region 1024 on its bottom and right lateral surfaces, to the right of a portion of the Pplug 1022 and the P+ region 1064, and to the left of the N+ region 1068. The N+ region 1068 is above a portion of the P-epi layer region 1024 and flanked by the STI region 1066 and the STI region 1070. The STI region 1070 is both above and to the right of the P-epi layer region 1024 on its bottom and left lateral surfaces, to the right of the N+ region 1068, and to the left of the P+ region 1072 and a portion of the Pplug 1025.

The P+ region 1072 is above the Pplug 1025 and is flanked by the STI region 1070 and the STI region 1074. The Pplug 1025 is above a portion of the PBL 1006, to the right of a portion of the P-epi layer region 1024 and the STI region 1070, under the P+ region 1072, and to the left of a portion of the STI region 1074 and the deep trench isolation region 1003d. The STI region 1074 is above the deep trench isolation region 1003d, to the right of a portion of the Pplug 1025 and the P+ region 1072, and to the left of the P-epi layer region 1001b. The deep trench isolation region 1003d is formed above a portion of the BOX layer 1000, to the right of the PBL region 1006 and the Pplug region 1025, under the STI region 1074, and to the left of the P-epi layer region 1001b. The BOX layer 1000 can be the same layer as the BOX layer 602 (Figure 6), the BOX layer 702 (Figure 7), and the BOX layer 900 (Figure 9).

In this view, the silicon oxide region 1048 is above a portion of the STI region 1046 and a portion of the N-epi layer region 1016, and to the left of the P-SiGe: epi layer region 1050 and the P-poly region 1076. The P-poly regions 1076, 1080 and the N-poly region 1078 are surrounded by the P-SiGe: epi layer region 1050 on their bottom and lateral surfaces. The P-SiGe: epi layer region 1050 is above a portion of the N-epi layer region 1016, flanked by the silicon oxide regions 1048, 1052, and under and around the left and right lateral surfaces of the P-poly regions 1076, 1080 and the N-poly region 1078. The silicon oxide region 1052 is above a portion of the STI region 1054 and a portion of the N-epi layer region 1016, and to the right of the P-SiGe: epi layer region 1050 and the P-poly region 1080.

The RF I/O or signal node 102 is electrically connected to the N+ region 1068. The ground or signal return path node 104 is electrically connected to the P+ regions 1028, 1036. The N+ region 1032, the N+ region 1040, the P+ region 1044, the P+ region 1056, and the N+ region 1060 are electrically connected. The P+ region 1064, the P+ region 1072, and the N-poly region 1078 are also electrically connected. The P-poly regions 1076 and 1080 may be electrically connected and left floating 1084. The anode of the first diode 852 is formed by the arrangement of the regions 1028, 1009, 1008, 1010, and 1036. The cathode of the first diode 852 is formed by the N+ region 1032. The anode of the thyristor 854 is formed by the regions 1044, 1014. The cathode of the thyristor 854 is formed by the region 1078.
The anode of the second diode 856 is formed by the regions 1064, 1022, 1024, 1025, and 1072; its cathode is formed by the N+ region 1068. With the arrangement explained in Figure 10, the signal node 102 is adjacent the junction capacitance formed between the N+ region 1068 and the P-epi layer region 1024. The substrate capacitance 860 across the P-epi layer region 1001b, the deep trench isolation region 1003d, the PBL 1006, and the Pplug 1025 is formed away from the signal node 102. Likewise, the substrate capacitance 858 across the P-epi layer region 1001a, the deep trench isolation region 1003a, the PBL 1002, and the Pplug 1009 is formed away from the signal node 102.

The embodiments of the present invention are not limited to a particular process technology. Persons with ordinary skill in the art can envision embodiments of the present invention implemented using different process technologies. For example, proper substrate isolation may be achieved using a bulk bi-CMOS process or by a process offering deep N-well or P-well isolation.

Return Loss Ratio (RLR) is a metric used to describe the deleterious characteristics of an ESD protection device inserted into an RF circuit. RLR is the ratio of the amount of power passed through to the internal circuitry versus the power reflected back to the signal input. RLR is generally expressed in decibels (dB). In the RF realm and in several applications, an RLR of less than -10 dB is desirable.

Figure 11 is a graph of RLR versus frequency for two different ESD protection devices. A first curve 1102 represents the performance of an ESD protection device according to one embodiment of the invention. A second curve 1104 represents the performance of a conventional ESD protection device. As illustrated by the curves 1102, 1104, the ESD protection device according to an embodiment of the invention outperforms the conventional ESD protection device. In the illustrated example, the embodiment can achieve an RLR of less than -10 dB for frequencies up to about 34 gigahertz (GHz), which is a higher frequency range than that of the conventional ESD protection device.
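For a rough sense of how parasitic loading maps onto an RLR figure, the Python sketch below computes the reflection coefficient of a 50 Ω termination shunted by an assumed parasitic capacitance and expresses it in dB with the negative sign convention used in Figure 11. The 100 fF value is an illustrative assumption, not measured data from this disclosure, and the simple lumped model ignores the substrate resistances and inductive effects a real device would show.

```python
import math

def rlr_db(z_load, z0=50.0):
    """Reflection coefficient magnitude in dB for a load z_load against a
    z0 reference; more negative means less power reflected."""
    gamma = (z_load - z0) / (z_load + z0)
    return 20.0 * math.log10(abs(gamma))

def clamp_input_impedance(r_term, c_esd, freq_hz):
    """Termination resistance in parallel with the ESD parasitic capacitance."""
    w = 2.0 * math.pi * freq_hz
    zc = 1.0 / (1j * w * c_esd)  # impedance of the parasitic capacitance
    return (r_term * zc) / (r_term + zc)

# Illustrative sweep (assumed values): a 50-ohm termination shunted by 100 fF.
for ghz in (1, 5, 10, 20, 30, 40):
    z_in = clamp_input_impedance(50.0, 100e-15, ghz * 1e9)
    print(f"{ghz:2d} GHz: RLR = {rlr_db(z_in):6.1f} dB")
```

In this toy model the -10 dB point falls near 20 GHz for 100 fF and moves to higher frequencies as the parasitic capacitance shrinks, which is the qualitative behavior the curves 1102, 1104 illustrate.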
Devices employing the above-described schemes can be implemented into various electronic devices. Examples of the electronic devices can include, but are not limited to, consumer electronic products, parts of the consumer electronic products, electronic test equipment, etc. Examples of the electronic devices can also include circuits of optical networks or other communication networks, including, for example, base stations, serializer/deserializers, routers, modems, and the like. The consumer electronic products can include, but are not limited to, automobiles, camcorders, cameras, digital cameras, portable memory chips, desktop computers, workstations, servers, tablets, laptop computers, video cameras, digital media players, personal digital assistants, smart phones, mobile phones, navigation devices, non-volatile storage products, kiosks, modems, cable set-top boxes, satellite television boxes, gaming consoles, home entertainment systems, and the like. Further, the electronic device can include unfinished products, including those for industrial, medical, and automotive applications.

The foregoing description and following claims may refer to elements or features as being "connected" or "coupled" together. As used herein, unless expressly stated to the contrary, "connected" means that one element/feature is directly or indirectly connected to another element/feature, and not necessarily mechanically. Likewise, unless expressly stated to the contrary, "coupled" means that one element/feature is directly or indirectly coupled to another element/feature, and not necessarily mechanically. Thus, although the drawings illustrate various examples of arrangements of elements and components, additional intervening elements, devices, features, or components may be present in an actual embodiment.

As used herein, a "node" refers to any internal or external reference point, connection point, junction, signal line, conductive element, or the like, at which a given signal, logic level, voltage, data pattern, current, or quantity is present.

Various embodiments have been described above. Although described with reference to these specific embodiments, the descriptions are intended to be illustrative and are not intended to be limiting. Various modifications and applications may occur to those skilled in the art.
In some embodiments, an apparatus includes: a first layer (145) having a first surface (144) and a second surface opposite to the first surface. The apparatus also includes a second layer (140) having: a third surface interfacing the second surface; and a fourth surface opposite the third surface. The apparatus further includes a third layer (150) having: a fifth surface interfacing the fourth surface; and a sixth surface opposite the fifth surface. The apparatus also includes a fourth layer (160) having a seventh surface interfacing the sixth surface to form a heterojunction, which generates a two-dimensional electron gas channel formed in the fourth layer. Further, the apparatus includes a recess (146, 147, 149) that extends from the first surface to the fifth surface.
1. A solid-state device, comprising: a first layer, which has a first surface and a second surface opposite to the first surface; a second layer, which has a third surface interfacing with the second surface and a fourth surface opposite to the third surface; a third layer, which has a fifth surface interfacing with the fourth surface and a sixth surface opposite to the fifth surface; a fourth layer having a seventh surface interfacing the sixth surface to form a heterojunction, the heterojunction generating a two-dimensional electron gas channel formed in the fourth layer; and a recess extending from the first surface to the fifth surface. 2. The device of claim 1, wherein the recess represents a discontinuity in the second layer, and wherein the second layer is a cap layer. 3. The device of claim 2, further comprising: a dielectric layer positioned along the recess and the first surface; and a gate layer deposited on the dielectric layer. 4. The device of claim 2, wherein a dielectric layer is located in the discontinuous portion, and wherein the thickness of the dielectric layer is substantially equal to the thickness of the second layer. 5. The device of claim 1, wherein the second layer contains a plurality of discontinuous portions. 6. The device of claim 1, wherein the fourth layer comprises a Group III-V compound semiconductor. 7. The device of claim 1, wherein the third layer comprises a Group III-V compound semiconductor. 8. A device, comprising: a first layer, which has a first surface and a second surface opposite to the first surface; a second layer, which has a third surface interfacing with the second surface and a fourth surface opposite to the third surface; a third layer, which has a fifth surface interfacing with the fourth surface and a sixth surface opposite to the fifth surface; and a fourth layer, which forms a heterojunction with the third layer, wherein a dielectric material is positioned along a recess and the first surface, and wherein a gate layer extends through the first layer and is placed on the dielectric material. 9. The device of claim 8, wherein the fourth layer comprises a Group III-V compound semiconductor. 10. The device of claim 8, wherein the fourth layer includes gallium nitride. 11. The device of claim 8, wherein the third layer comprises a Group III-V compound semiconductor. 12. The device of claim 8, wherein the dielectric material comprises silicon nitride. 13. The device of claim 8, wherein the thickness of the dielectric layer is substantially equal to the thickness of the second layer. 14. The device of claim 8, wherein the second layer contains a plurality of discontinuous portions. 15. A method, comprising: providing a device having a first layer and a second layer forming a heterojunction with the first layer, the heterojunction generating a two-dimensional electron gas channel in the second layer; depositing a third layer on the first layer, wherein the third layer includes a first surface and a second surface opposite to the first surface; depositing a fourth layer on the first surface of the third layer; and generating a recess extending from the outer surface of the first layer to the first surface of the third layer, wherein the recess represents a discontinuous portion in the third layer. 16. The method of claim 15, further comprising: depositing a dielectric material on the surface of the fourth layer, on the additional surfaces of the first layer, and in the discontinuous portion; and depositing a gate layer extending
through the first layer and placed on the dielectric material. 17. The method of claim 15, wherein the first layer comprises a Group III-V compound semiconductor. 18. The method of claim 15, wherein the second layer comprises a Group III-V compound semiconductor. 19. The method of claim 15, wherein the third layer comprises a Group III-V compound semiconductor. 20. The method of claim 15, wherein the dielectric material comprises silicon nitride.
Recessed solid state device

Background

Silicon-based integrated circuits (ICs) are used in different areas of solid-state electronic devices. One such area is power electronics. In order to improve the system-level efficiency of power electronic systems, research is being conducted to find other types of semiconductor materials that can replace silicon as power electronic semiconductors.

Summary of the invention

According to an example, a device includes a first layer having a first surface and a second surface opposite the first surface. The device also includes a second layer having a third surface interfacing with the second surface and a fourth surface opposite to the third surface. The device further includes a third layer having a fifth surface interfacing the fourth surface and a sixth surface opposite the fifth surface. The device also includes a fourth layer having a seventh surface that interfaces with the sixth surface to form a heterojunction that creates a two-dimensional electron gas channel formed in the fourth layer. Additionally, the device includes a recess extending from the first surface to the fifth surface.

According to another example, a device includes a first layer having a first surface and a second surface opposite the first surface. The device includes a second layer having a third surface interfacing the second surface and a fourth surface opposite the third surface. The device further includes a third layer having a fifth surface interfacing the fourth surface and a sixth surface opposite the fifth surface. The device further includes a fourth layer forming a heterojunction with the third layer, wherein a dielectric material is positioned along a recess and the first surface, and wherein a gate layer extends through the first layer and is placed on the dielectric material.

According to yet another example, a method includes providing a device having a first layer and a second layer forming a heterojunction with the first layer, the heterojunction creating a two-dimensional electron gas channel in the second layer. The method further includes depositing a third layer on the first layer, wherein the third layer includes a first surface and a second surface opposite the first surface. The method then includes depositing a fourth layer on the first surface of the third layer. Additionally, the method includes creating a recess that extends from the outer surface of the first layer to the first surface of the third layer, wherein the recess represents a discontinuity in the third layer.

Description of the drawings

Figure 1 is a side view of an illustrative AlGaN/GaN heterostructure field effect transistor according to various examples.

Figures 2a and 2b are side views of the illustrative gate portion of Figure 1 according to various examples.

Figure 2c is a graph depicting the results of an illustrative stress test performed on the device depicted in Figures 1 and 2a according to various examples.

Figure 3a depicts an illustrative method to selectively etch a portion of a GaN cap layer according to various examples.

Figures 3b to 3h depict illustrative flowcharts depicting selective etching of a portion of the GaN cap layer below the gate layer according to various examples.

Detailed description

Group III nitrides are materials that are being investigated to replace silicon as semiconductors in power electronic devices. Certain properties (such as polarization) of Group III nitrides can be engineered by changing their material composition.
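As a rough numerical illustration of this composition dependence (elaborated in the next paragraph), the following Python sketch estimates the band gap of Al(x)Ga(1-x)N by linearly interpolating between the GaN and AlN band gaps with a quadratic bowing correction. The end-point band gaps (about 3.4 eV and 6.2 eV) and the bowing parameter (about 1 eV) are approximate literature values, not figures taken from this description.

```python
def algan_band_gap_ev(x_al, eg_gan=3.4, eg_aln=6.2, bowing=1.0):
    """Approximate band gap of Al(x)Ga(1-x)N in eV.

    Linear (Vegard-type) interpolation between the binary end points,
    reduced by a quadratic bowing term b*x*(1-x). Parameter values are
    generic literature approximations, not from this disclosure.
    """
    if not 0.0 <= x_al <= 1.0:
        raise ValueError("Al mole fraction must lie in [0, 1]")
    return x_al * eg_aln + (1.0 - x_al) * eg_gan - bowing * x_al * (1.0 - x_al)

# Sweeping the Al fraction shows how composition widens the band gap,
# which is the degree of freedom the text describes.
for x in (0.0, 0.15, 0.25, 0.5, 1.0):
    print(f"x = {x:4.2f}: Eg ~ {algan_band_gap_ev(x):4.2f} eV")
```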
For example, depositing a III-nitride material with a wider band gap (such as AlN) on a III-nitride material with a narrower band gap (such as gallium nitride ("GaN")) can lead to the formation of an Al(X)Ga(Y)N(Z) layer (where X, Y, and Z are the percentage components of each of the corresponding elements). In some cases, the material composition of the Al(X)Ga(Y)In(Z)N(1-X-Y-Z) layer can be customized to adjust the band gap of the Al(X)Ga(Y)N(Z) layer. When grown on top of Group III nitrides (such as GaN), Al(X)Ga(Y)N(Z) can lead to the formation of a 2D electron gas ("2DEG"), which has a high current-carrying density and mobility. These characteristics, together with the excellent electrical breakdown strength of III nitrides, make III-nitride materials a strong candidate for power electronic semiconductors.

The 2DEG is realized by GaN's large conduction band offset and polarization-induced charge. The spontaneous and piezoelectric polarization in the strained AlGaN/GaN heterostructure results in a substantially high value of the electronic sheet charge density at the interface of the AlGaN/GaN heterostructure. The interface can also be referred to as a "heterojunction." AlGaN grown on GaN can be called an "AlGaN/GaN" heterostructure. In some cases, AlGaN/GaN heterostructures can be grown on sapphire substrates. In other cases, the substrate may be silicon carbide or gallium nitride.

Compared with silicon, GaN has a wider band gap. In addition, a GaN-based heterostructure field effect transistor ("HFET") can form a 2DEG-based conductive path between the source and drain of the HFET. Therefore, HFETs are preferred over silicon-based MOSFETs, especially for power electronics applications. However, before HFETs are used commercially, several difficulties of HFETs should be overcome, such as high-voltage instability, carrier retention, and reliability. In some cases, AlGaN/GaN HFETs are susceptible to unstable threshold voltages. The threshold voltage is the minimum gate-to-source voltage required to create (in the enhancement mode) or deplete (in the depletion mode) the conductive path existing between the source terminal and the drain terminal in the HFET. The AlGaN/GaN HFETs described herein operate in a depletion mode.

In some cases, the AlGaN layer of the AlGaN/GaN HFET is covered with a GaN layer (also referred to as a "GaN cap layer"). The GaN cap layer prevents carrier retention in the top layer (for example, the AlGaN) of the AlGaN/GaN heterostructure. However, in some cases, the GaN cap layer may cause instability of the threshold voltage of the HFET, that is, threshold voltage drift (for example, a decrease). The unstable threshold voltage causes off-state leakage in the HFET. In order to prevent the instability of the threshold voltage, researchers have experimented with GaN cap layers of different thicknesses to cover the top layer, but unstable threshold voltages and off-state leakage remain. The embodiments described herein reduce the degree of threshold voltage drift by etching a part of the GaN cap layer and creating a discontinuity in the GaN cap layer. The example embodiments further include selectively etching a portion of the GaN cap layer under the gate of the HFET, which results in a reduction in off-state leakage current and provides a relatively stable threshold voltage.

FIG. 1 is a side view of an illustrative AlGaN/GaN HFET 100.
Although the following description assumes that the transistor includes AlGaN/GaN with a 2DEG formed at the AlGaN/GaN interface, the example embodiments are also applicable to transistors made of other Group III nitrides in which a 2DEG is formed due to the combination of different Group III nitrides. In some examples, the AlGaN/GaN HFET 100 includes a GaN layer 160, an AlGaN layer 150, and a GaN cap layer 140. In some examples, the thickness of the AlGaN layer may be 30 nm, the thickness of the GaN cap layer may be 10 nm, and the thickness of the GaN layer 160 may range from tens of nanometers to several micrometers. The AlGaN/GaN combination leads to the accumulation of a 2DEG at the AlGaN/GaN interface/heterojunction. As mentioned above, the 2DEG is achieved through the large conduction band offset and polarization-induced charge of GaN. The spontaneous and piezoelectric polarization in the strained AlGaN/GaN heterostructure results in a substantially high value of the electronic sheet charge density at the interface of the AlGaN/GaN heterostructure. In some embodiments, the GaN cap layer 140 is referred to as a "barrier layer." The chemical composition of the barrier layer is not limited to gallium nitride. In some embodiments, the barrier layer may include a Group III nitride or a Group V nitride. In some embodiments, the barrier layer may also include a combined Group III-Group V nitride (for example, Al(X)Ga(Y)In(Z)N(1-X-Y-Z), where X, Y, and Z are the concentrations of the corresponding elements).

The AlGaN/GaN HFET 100 further includes a source 110, a gate layer 120, and a drain 130. In some examples, the source 110 and the drain 130 make contact, through ohmic contacts, with the GaN cap layer 140, the AlGaN layer 150, the GaN layer 160, and the 2DEG formed at the interface of the AlGaN/GaN heterostructure (not explicitly shown). The gate layer 120 (described in further detail in FIG. 2a below) has a gate dielectric layer between the AlGaN layer 150 and the gate layer 120 (not explicitly labeled in FIG. 1). A part of the GaN cap layer 140 is etched, and the gate layer 120 does not have direct contact with the GaN cap layer 140 (not explicitly shown in FIG. 1). Selectively etching the GaN cap layer 140 under the gate layer 120 may result in a relatively stable threshold voltage and a constant off-state leakage current.

FIG. 2a depicts the gate portion marked with the number 200 in FIG. 1, and it depicts in detail the layers existing under the gate layer 120. FIG. 2a also illustrates the position of the gate layer 120 relative to other layers present in the AlGaN/GaN HFET 100. The gate portion 200 includes a gate layer 120, a gate dielectric layer 155, and a SiN (silicon nitride) layer 145. In some examples, the gate dielectric layer 155 may include silicon nitride, aluminum oxide, silicon dioxide, and the like. In some examples, the dielectric layer 155 may have a thickness of 100 nm. The gate portion 200 also includes the portions of the GaN cap layer 140, the AlGaN layer 150, and the GaN layer 160 existing under the gate layer 120. In some examples, the silicon nitride layer 145 includes a recess that extends from the outer surface 144 of the silicon nitride layer 145 to the top surface 149 of the AlGaN layer 150. The recess forms a discontinuity 148 in the GaN cap layer 140.
The gate dielectric layer 155 is located on the outer surface 144 of the silicon nitride layer 145 and extends to the discontinuous portion 148 along a plurality of additional surfaces, labeled 146 and 147, of the silicon nitride layer 145. The gate dielectric layer 155 fills some or all of the discontinuous portion 148. The thickness of the gate dielectric layer 155 may be substantially equal to (i.e., within an error range of 10% to 15% of) the thickness of the GaN cap layer 140. In some examples, the thickness of the gate dielectric layer 155 may be different from the thickness of the GaN cap layer 140. The gate layer 120 is deposited on the gate dielectric layer 155 and, in some examples, takes a T shape or a Y shape. The gate layer 120 may take any other shape. In some examples, the GaN cap layer 140 may include multiple discontinuous portions. For example, as depicted in FIG. 2b, multiple discontinuous portions may be located between each of the individual segments of the GaN cap layer 140 and the portion 151 located within the GaN cap layer 140. In some examples, the portion 151 present in the GaN cap layer 140 may include an unetched portion of the GaN cap layer 140. In some examples, the portion 151 may include another dielectric (e.g., SiN) that is deposited to form a plurality of discrete portions in the GaN cap layer 140. In some embodiments, chemical vapor deposition may be used to form the additional discontinuous portions. In some examples, other types of deposition methods, such as atomic layer deposition or epitaxial deposition, may be used to deposit the portion 151. In some examples, the recess may extend into a certain portion of the AlGaN layer (not explicitly shown), which may reduce the distance between the gate layer 120 and the 2DEG formed at the AlGaN/GaN interface.

The shape of the recess is not limited to the shape or size shown in FIG. 2a. The recess can take any size or shape (e.g., square, rectangular, triangular, trapezoidal), and the manufacturing process used can be adapted to produce recesses of any such size and/or shape. In some examples, the shape of the recess may depend on the type of etching technique used to form the recess (for example, plasma etching, dry etching, chemical etching, etc.). The GaN cap layer 140 covering the completed HFET and the discontinuous portion (or several discontinuous portions) in the GaN cap layer 140 produce a relatively stable threshold voltage in the AlGaN/GaN HFET 100. This is because selectively etching the GaN cap layer 140 under the gate layer 120 may cause the threshold voltage to become more positive with respect to the threshold voltage of a transistor manufactured using a conventional, unetched GaN cap layer. Structurally, this can be because the recess (and the removal of the GaN cap layer 140) can reduce the distance between the gate and the 2DEG. This reduction in distance can further result in a relatively positive threshold voltage of the HFET 100, which can provide additional drift margin to the threshold voltage. In other words, the threshold voltage of the HFET 100 (including the etched GaN cap layer 140) can still drift, but the selective etching of the GaN cap layer 140 can provide extra margin for that drift by making the threshold voltage of the HFET 100 relatively positive.
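The direction of this shift can be seen from the standard first-order expression for a HEMT-like threshold voltage, Vth ~ phi_b - dEc/q - q*sigma*d/eps, where d is the effective barrier thickness between the gate and the 2DEG. The Python sketch below evaluates this expression for a few thicknesses; the barrier height, conduction band offset, polarization sheet density, and relative permittivity used are generic textbook-style assumptions, not parameters of the HFET 100.

```python
Q_E = 1.602e-19   # elementary charge, C
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def threshold_voltage_v(barrier_nm, phi_b_v=1.0, delta_ec_v=0.3,
                        sigma_pol_cm2=1e13, eps_r=9.0):
    """First-order depletion-mode HEMT threshold estimate:
    Vth ~ phi_b - dEc/q - q*sigma*d/eps. All parameter values here are
    generic assumptions, not taken from this disclosure."""
    d_m = barrier_nm * 1e-9
    sigma_m2 = sigma_pol_cm2 * 1e4  # convert cm^-2 to m^-2
    return phi_b_v - delta_ec_v - Q_E * sigma_m2 * d_m / (eps_r * EPS0)

# Thinning the effective barrier under the gate (as the recess does)
# shifts the threshold voltage in the positive direction.
for d in (40.0, 30.0, 20.0):
    print(f"barrier ~ {d:4.1f} nm -> Vth ~ {threshold_voltage_v(d):+5.1f} V")
```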
When a hard switching stress test is performed on the AlGaN/GaN HFET 100, a stable and controlled threshold voltage can be observed. FIG. 2c depicts results from one such test performed on the AlGaN/GaN HFET 100 with the discontinuity 148 of FIG. 2a present under the gate layer 120. FIG. 2c depicts a graph 220 with time on the x-axis and normalized drain current on the y-axis. The hard switching stress test includes an off-state stress test with the following conditions: V(gate-to-source) = -14 V and V(drain-to-source) = 600 V. The 2DEG channel is depleted under the gate layer 120 (and therefore the channel is in an off state) because V(gate-to-source) is less than the threshold voltage of the AlGaN/GaN HFET 100. The hard switching stress test also includes a hot carrier stress test, which turns on the AlGaN/GaN HFET 100 for a few nanoseconds while applying a V(drain-to-source) of 600 V to the drain 130. The hard switching stress test measures the degradation in the drain-to-source resistance during the off-state stress test and the hot carrier stress test. FIG. 2c depicts the output 210 of the hard switching stress test, which shows a constant leakage current, indicating a controlled and stable threshold voltage. FIG. 2c further depicts that the GaN cap layer 140 can still be used to protect the top surface of the HFET, and that selectively etching the GaN cap layer 140 below the gate layer 120 can result in a relatively stable threshold voltage and a constant off-state leakage current.

FIG. 3a depicts an illustrative method 300 to selectively etch a portion of the GaN cap layer 140 below the gate layer 120. The method 300 of FIG. 3a is described in conjunction with FIGS. 3b to 3h. The method 300 starts at step 310, where a device having a first layer and a second layer forming a heterojunction is provided. In some examples, the device may be a transistor including a Group III nitride. In some examples, the first layer is an AlGaN layer 150, and the second layer is a GaN layer 160 that forms a heterojunction at the AlGaN/GaN interface. The AlGaN layer 150 is formed when AlN is deposited on the GaN layer 160 using chemical vapor deposition. In some examples, other types of deposition methods, such as atomic layer deposition or epitaxial deposition, may be used to deposit AlN on GaN. Due to the polarization discontinuity, the AlGaN layer 150 and the GaN layer 160 form a heterojunction containing a 2DEG at the AlGaN/GaN interface.

The method 300 continues in step 320 (FIG. 3c), where a third layer is deposited on the first layer. The third layer may be a GaN layer (also referred to herein as a "GaN cap layer"). As described above, in some examples, the thickness of the GaN cap layer 140 is less than the thickness of the GaN layer 160. In some examples, the GaN cap layer 140 is used to prevent carrier retention by protecting the top surface of the AlGaN/GaN HFET (for example, the AlGaN).

The method 300 continues in step 330 with the deposition of a fourth layer on the third layer. In some examples, the silicon nitride layer 145 is deposited on the GaN cap layer 140 as a protective layer to provide electrical isolation from the AlGaN/GaN HFET 100. The SiN layer 145 has an outer surface opposite to the surface on which the SiN layer 145 is deposited. The use of silicon nitride is not limiting, and other materials, such as silicon dioxide, aluminum oxide, etc., can also be used to provide electrical isolation. The method 300 further continues in step 340, where a recess extending from the outer surface of the first layer to the third layer is formed.
In some examples, the recesses are created by first etching the SiN layer 145 using a plasma etching technique to expose a portion of the GaN cap layer 140, and then etching the exposed GaN cap layer 140. The etching of the SiN layer 145 is not limited to plasma etching, and other techniques, such as chemical etching techniques, may also be used to etch a part of the SiN layer 145. The exposed portion of the GaN cap layer 140 is etched by performing a penetration step and a main etching step. The penetration step can be performed using boron trichloride (BCl3) and is performed to remove native oxide from the GaN cap layer 140. The main etching step may be performed using a plasma etching process with a gas composed of a mixture of boron trichloride and sulfur hexafluoride. The recess forms a "discontinuous portion" in the GaN cap layer 140. In some examples, a plurality of discontinuous portions may be formed in the GaN cap layer 140 by selectively etching certain parts of the exposed portion of the GaN cap layer 140. An etching process similar to that described above can also be used to form the multiple discontinuous portions.

The steps of the method 300 can be adjusted as needed, including adding, deleting, modifying, or rearranging one or more steps. For example, the gate dielectric layer 155 may be deposited in the discontinuous portion formed in the GaN cap layer 140. The gate dielectric layer 155 may also be deposited on the outer surface of the silicon nitride layer 145 and on multiple surfaces of the silicon nitride layer 145, as shown in FIG. 3b. In addition, the gate layer 120 can also be deposited on the gate dielectric layer 155 using, for example, sputtering technology.

Within the scope of the claims, modifications to the described embodiments are possible, and other embodiments are possible.
Resistive memory cell structures and methods are described herein. One or more memory cell structures comprise a first resistive memory cell comprising a first resistance variable material and a second resistive memory cell comprising a second resistance variable material that is different than the first resistance variable material.
What is claimed is: 1. An array of resistive memory cells, comprising: a first resistive memory cell comprising a first resistance variable material; and a second resistive memory cell comprising a second resistance variable material that is different than the first resistance variable material. 2. The array of claim 1, wherein the first and second resistance variable materials include phase change materials. 3. The array of claim 1, wherein the first and second resistive memory cells each include a heater material formed on a conductive plug coupled to a select device. 4. The array of claim 1, wherein the first and second resistive memory cells each include a self-aligned structure comprising the respective first and second resistance variable materials formed between respective first and second conductive elements. 5. The array of claim 4, wherein the first conductive element is a heater and the first and second resistance variable materials are phase change materials. 6. The array of any one of claims 1 to 5, wherein the first resistive memory cell belongs to a first sub-array and the second resistive memory cell belongs to a second sub-array. 7. The array of claim 1, wherein a first region of the array including the first resistance variable material is configured to store a first type of data, and a second region of the array including the second resistance variable material is configured to store a second type of data. 8. The array of claim 7, wherein the first region of the array has a higher associated retention capability than the second region of the array. 9. The array of claim 7, wherein the first region of the array has a higher associated programming throughput than the second region of the array. 10. An array of resistive memory cells, comprising: a first number of resistive memory cells in a first region of the array and comprising a resistance variable material having a particular electrothermal property associated therewith; and a second number of resistive memory cells in a second region of the array and comprising a resistance variable material having a different electrothermal property associated therewith. 11. The array of claim 10, wherein at least one of the resistance variable material of the first number of cells and the resistance variable material of the second number of cells is implanted with an ion. 12. The array of claim 10, wherein the array includes a reactant material formed on the resistance variable material in the first and second regions, a thickness of the reactant material formed on the first region being different than a thickness of the reactant material formed on the second region. 13. The array of claim 10, wherein the array includes a first reactant material formed on the resistance variable material in the first region of the array and a second reactant material formed on the resistance variable material in the second region of the array, wherein the second reactant material is different than the first reactant material. 14. The array of any one of claims 10 to 13, wherein the first and second resistive memory cells are coupled to a same bit line. 15.
A method of forming an array of resistive memory cells, the method comprising: forming a first number of resistive memory cells in a first region of the array, the first number of cells comprising a first resistance variable material; and forming a second number of resistive memory cells in a second region of the array, the second number of cells comprising a second resistance variable material that is different than the first resistance variable material. 16. The method of claim 15, wherein forming the array includes: forming the first resistance variable material on a first conductive material formed on the first and second region; forming a first cap material on the first resistance variable material; removing the first resistance variable material and the first cap material from the second region; forming the second resistance variable material on the first cap material of the first region and on the conductive material of the second region; forming a second cap material on the second resistance variable material; removing the second resistance variable material and the second cap material from the first region; and forming separate cell stacks corresponding to the respective first and second number of resistive memory cells. 17. The method of claim 16, wherein the first conductive material is formed on a number of conductive plugs coupled to respective select devices corresponding to the first number of resistive memory cells and to the second number of resistive memory cells. 18. The method of claim 16, wherein the first and second cap materials serve as bit lines for the respective first and second number of resistive memory cells. 19. The method of any one of claims 15 to 18, wherein the first and second resistance variable materials include different chalcogenide alloys. 20. The method of claim 15, wherein forming the array includes: forming a first via in a dielectric formed on a first region and a second region of the array; forming a first resistance variable material on the first and the second region of the array, the first resistance variable material filling the first via; forming a first cap material on the first resistance variable material; removing a portion of the first resistance variable material and a portion of the first cap material from the second region of the array; forming a second via in the dielectric formed on the second region of the array; forming a second resistance variable material on the first and second region of the array, the second resistance variable material filling the second via; forming a second cap material on the second resistance variable material; removing the second resistance variable material and the second cap material from the first region of the array; performing an etch process such that the first resistance variable material is confined to the first via and the second resistance variable material is confined to the second via; and forming a metal material on the first resistance variable material formed in the first via and on the second resistance variable material formed in the second via. 21. The method of claim 20, wherein forming the metal material includes forming the metal material using a damascene process. 22. The method of any one of claims 20 to 21, wherein forming the array includes forming a metal nitride barrier material between the first and second resistance variable materials and the metal material. 23. 
A method of forming an array of resistive memory cells, the method comprising: forming a first number of resistive memory cells in a first region of the array and a second number of resistive memory cells in a second region of the array; and modifying an electrothermal property of a resistance variable material in at least one of the first region of the array and the second region of the array. 24. The method of claim 23, wherein modifying the electrothermal property of the resistance variable material in the at least one of the first and second regions includes at least one of performing a first ion implantation process on at least a portion of the first region and performing a second ion implantation on at least a portion of the second region. 25. The method of claim 24, wherein performing the at least one of the first ion implantation process and the second ion implantation process includes implanting ions through a cap material formed on the resistance variable material. 26. The method of claim 23, wherein modifying the electrothermal property of the resistance variable material in the at least one of the first and second regions includes: forming a first reactant material on the resistance variable material in the first region; and forming a second reactant material on the resistance variable material in the second region. 27. The method of claim 23, wherein modifying the electrothermal property of the resistance variable material in the at least one of the first and second regions includes thermal activation to modify the electrothermal properties of the resistance variable material in the first and second regions of the array. 28. The method of claim 23, wherein modifying the electrothermal property of the resistance variable material in the at least one of the first and second regions includes: forming a reactant material of a first thickness on the resistance variable material in the first region; and forming the reactant material of a different thickness on the resistance variable material in the second region. 29. The method of claim 26, wherein modifying the electrothermal property of the resistance variable material in the at least one of the first and second regions includes: removing a portion of the first reactant material; forming the second reactant material where the portion of the first reactant material was removed and on the resistance variable material in the second region; forming a cap material on the second reactant material; and forming separate cell stacks corresponding to the respective first and second resistive memory cells. 30. The method of claim 23, wherein modifying the electrothermal property of the resistance variable material in the at least one of the first and second regions includes: performing a first ion implantation process using an ion of a first concentration on at least a portion of the first region; and performing a second ion implantation using an ion of a second concentration on at least a portion of the second region, wherein the first ion concentration is different than the second ion concentration. 31. 
A method of forming an array of resistive memory cells, the method comprising: forming a first conductive material in a first region and a second region of the array; forming a resistance variable material on the first conductive material; forming a second conductive material on the resistance variable material; removing a portion of the second conductive material from the second region of the array; forming a third conductive material on the second conductive material in the first region and on the resistance variable material in the second region of the array; forming a fourth conductive material on the third conductive material; and defining individual resistive memory cells by removing portions of the first and second resistance variable materials and portions of the first, second, third, and fourth conductive materials. 32. The method of claim 31, wherein the second conductive material reacts with the resistance variable material in the first region and the third conductive material reacts with the resistance variable material in the second region to modify an electrothermal property of the resistance variable material. 33. The method of claim 32, wherein the reactions are thermally activated. 34. The method of any one of claims 31 to 33, wherein the second conductive material and the third conductive material are the same conductive material.
RESISTIVE MEMORY CELL STRUCTURES AND METHODS Technical Field [0001] The present disclosure relates generally to semiconductor memory devices and methods, and more particularly, to resistive memory cell structures and methods. Background [0002] Memory devices are typically provided as internal, semiconductor, integrated circuits in computers or other electronic devices. There are many different types of memory, including random-access memory (RAM), read only memory (ROM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), flash memory, resistive memory, such as phase change random access memory (PCRAM) and resistive random access memory (RRAM), and magnetic random access memory (MRAM), such as spin torque transfer random access memory (STT RAM), among others. [0003] Memory devices are utilized as non-volatile memory for a wide range of electronic applications in need of high memory densities, high reliability, and data retention without power. Non-volatile memory may be used in, for example, personal computers, portable memory sticks, solid state drives (SSDs), digital cameras, cellular telephones, portable music players such as MP3 players, movie players, and other electronic devices. [0004] Resistive memory devices, such as PCRAM devices, can include a resistance variable material such as a phase change material, for instance, which can be programmed into different resistance states to store data. The particular data stored in a phase change memory cell can be read by sensing the cell's resistance, e.g., by sensing current and/or voltage variations based on the resistance of the phase change material. Brief Description of the Drawings [0005] Figure 1 is a schematic diagram of a portion of a resistive memory array in accordance with a number of embodiments of the present disclosure. [0006] Figures 2A-2E illustrate various process stages associated with forming an array of resistive memory cells in accordance with a number of embodiments of the present disclosure. [0007] Figures 3A-3F illustrate various process stages associated with forming an array of resistive memory cells in accordance with a number of embodiments of the present disclosure. [0008] Figures 4A-4I illustrate various process stages associated with forming an array of resistive memory cells in accordance with a number of embodiments of the present disclosure. [0009] Figures 5A-5E illustrate various process stages associated with forming an array of resistive memory cells in accordance with a number of embodiments of the present disclosure. Detailed Description [0010] Resistive memory cell structures and methods are described herein. As an example, an array of resistive memory cells can include a first resistive memory cell comprising a first resistance variable material and a second resistive memory cell comprising a second resistance variable material that is different than the first resistance variable material. [0011] In a number of embodiments, an array of resistive memory cells includes a first region, e.g., portion, comprising memory cells formed to provide increased speed, e.g., program throughput, and longer endurance, e.g., increased cycling ability, as compared to a second, e.g., different, region of the array. The second region of the array can comprise cells formed to provide an increased reliability, e.g., temperature retention capability, as compared to the cells of the first region of the array. 
As an example, the first region of the array may be more suitable for data manipulation, while the second region may be more suitable for code storage, e.g., storage of sensitive data, or for data backup. [0012] A region with increased retention capability, as compared to a different region, can also include the region specified to retain data at a higher temperature for a given time than the different region, as well as the region specified to retain data at a given temperature for a longer time period than the different region. [0013] In a number of embodiments, the cells of the first region of the array can comprise a different resistance variable material, e.g., a different chalcogenide alloy, than the cells of the second region. For instance, the cells of the first region may comprise a phase change material, e.g., a Ge-Sb-Te alloy of one stoichiometry, which may be more suited to higher retention than the cells of the second region, which may comprise a phase change material, e.g., a Ge-Sb-Te alloy of a different stoichiometry, which may be more suited to increased throughput, e.g., faster set-ability. [0014] In a number of embodiments, the memory cells of the first region and of the second region can comprise the same resistance variable material. In some such embodiments, different reactant materials can be formed on the resistance variable materials of the cells of the respective first and second regions, which can provide for different cell characteristics, e.g., retention capability and/or cycling ability, between the cells of the respective first and second regions. In a number of embodiments, the cell characteristics between the cells of the respective first and second regions of the array can be different due to forming a particular reactant material to a different thickness on the cells of the first region as compared to the cells of the second region. [0015] In one or more embodiments in which the same resistance variable material is used to form the memory cells of the first and second regions of the array, the electrothermal properties of the resistance variable materials of the first and/or second regions can be modified, e.g., via ion implantation, such that the cell characteristics of the cells of the respective first and second regions are different. As such, embodiments of the present disclosure can provide benefits such as providing the ability to tailor the cell characteristics of different regions of a memory array to achieve desired cell characteristics, among other benefits. [0016] In the following detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how one or more embodiments of the disclosure may be practiced. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the embodiments of this disclosure, and it is to be understood that other embodiments may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure. [0017] The figures herein follow a numbering convention in which the first digit or digits correspond to the drawing figure number and the remaining digits identify an element or component in the drawing. Similar elements or components between different figures may be identified by the use of similar digits. For example, 102 may reference element "02" in Figure 1, and a similar element may be referenced as 202 in Figure 2. 
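The region partitioning described in paragraphs [0011]-[0015] above also suggests a simple placement policy at the system-software level: route retention-sensitive data (e.g., code or backup data) to the high-retention region and frequently rewritten working data to the high-throughput region. The following C sketch is illustrative only and is not part of the disclosed embodiments; the region base addresses, sizes, and the route_allocation helper are hypothetical assumptions.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical data classes mapped onto the two array regions described
 * above: one region favors program throughput/endurance, the other
 * favors retention (e.g., for code storage or backup). */
enum data_class { DATA_WORKING_SET, DATA_CODE_OR_BACKUP };

struct region { uintptr_t base; size_t size; size_t next_free; };

/* Assumed base addresses and sizes; a real part would expose these via
 * its datasheet or a discovery interface rather than constants. */
static struct region region_throughput = { 0x00000000u, 1u << 20, 0 };
static struct region region_retention  = { 0x00100000u, 1u << 20, 0 };

/* Route an allocation to the region whose cell characteristics match
 * the intended use of the data (a policy sketch, not a disclosed method). */
static uintptr_t route_allocation(enum data_class cls, size_t len)
{
    struct region *r = (cls == DATA_CODE_OR_BACKUP) ? &region_retention
                                                    : &region_throughput;
    if (r->next_free + len > r->size)
        return 0; /* region exhausted */
    uintptr_t addr = r->base + r->next_free;
    r->next_free += len;
    return addr;
}

A real system would likely also track wear and retention requirements per allocation; the bump-pointer allocator above is kept minimal only to show the routing decision.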
As will be appreciated, elements shown in the various embodiments herein can be added, exchanged, and/or eliminated so as to provide a number of additional embodiments of the present disclosure. In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate various embodiments of the present disclosure and are not to be used in a limiting sense. [0018] Figure 1 is a schematic diagram of a portion of a resistive memory array 102 in accordance with one or more embodiments of the present disclosure. The resistive memory array 102 includes a number of memory cells 104, each including a select device 132 coupled to a resistive storage element 112. The memory cells 104 can be formed in accordance with embodiments described herein. [0019] The resistive storage elements 112 can include a resistance variable material, e.g., a phase change material. The phase change material can be a chalcogenide, e.g., a Ge-Sb-Te (GST) material such as Ge2Sb2Te5, etc., among other resistance variable materials. The hyphenated chemical composition notation, as used herein, indicates the elements included in a particular mixture or compound, and is intended to represent all stoichiometries involving the indicated elements. Other phase change materials can include Ge-Te, In-Se, Sb-Te, Ga-Sb, In-Sb, As-Te, Al-Te, Ge-Sb-Te, Te-Ge-As, In-Sb-Te, Te-Sn-Se, Ge-Se-Ga, Bi-Se-Sb, Ga-Se-Te, Sn-Sb-Te, In-Sb-Ge, Te-Ge-Sb-S, Te-Ge-Sn-O, Te-Ge-Sn-Au, Pd-Te-Ge-Sn, In-Se-Ti-Co, Ge-Sb-Te-Pd, Ge-Sb-Te-Co, Sb-Te-Bi-Se, Ag-In-Sb-Te, Ge-Sb-Se-Te, Ge-Sn-Sb-Te, Ge-Te-Sn-Ni, Ge-Te-Sn-Pd, and Ge-Te-Sn-Pt, for example. [0020] The select devices 132 may be field effect transistors, e.g., metal oxide semiconductor field effect transistors (MOSFETs), ovonic threshold switches (OTS), bipolar junction transistors (BJTs), or diodes, among other types of select devices. Although the select device 132 shown in Figure 1 is a three-terminal select device, the select devices can be two-terminal select devices, for instance. [0021] In the example illustrated in Figure 1, the select device 132 is a gated three-terminal field effect transistor. As shown in Figure 1, the gate of each select device 132 is coupled to one of a number of access lines 105-1, 105-2, . . ., 105-N, i.e., each access line 105-1, 105-2, . . ., 105-N is coupled to a row of memory cells 104. The access lines 105-1, 105-2, . . ., 105-N may be referred to herein as "word lines." The designator "N" is used to indicate that the resistive memory array 102 can include a number of word lines. [0022] In the example illustrated in Figure 1, each resistive storage element 112 is coupled to one of a number of data/sense lines 107-1, 107-2, . . ., 107-M, i.e., each data line 107-1, 107-2, . . ., 107-M is coupled to a column of memory cells 104. The data lines 107-1, 107-2, . . ., 107-M may be referred to herein as "bit lines." The designator "M" is used to indicate that the resistive memory array 102 can include a number of bit lines. The designators M and N can have various values. For instance, M and N can be 64, 128, or 256. In some embodiments, a bit line direction is perpendicular to a word line direction, e.g., the rows of memory cells 104 and the columns of memory cells 104 are perpendicular to one another. [0023] In a number of embodiments, data lines 107-1 and 107-2 can be grouped into a sub-array 136, and other data lines (e.g., data line 107-M) can be grouped into a sub-array 134. In the example illustrated in Figure 1, a memory cell coupled to bit line 107-1 is adjacent to a memory cell coupled to bit line 107-2. Embodiments are not limited to a particular number of word lines and/or bit lines or a particular number of sub-arrays. [0024] The select devices 132 can be operated, e.g., turned on/off, to select/deselect the memory cells 104 in order to perform operations such as data programming, e.g., writing, and/or data reading operations. In operation, appropriate voltage and/or current signals, e.g., pulses, can be applied to the bit lines and word lines in order to program data to and/or read data from the memory cells 104. As an example, the data stored by a memory cell 104 of array 102 can be determined by turning on a select device 132, and sensing a current through the resistive storage element 112. The current sensed on the bit line corresponding to the memory cell 104 being read corresponds to the resistance level of the resistance variable material of resistive storage element 112, which in turn may correspond to a particular data state, e.g., a binary value. The resistive memory array 102 can have an architecture other than that illustrated in Figure 1, as will be understood by one of ordinary skill in the art. 
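As a minimal model of the read operation described in paragraph [0024] above (turn on a row's select devices via a word line, sense the resistance on the selected bit line, and map the result to a binary data state), consider the following C sketch. The resistance threshold, the low-resistance-equals-1 convention, and the assert_word_line/sense_resistance primitives are assumptions for illustration, not part of the disclosure.

#include <stdbool.h>
#include <stdint.h>

#define R_THRESHOLD_OHMS 100000u /* hypothetical set/reset boundary */

/* Assumed platform-provided primitives (declared, not defined here):
 *   assert_word_line(row) - turns on the select devices in a row
 *   sense_resistance(col) - returns sensed resistance on a bit line */
extern void     assert_word_line(unsigned row);
extern uint32_t sense_resistance(unsigned col);

/* Low resistance (e.g., crystalline phase) read as 1; high resistance
 * (e.g., amorphous phase) read as 0, which is one common convention. */
bool read_cell(unsigned row, unsigned col)
{
    assert_word_line(row);
    return sense_resistance(col) < R_THRESHOLD_OHMS;
}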
[0025] Figures 2A-2E illustrate various process stages associated with forming an array 202 of resistive memory cells in accordance with a number of embodiments of the present disclosure. In a number of embodiments, the resistive memory cells may be coupled to a same bit line. The memory cells of array 202 can be resistive memory cells such as resistive memory cells 104, as described above. As an example, the array 202 can be an array of phase change memory cells. [0026] Figure 2A illustrates a first region 234 and a second region 236 of array 202. In the example shown in Figure 2A, regions 234 and 236 include conductive plugs 230-1,...,230-4 formed between a heater material 210, e.g., a conductive material, and a substrate 232. The conductive plugs 230-1,...,230-4 are separated by a dielectric material 222 formed on substrate 232. The dielectric material 222 can be a material such as silicon dioxide or silicon nitride, for instance. The substrate 232 can be a silicon substrate, silicon on insulator (SOI) substrate, or silicon on sapphire (SOS) substrate, for instance, and can include various doped and/or undoped semiconductor materials. [0027] Although not shown in Figure 2A, select devices corresponding to the memory cells of regions 234 and 236 can be formed in substrate 232. As described further herein, resistive memory cells of region 234 can be formed so as to exhibit different cell characteristics as compared to the resistive memory cells of region 236. For instance, the cells of the respective regions 234 and 236 may comprise resistance variable materials having different characteristics, e.g., different electrothermal properties, such that cell characteristics of the cells of the respective regions 234 and 236 are different. [0028] The heater material 210 is formed on plugs 230-1,...,230-4 and can be various conductive materials such as a metal nitride, e.g., tungsten nitride and/or titanium nitride, among other conductive materials. In a number of embodiments, heater material 210 is limited in a direction perpendicular to the rows of memory cells (not shown) prior to a material, e.g., a resistance variable material, being formed on the heater material 210. As used herein, a material being "formed on" another material 
is not limited to the materials being formed directly on each other. For instance, a number of intervening materials can be formed between a first material formed on a second material, in various embodiments. [0029] Figure 2B illustrates a process stage subsequent to that shown in Figure 2A and associated with forming array 202. A resistance variable material 212, e.g., a phase change material, is formed on heater material 210. That is, the heater material 210 of regions 234 and 236 includes resistance variable material 212 formed thereon. [0030] As illustrated in Figure 2B, a reactant material 214 is formed on the resistance variable material 212 in regions 234 and 236. The reactant material 214 can be a metal reactant, such as, for example, a reactant comprising titanium, cobalt, and/or tungsten, for instance. Reactant material 214 can serve as a portion of a cap formed on resistance variable material 212. [0031] Figure 2C illustrates a portion of reactant material 214 removed from region 234 of array 202. The reactant material 214 can be removed from region 234 via an etch process, for instance, without removing the reactant material 214 from region 236. [0032] Figure 2D illustrates a process stage subsequent to that shown in Figure 2C and associated with forming array 202. Figure 2D illustrates a reactant material 216 formed on regions 234 and 236. As such, reactant material 216 is formed on resistance variable material 212 in region 234 and on reactant material 214 in region 236. Reactant material 216 can be a metal reactant, such as, for example, a reactant comprising titanium, cobalt, and/or tungsten, among others. Reactant material 216 can be the same material as reactant material 214, or it can be a different reactant material. Also, reactants 214 and 216 may be formed to the same or different thicknesses. [0033] Providing different reactant materials, e.g., reactant materials 214 and 216, in different regions, e.g., regions 234 and 236, of an array can be used to form memory cells having different cell characteristics, e.g., electrothermal properties, within the respective array regions. For instance, reactant material 214 can react with resistance variable material 212 in region 236, and a different reactant material 216 can react with resistance variable material 212 in region 234. In a number of embodiments, the reactions are thermally activated. The reactants 214 and 216 react differently with the resistance variable material 212 in the respective regions 236 and 234. As such, the electrothermal properties of the resistance variable material 212 in regions 234 and 236 can be modified with respect to each other. Therefore, resistive memory cells formed in region 234 can exhibit different cell characteristics as compared to the cell characteristics of cells formed in region 236. [0034] In a number of embodiments, the reactant materials 214 and 216 can be the same material. In such embodiments, a thickness of the reactant material 214/216 formed in the respective regions 234 and 236 can be different. Providing different thicknesses of a same reactant material 214/216 in different regions of an array can also affect the cell characteristics within the respective regions, e.g., regions 234 and 236. For instance, cells formed in a region, e.g., region 236, having a thicker reactant material may exhibit a higher retention as compared to cells formed in a region, e.g., region 234, having a thinner reactant material. 
Cells formed in a region, e.g., region 234, having a thinner reactant material may exhibit a higher programming throughput as compared to cells formed in a region, e.g., region 236, having a thicker reactant material. [0035] Figure 2D also illustrates a cap material 218, e.g., a conductive material, formed on reactant material 216 in regions 234 and 236. The cap material 218 can comprise a metal nitride such as titanium nitride and/or tungsten nitride, among various other cap materials. [0036] Figure 2E illustrates a process stage subsequent to that shown in Figure 2D and associated with forming array 202. As shown in Figure 2E, individual resistive memory cells 213-1 and 213-2 can be defined by removing portions of materials 210, 212, 214, 216, and 218. In a number of embodiments, resistance variable material 212 is between two conductive elements, e.g., between a conductive cap material 218 and heater material 210. In this example, the memory cells 213-1 and 213-2 are self-aligned and can be formed via a number of masking and etching processes, for instance. [0037] Although the example shown in Figures 2A-2E is for an array of phase change memory cells, embodiments are not so limited. For instance, in a number of embodiments, the array 202 can be an array of RRAM cells or other resistive memory cells having separate regions with different cell characteristics. [0038] Figures 3A-3F illustrate various process stages associated with forming array 302 of resistive memory cells in accordance with a number of embodiments of the present disclosure. The memory cells of array 302 can be resistive memory cells such as resistive memory cells 104, as described above. As an example, the array 302 can be an array of phase change memory cells. [0039] Figure 3A illustrates a first region 334 and a second region 336 of array 302. In the example shown in Figure 3A, regions 334 and 336 include conductive plugs 330-1,...,330-4 formed between a heater material 310, e.g., a conductive material, and a substrate 332. In a number of embodiments, heater material 310 is limited in a direction perpendicular to the rows of memory cells (not shown) prior to a material, e.g., a resistance variable material, being formed on the heater material 310. [0040] The conductive plugs 330-1,...,330-4 are separated by a dielectric material 322 formed on substrate 332. The dielectric material 322 can be a material such as silicon dioxide or silicon nitride, for instance. The substrate 332 can be a silicon substrate, silicon on insulator (SOI) substrate, or silicon on sapphire (SOS) substrate, for instance, and can include various doped and/or undoped semiconductor materials. A heater material 310 is formed on plugs 330-1,...,330-4 and can be various conductive materials, such as a metal nitride, e.g., titanium nitride and/or tungsten nitride, among other conductive materials. [0041] Figure 3B illustrates a process stage subsequent to that shown in Figure 3A and associated with forming array 302. A resistance variable material 312-1, e.g., a phase change material, is formed on heater material 310. That is, the heater material 310 of regions 334 and 336 includes resistance variable material 312-1 formed thereon. As is further illustrated in Figure 3B, a cap material 318-1 is formed on resistance variable material 312-1 in regions 334 and 336. [0042] Figure 3C illustrates a portion of resistance variable material 312-1 and cap material 318-1 removed from region 334 of array 302. 
The portions of materials 312-1 and 318-1 can be removed from region 334 via an etch process, for instance, without removing portions of materials 312-1 and 318-1 in region 336 of array 302. [0043] Figure 3D illustrates a process stage subsequent to that shown in Figure 3C and associated with forming array 302. Figure 3D illustrates resistance variable material 312-2 formed on heater material 310 in region 334 and cap material 318-1 in region 336 of array 302. Figure 3D also illustrates cap material 318-2, e.g., a conductive material, formed on resistance variable material 312-2 in regions 334 and 336. [0044] Figure 3E illustrates a portion of resistance variable material 312-2 and cap material 318-2 removed from region 336 of array 302. Removing resistance variable material 312-2 from region 336 of array 302 can result in a smooth region, including heater material 310, resistance variable material 312-1, and cap material 318-1 in region 336 of array 302. Region 334 of example array 302 can also be a smooth region, including heater material 310, resistance variable material 312-2, and cap material 318-2. In a number of embodiments, cap materials 318-1 and 318-2 can serve as bit lines for the resistive memory cells. [0045] Providing different materials, e.g., resistance variable materials 312-1 and 312-2, in different regions, e.g., regions 334 and 336, of an array can be used to form memory cells having different cell characteristics, e.g., electrothermal properties, within the respective array regions. For example, resistance variable material 312-1 can act differently within region 336 of array 302 than 312-2 acts within region 334 of array 302. As such, properties of the resistance variable materials 312-1 and 312-2 in regions 336 and 334 may be different, and resistive memory cells formed in region 334 can exhibit different cell characteristics as compared to the cell characteristics of cells formed in region 336. [0046] Figure 3F illustrates a process stage subsequent to that shown in Figure 3E and associated with forming array 302. As shown in Figure 3F, individual resistive memory cells 313-1 and 313-2, e.g., separate cell stacks, can be defined by removing portions of materials 310, 312-1, 318-1, 312-2, and 318-2. In a number of embodiments, resistance variable materials 312-1 and 312-2 are between two conductive elements, e.g., between a conductive cap material 318-1 and/or 318-2 and a heater material 310. In this example, the memory cells 313-1 and 313-2 are self-aligned and can be formed via a number of masking and etching processes, for instance. [0047] Although the example shown in Figures 3A-3F is for an array of phase change memory cells, embodiments are not so limited. For instance, in a number of embodiments, the array 302 can be an array of RRAM cells or other resistive memory cells having separate regions with different cell characteristics. [0048] Figures 4A-4I illustrate various process stages associated with forming an array 402 of resistive memory cells in accordance with a number of embodiments of the present disclosure. The memory cells of array 402 can be resistive memory cells such as resistive memory cells 104, as described above with respect to Figure 1. As an example, the array 402 can be an array of phase change memory cells, but is not so limited. [0049] Figure 4A illustrates a first region 434 and a second region 436 of array 402. In the example shown in Figure 4A, regions 434 and 436 of array 402 include conductive plugs 430-1,...
,430-4 separated by a dielectric material 422 formed on a substrate material 432. The dielectric material 422 can be a material such as silicon dioxide or silicon nitride, for instance. The substrate 432 can be a silicon substrate, silicon on insulator (SOI) substrate, or silicon on sapphire (SOS) substrate, for instance, and can include various doped and/or undoped semiconductor materials. [0050] Figure 4B illustrates a process stage subsequent to that shown in Figure 4A and associated with forming array 402. A via 420-1 is formed in a portion of region 436, within dielectric material 422. Via 420-1 can be aligned with a conductive plug in region 436, e.g., plug 430-3. [0051] Figure 4C illustrates a process stage subsequent to that shown in Figure 4B and associated with forming array 402. A resistance variable material 412-1 is formed on dielectric 422 in region 434 and region 436 of array 402. As further illustrated in Figure 4C, resistance variable material 412-1 fills via 420-1. A cap material 418-1, e.g., a conductive material, is formed on resistance variable material 412-1 and can comprise a number of conductive materials, including, for example, tungsten. [0052] Figure 4D illustrates a portion of resistance variable material 412-1 and a portion of cap material 418-1 removed from region 434 of array 402. The materials 412-1 and 418-1 can be removed from region 434 via an etch process, for instance, without removing materials 412-1 and 418-1 from region 436. [0053] Figure 4E illustrates a process stage subsequent to that shown in Figure 4D and associated with forming array 402. A via 420-2 is formed in a portion of region 434, within dielectric material 422. Via 420-2 can be aligned with a conductive plug in region 434, e.g., plug 430-1. [0054] Figure 4F illustrates a process stage subsequent to that shown in Figure 4E and associated with forming array 402. A resistance variable material 412-2 is formed on dielectric 422 in region 434 and on cap material 418-1 in region 436 of array 402. As further illustrated in Figure 4F, resistance variable material 412-2 fills via 420-2. A cap material 418-2 is formed on resistance variable material 412-2 in regions 436 and 434 and can comprise a number of conductive materials, including, for example, tungsten. [0055] Figure 4G illustrates a portion of resistance variable material 412-2 and a portion of cap material 418-2 removed from region 436 of array 402. The materials 412-2 and 418-2 can be removed from region 436 via an etch process, for instance, without removing materials 412-2 and 418-2 from region 434. [0056] Figure 4H illustrates a portion of resistance variable material 412-2 and a portion of cap material 418-2 removed from region 434 of array 402 and a portion of resistance variable material 412-1 and a portion of cap material 418-1 removed from region 436 of array 402. Materials 412-1, 412-2, 418-1, and 418-2 can be removed from regions 434 and 436 via an etch process, such that a portion of resistance variable material 412-1 is confined to and not removed from via 420-1, and a portion of resistance variable material 412-2 is confined to and not removed from via 420-2. [0057] Providing different resistance variable materials, e.g., materials 412-1 and 412-2, in different regions, e.g., regions 434 and 436, of an array can be used to form memory cells having different cell characteristics, e.g., electrothermal properties, within the respective array regions. 
For instance, the performance characteristics of resistance variable material 412-1 in via 420-1 of region 436 may be different than the performance characteristics of resistance variable material 412-2 in via 420-2 of region 434. As such, resistive memory cells formed in region 434 can exhibit different cell characteristics as compared to cells formed in region 436. [0058] Figure 4I illustrates a process stage subsequent to that shown in Figure 4H and associated with forming array 402. A metal material 424, e.g., a copper material, is formed on resistance variable material 412-1 and resistance variable material 412-2. Metal material 424 can be formed using a damascene process, and in a number of embodiments, metal material 424 can serve as a cap material, an electrode, and/or a bit line. In a number of embodiments, resistance variable materials 412-1 and 412-2 are between two conductive elements, e.g., between metal material 424, which can serve as a cap, and a conductive plug 430-3 and/or 430-1. [0059] Although not shown in Figure 4I, metal material 424 may also interact with a separate bit line. In a number of embodiments, a barrier material (not shown) can be formed between resistance variable materials 412-1 and 412-2 and metal material 424. The barrier can comprise, for example, a metal nitride, such as, for example, titanium nitride and/or tantalum nitride. [0060] Although the example shown in Figures 4A-4I is for an array of phase change memory cells, embodiments are not so limited. For instance, in a number of embodiments, the array 402 can be an array of RRAM cells or other resistive memory cells having separate regions with different cell characteristics. In the example shown in Figures 4A-4I, resistance variable materials 412-1 and 412-2 can comprise phase change materials formed on the conductive plugs, e.g., plugs 430-3 and 430-1. As such, the conductive plugs can serve as heaters for array 402. [0061] Figures 5A-5E illustrate various process stages associated with forming an array 502 of resistive memory cells in accordance with a number of embodiments of the present disclosure. The memory cells of array 502 can be resistive memory cells such as resistive memory cells 104, as described above. As an example, the array 502 can be an array of phase change memory cells. [0062] Figure 5A illustrates a first region 534 and a second region 536 of array 502. In the example shown in Figure 5A, regions 534 and 536 include conductive plugs 530-1,...,530-4 formed between a heater material 510, e.g., a conductive material, and a substrate 532. In a number of embodiments, heater material 510 is limited in a direction perpendicular to the rows of memory cells (not shown) prior to a material, e.g., a resistance variable material, being formed on the heater material 510. 
[0063] The conductive plugs 530-1,...,530-4 are separated by a dielectric material 522 formed on substrate 532. The dielectric material 522 can be a material such as silicon dioxide or silicon nitride, for instance. The substrate 532 can be a silicon substrate, silicon on insulator (SOI) substrate, or silicon on sapphire (SOS) substrate, for instance, and can include various doped and/or undoped semiconductor materials. The heater material 510 is formed on plugs 530-1,...,530-4 and can be various conductive materials such as a metal nitride, e.g., tungsten nitride and/or titanium nitride, among other conductive materials. [0064] Figure 5B illustrates a process stage subsequent to that shown in Figure 5A and associated with forming array 502. A resistance variable material 512, e.g., a phase change material, is formed on heater material 510. That is, the heater material 510 of regions 534 and 536 includes resistance variable material 512 formed thereon. [0065] As illustrated in Figure 5B, a cap material 518, e.g., a conductive material, is formed on the resistance variable material 512 in regions 534 and 536. The cap material 518 can comprise, for example, a metal nitride such as titanium nitride and/or tungsten nitride, among various other cap materials. [0066] Figure 5C illustrates a process stage subsequent to that shown in Figure 5B and associated with forming array 502. Arrows 526 represent ion implantation on at least a portion of region 536. In a number of embodiments, as a result of the ion implantation, an electrothermal property of resistance variable material 512 is modified in at least a portion of region 536. [0067] Figure 5D illustrates a process stage subsequent to that shown in Figure 5C and associated with forming array 502. Arrows 528 represent ion implantation on at least a portion of region 534. In a number of embodiments, as a result of the ion implantation, an electrothermal property of resistance variable material 512 is modified in at least a portion of region 534. It may be sufficient, in various embodiments, to perform ion implantation on at least a portion of region 536, but not region 534 (and vice versa), to differentiate electrothermal properties between resistance variable material 512 in each of regions 534 and 536. 
[0071 J Although specific embodiments have been illustrated and described herein, those of ordinary skill in the art will appreciate that an arrangement calculated to achieve the same results can be substituted for the specific embodiments shown. This disclosure is intended to cover adaptations or v ariations of various embodimen ts of the presen t discl osure, it is to be understood that the above description has been made in an illustrative fashion, and not a restrictive one. Combination of the above embodiments, and other embodiments not specifically described herein will be apparent to those of skillin the art upon reviewing the above description. The scope of the various embodiments of the present disclosure includes other applications in which the above structures and methods are used. Therefore, the scope of various embodiments of the present disclosure should be determined with reference to the appended claims, along with the full range of Equivalents to which such claims are entitled, |0072} in the foregoing Detailed Description, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the disclosed embodimeftts of the present disclosure have to use more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.
Examples include techniques for self-healing of a processor in a computing system. A processor semiconductor chip includes one or more processing cores and an embedded non-volatile random-access memory (NVRAM), the NVRAM storing instructions that, when executed by the one or more processing cores, detect an error causing a core failure, update processor configuration information to reflect the core failure, and cause reset and initialization of the processor using the updated processor configuration information.
1. A processor comprising: one or more processing cores; and an embedded non-volatile random access memory (NVRAM) coupled to the one or more processing cores, the NVRAM storing instructions that, when executed by the one or more processing cores, update processor configuration information and cause the processor to be reset and initialized using the updated processor configuration information. 2. The processor of claim 1, wherein the processor configuration information comprises one or more of the following: a set of valid and operational cores, a set of failed cores, and a set of spare cores. 3. The processor of claim 2, comprising instructions for detecting a core failure, and wherein the instructions for updating the processor configuration information comprise instructions for updating the processor configuration information to reflect the core failure. 4. The processor of claim 3, wherein the instructions for updating the processor configuration information comprise instructions for removing a failed core from the set of valid and operational cores and adding the failed core to the set of failed cores. 5. The processor of claim 2, wherein the instructions for updating the processor configuration information comprise instructions for removing a spare core from the set of spare cores and adding the spare core to the set of valid and operational cores. 6. The processor of claim 1, wherein the embedded NVRAM stores the processor configuration information. 7. The processor of claim 1, wherein the processor configuration information is stored in the NVRAM when the processor is manufactured. 8. The processor of claim 1, wherein the NVRAM comprises a three-dimensional cross-point memory. 9. A computing system comprising: system memory; and a processor coupled to the system memory, the processor comprising: one or more processing cores; and an embedded non-volatile random access memory (NVRAM) coupled to the one or more processing cores, the NVRAM storing instructions that, when executed by the one or more processing cores, update processor configuration information and cause the processor to be reset and initialized using the updated processor configuration information. 10. The computing system of claim 9, wherein the processor configuration information comprises one or more of the following: a set of valid and operational cores, a set of failed cores, and a set of spare cores. 11. The computing system of claim 10, comprising instructions for detecting a core failure, and wherein the instructions for updating the processor configuration information comprise instructions for updating the processor configuration information to reflect the core failure. 12. The computing system of claim 11, wherein the instructions for updating the processor configuration information comprise instructions for removing a failed core from the set of valid and operational cores and adding the failed core to the set of failed cores. 13. A method comprising: reading a self-healing component from NVRAM embedded in a processor semiconductor chip having one or more processing cores; and executing the self-healing component by the one or more processing cores to detect an error causing a core failure, update processor configuration information to reflect the core failure, and cause the processor semiconductor chip to be reset and initialized using the updated processor configuration information. 14. The method of claim 13, wherein the processor configuration information comprises one or more of the 
following: a set of valid and operational cores, a set of failed cores, and a set of spare cores. 15. The method of claim 13, wherein updating the processor configuration information comprises removing a failed core from the set of valid and operational cores and adding the failed core to the set of failed cores. 16. The method of claim 13, wherein updating the processor configuration information comprises removing a spare core from the set of spare cores and adding the spare core to the set of valid and operational cores. 17. The method of claim 13, comprising reading the processor configuration information from the NVRAM and storing the updated processor configuration information in the NVRAM. 18. At least one machine readable medium comprising a plurality of instructions that, in response to being executed by a system, cause the system to perform the method of any one of claims 13-17. 19. An apparatus comprising means for performing the method of any one of claims 13 to 17.
SELF-HEALING IN A COMPUTING SYSTEM USING EMBEDDED NON-VOLATILE MEMORY Technical Field The examples described herein relate generally to techniques for handling errors in a processor using embedded non-volatile memory. Background Some computing systems include processors having multiple processing cores. In some processors, the number of processing cores can be large. During operation of the computing system, one or more of the processing cores may fail due to hardware errors that cannot be overcome or corrected. In some cases, failure of a processing core results in failure of the entire multi-core processor, requiring replacement of the multi-core processor. In the case of a server, replacing a multi-core processor results in significant downtime for, for example, the server blade housing the multi-core processor, because a technician must physically remove the server blade, bring the server blade to a repair location, replace the multi-core processor, and return the server blade to its previous socket in the server. In some processing environments, such as large server centers, where it is desirable to provide a high level of service to customers, such downtime may be unacceptable. Brief Description of the Drawings FIG. 1 illustrates an example multi-core processor semiconductor chip with embedded non-volatile random access memory (NVRAM). FIG. 2 illustrates an example of a logic flow for performing self-healing of a multi-core processor semiconductor chip using embedded NVRAM in a computing system. FIG. 3 illustrates an example computing system that can perform processor self-healing using embedded NVRAM on a multi-core processor semiconductor chip. FIG. 4 shows an example storage medium. Detailed Description As contemplated in the present disclosure, self-healing of a multi-core processor semiconductor chip can be performed by executing instructions of a self-healing component stored in an embedded NVRAM on the processor semiconductor chip. The self-healing component can be executed when an unrecoverable error is detected for a core of the multi-core processor. The self-healing component can analyze processor configuration information, stored in the NVRAM when the processor semiconductor chip is fabricated, to determine how to reconfigure the processor for continued operation. The modified processor configuration information can be updated in the NVRAM by the self-healing component. In an embodiment, the self-healing component can remove the failed core from a set of valid and operational cores of the processor semiconductor chip. In an embodiment, if a spare core is available, the self-healing component can add the spare core to the set of valid and operational cores of the processor semiconductor chip. Booting of the computing system can be performed by storing a basic input/output system (BIOS) firmware architecture (e.g., a Unified Extensible Firmware Interface (UEFI) BIOS) in the embedded NVRAM on the processor semiconductor chip. Because the BIOS is located on the die within the processor semiconductor chip, and the component configuration information stored therein can be securely accessed, booting and subsequent operations can be made more efficient than in existing computing systems. The BIOS is a computer program that initializes the computing system and loads the operating system (OS) for the computing system after the power-on self-test (POST) action is completed. During a hard reboot, the BIOS runs after the self-test completes. 
In an embodiment of the invention, the BIOS is loaded into main memory from a persistent store such as the embedded NVRAM. The BIOS then loads and executes the processes that complete the boot of the computing system. As with the POST process, the BIOS code comes from a "hardwired" and persistent location; in this case, a specific address in the embedded NVRAM. The BIOS serves as an interface between the computer hardware and the OS. The BIOS includes instructions for initializing and enabling low-level hardware services of the computing system (e.g., basic keyboard, video, disk drive, I/O port, and memory controller services). BIOS initialization and configuration of the computing system occurs during the pre-boot phase. After a system reset, the processor references a predetermined address that is mapped to the NVRAM storing the BIOS in the processor semiconductor chip (i.e., on the die). The processor sequentially fetches the BIOS instructions from the NVRAM. These instructions cause the computing system to initialize its computing hardware, initialize its peripherals, and boot the OS. Once the computing system is running, the self-healing component can manage the current set of valid and operational cores. In one embodiment, the self-healing component can be part of the BIOS. In other embodiments, the self-healing component can be separate from the BIOS but also stored in the embedded NVRAM along with the processor configuration information. FIG. 1 illustrates an example processor with embedded non-volatile random access memory (NVRAM). FIG. 1 illustrates an improved approach in which processor semiconductor chip 100 includes an embedded non-volatile memory for storing information and the instructions of BIOS 106 executing on processor 100. In an embodiment, the non-volatile memory can be an embedded NVRAM 101, and BIOS 106 includes instructions for managing the boot process of the computing system. In an embodiment, BIOS 106 may conform to UEFI specification version 2.7A dated September 2017, or later versions, as disclosed at www.uefi.org. The NVRAM 101 can include component configuration information (CC Info) 108 that describes components installed in the computing system. In an embodiment, CC Info 108 may include the serial numbers of memory devices (e.g., DIMMs) installed in the system memory of the computing system. In other embodiments, other identifying information for memory devices or peripheral devices may be included in CC Info 108. The NVRAM 101 can also include a self-healing component 109. Self-healing component 109 includes instructions for managing a set of valid and operational cores within processor semiconductor chip 100. The NVRAM 101 may also include processor configuration information (PCI) 110. Processor configuration information 110 may include information identifying a set of valid and operational cores, a set of failed cores, and a set of spare cores. Initially, when the processor semiconductor chip is manufactured, the set of valid and operational cores can be set to a predetermined first number, the set of failed cores can be empty, and the set of spare cores can be set to a predetermined second number. The sum of the numbers of valid and operational cores, failed cores, and spare cores can equal the number of cores physically present in the processor semiconductor chip. 
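The disclosure does not specify an on-media layout for processor configuration information (PCI) 110. As one possible representation, the three core sets described above could be kept as per-core bitmaps, as in the following minimal C sketch; the structure name, bitmap encoding, core count, and factory values are illustrative assumptions only.

#include <stdint.h>

/* Illustrative layout for processor configuration information (PCI) 110.
 * Representing each set as a bitmap (bit i = core i) is an assumption
 * made here for illustration; it is not specified by the disclosure. */
#define MAX_CORES 64
_Static_assert(MAX_CORES == 64, "bitmaps below are 64 bits wide");

struct processor_config_info {
    uint64_t valid_and_operational; /* cores available for scheduling */
    uint64_t failed;                /* cores retired after errors     */
    uint64_t spare;                 /* cores held in reserve          */
};

/* As manufactured: a first number of cores valid, no failures, and the
 * remainder spare, so the three sets sum to the physical core count. */
static const struct processor_config_info pci_factory = {
    .valid_and_operational = 0x00FFFFFFFFFFFFFFull, /* cores 0-55  */
    .failed                = 0,
    .spare                 = 0xFF00000000000000ull, /* cores 56-63 */
};

Keeping the sets as disjoint bitmaps makes the invariant stated above (valid + failed + spare = physical cores) cheap to verify at boot.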
PCI 110 may be used by one or more of self-healing component 109, BIOS 106, the OS, or other system software to update the set of valid and operational cores, the set of failed cores, and the set of spare cores. Here, as is known in the art, the processor semiconductor chip includes other components that support a complete computing system. For example, as seen in FIG. 1, processor semiconductor chip 100 includes a plurality of central processing unit (CPU) processing cores 102_1 through 102_N (which execute program code instructions) coupled via interconnect 107 to one or more of the following: a main memory controller 103 (for interfacing with the main memory of the computing system); a peripheral control center 104 (for interfacing with peripherals of the computing system, e.g., a display, keyboard, printer, non-volatile mass storage devices, and network interfaces such as Ethernet interfaces and/or wireless network interfaces); caches 105; and dedicated processors that may be used to offload specialized and/or numerically intensive computations from the CPU cores, e.g., a graphics processing unit (GPU) and/or a digital signal processor (DSP), which are not depicted in FIG. 1. In an embodiment of the invention, the processor semiconductor chip 100 of FIG. 1 includes an embedded NVRAM 101. The NVRAM 101 can be one or more of the emerging non-volatile memory technologies, such as ferroelectric random access memory (FeRAM), dielectric random access memory, resistive random access memory (ReRAM), memristor random access memory, phase change random access memory, three-dimensional cross-point random access memory (e.g., 3D XPoint™ available from Intel Corporation), magnetic random access memory (MRAM), and spin-transfer torque magnetic random access memory (STT-MRAM). In one embodiment, NVRAM 101 is a three-dimensional cross-point RAM. Many of these technologies can be integrated into high-density logic circuit fabrication processes, for example, the fabrication process used to fabricate processor semiconductor chip 100 as depicted in FIG. 1. For example, a memory cell of an emerging non-volatile memory can store different resistance states (e.g., a cell exhibits a higher or a lower resistance depending on whether it has been programmed with a 1 or a 0) and can be located in the metallurgy of the semiconductor chip above the semiconductor substrate. Here, for example, the memory cells may be located between orthogonally directed metal wires, and a three-dimensional cross-point structure may be realized by stacking the cells and their associated orthogonal wires in the metallurgy of the semiconductor chip. In addition, access granularity can be finer than that of traditional non-volatile storage devices, which traditionally access data only in blocks. That is, an emerging non-volatile memory can be designed to be used as a true random access memory that supports data access at byte-level granularity, or some moderate multiple thereof, for each address value applied to the memory.
Significantly, due to the location of the NVRAM 101 on the die, the time to access the BIOS 106 and/or the self-healing component 109, and the time to read or write any data used by the BIOS and/or the self-healing component (e.g., component configuration information (CC Info) 108 and/or processor configuration information (PCI) 110), is significantly reduced as compared to approaches in which the BIOS and/or self-healing components and saved data are kept off the processor semiconductor chip (e.g., in EEPROM or flash memory) and made available via the peripheral control center and its associated component and interface accesses.

In various embodiments, the address space of the embedded NVRAM 101 is (at least partially) reserved for use by the BIOS and/or self-healing component. That is, the embedded NVRAM 101 can be considered a special memory resource that only the BIOS and/or the self-healing component has access to in order to read/write its particular data structures, as distinguished from main memory (which is external to the processor semiconductor chip 100 and coupled to the main memory controller 103).

Thus, in various embodiments, the instruction set architecture of one or more of the CPU cores 102 of the processor includes special memory access instructions that target the embedded NVRAM 101 rather than main memory or other memory. Thus, in various embodiments, the BIOS and/or self-healing component can execute at least some of its respective instructions primarily from main memory (e.g., program code instructions can be transferred from NVRAM 101 to main memory), but the program code of the BIOS and/or self-healing component may include special read instructions that target the embedded NVRAM 101 in order to obtain at least some of its data from the NVRAM 101. In further embodiments, BIOS 106 and/or self-healing component 109 can write to NVRAM 101 to update/save any such data with another special write instruction that targets the embedded NVRAM 101.

Here, the special nature of the memory access instructions targeting the embedded NVRAM 101 can be designed into the instruction format of the instruction set architecture of the processor CPU cores 102, where a special opcode or immediate operand specifies that the memory access is directed to the embedded NVRAM 101 instead of main memory. Alternatively, the address space of NVRAM 101 can be considered a privileged region of the main memory address space. In this case, the NVRAM 101 can be accessed using a nominal memory access instruction, but the BIOS and/or the self-healing component must be given a special privileged state to access the NVRAM 101.

According to various embodiments, BIOS 106, component configuration information (CC Info) 108, self-healing component 109, and processor configuration information (PCI) 110 may be directly programmed into the embedded NVRAM 101 as part of the processor semiconductor chip fabrication process. Thus, each time the processor is started by the computing system, the computing system does not need to access the BIOS or external self-healing code from flash memory or other mass storage devices, all of which are typically accessed through a peripheral control center or other slower interface.
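As a rough model of the privileged-address-space alternative just described, the sketch below treats NVRAM 101 as a reserved region of the memory map that a nominal access may touch only in a special privileged state; the base address, region size, and function names are invented for illustration and are not part of the disclosed instruction set architecture:

```python
# Hypothetical model: NVRAM 101 as a privileged region of the main memory
# address space, accessible via nominal reads/writes only in privileged state.
NVRAM_BASE = 0xFF000000          # invented base address for the NVRAM region
NVRAM_SIZE = 0x00100000          # invented size of the region

_memory = {}                     # stand-in for physical storage


def mem_access(address, privileged, value=None):
    """Nominal memory access; raises if the NVRAM region is touched
    without the special privileged state granted to the BIOS or
    self-healing component."""
    in_nvram = NVRAM_BASE <= address < NVRAM_BASE + NVRAM_SIZE
    if in_nvram and not privileged:
        raise PermissionError("NVRAM 101 region requires privileged state")
    if value is None:
        return _memory.get(address, 0)   # read
    _memory[address] = value             # write


mem_access(NVRAM_BASE, privileged=True, value=0x42)   # BIOS write succeeds
assert mem_access(NVRAM_BASE, privileged=True) == 0x42
```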
Since the self-healing component 109 includes instructions to be executed by one or more cores, rather than being hardwired into the circuitry of the processor semiconductor chip, and since the processor configuration information can be updated programmatically, embodiments of the present invention provide greater flexibility in managing cores. By utilizing the processor 100 with embedded (on-die) NVRAM 101, a more compact solution can therefore be implemented for BIOS 106 and/or self-healing component 109.

FIG. 2 shows an example of a logic flow using an embedded NVRAM. In some examples, the process shown in FIG. 2 depicts a process for implementing self-healing of a processor in a computing system. For these examples, the process can be implemented by, or use, the components or elements of processor 100 shown in FIG. 1. However, the process is not limited to being implemented by or using only these components or elements of processor 100.

A set of logic flows is included herein that represent example methods for performing novel aspects of the disclosed architecture. Although one or more of the methods illustrated herein are shown and described as a series of acts for the purpose of simplifying the description, those skilled in the art will understand and appreciate that these methods are not limited by the order of the acts. Accordingly, some acts may occur in a different order than shown and described herein and/or concurrently with other acts. For example, those skilled in the art will understand and appreciate that a method could alternatively be represented as a series of interrelated states or events, for example, in a state diagram. Moreover, not all of the acts shown in a method are required by a novel implementation.

A logic flow can be implemented in software, firmware, and/or hardware. In software and firmware embodiments, a logic flow may be implemented by computer executable instructions stored on at least one non-transitory computer readable medium or machine readable medium (e.g., an optical, magnetic, or semiconductor storage device). The embodiments are not limited to this context.

Turning now to FIG. 2, processing begins at block 202. At block 202, the processor semiconductor chip 100 can be reset and initialized. At block 204, BIOS 106 instructions and/or data may be read from NVRAM 101 and executed by one or more of CPU cores 102_1, 102_2, 102_3 through 102_N. In an embodiment, BIOS 106 may perform computing system initialization steps as described in UEFI Specification Version 2.7A dated September 2017 or other later versions as disclosed at www.uefi.org. In fabricating a computing system including a processor semiconductor chip, a computing system manufacturer can obtain a serial number or other identifying information that uniquely identifies each system memory device (e.g., a DIMM) and use BIOS 106 to store it in NVRAM 101 as CC Info 108. The memory information in CC Info 108 can then be updated by BIOS 106 whenever the system memory changes, for example, when the end user adds an additional DIMM or replaces one with a new DIMM. In another embodiment, information regarding system components other than memory devices may also be stored in the NVRAM 101.

In an embodiment, when the processor semiconductor chip is manufactured, information describing the valid and operational cores present in the processor semiconductor chip, as determined during verification testing, may be stored in the processor configuration information (PCI) 110 in the NVRAM 101.
In an embodiment, PCI 110 may include information identifying a set of valid and operational cores, a set of failed cores, and a set of cores held in reserve as spares. In an embodiment, the set of failed cores initially includes any cores that failed the verification test after manufacture. In an embodiment, the set of failed cores may initially be empty. As part of the boot process, the BIOS can use the processor configuration information, particularly the set of valid and operational cores, when initializing the computing system. At block 206, the OS can be loaded. Processing by the computing system then continues by running the OS and applications as is known in the art.

When the computing system is up and running, the self-healing component 109 can be executed by one or more of the cores to detect and/or process any runtime errors occurring in the processor semiconductor chip at block 208 (e.g., a machine check). In an embodiment, the self-healing component may run periodically or may be executed only when an unrecoverable error occurs in a core. In an embodiment, when an error causing a core failure is detected at block 210, the self-healing component 109 updates the core configuration in processor configuration information (PCI) 110 stored in the NVRAM. For example, if a core has failed, the self-healing component removes the core from the set of valid and operational cores and adds the failed core to the set of failed cores in the PCI. If a spare core is available, the self-healing component adds the spare core to the set of valid and operational cores and removes it from the set of spare cores. Since the core has failed, the processor semiconductor chip must be restarted with the updated processor configuration information (i.e., the computing system will no longer use the failed core; the spare core will be used instead). In an embodiment, as part of the restart process, the self-healing component may instruct one or more cores in the processor semiconductor chip to preserve any in-process work (if possible) being done by one or more cores of the processor semiconductor chip. Processing then continues at block 202 to reset and reinitialize the processor semiconductor chip. In an embodiment, if a virtual machine (VM) or hypervisor running in the computing system is halted due to a core failure, these programs can be restarted when the processor is rebooted. Therefore, downtime due to faulty cores can be minimized.

In an embodiment, updates to the processor configuration information may be performed on demand, due to actions of a system administrator or through remote management of the computing system. For example, a processor semiconductor chip can be fabricated with multiple spare cores. When the processor and/or computing system is sold, a predetermined first number of valid and operational cores can be enabled in the processor configuration information, with a predetermined second number of cores held as spares. Later, when the processor is used in a computing system, the user may want the computing system to use additional cores to increase the performance characteristics of the processor. In this case, the OS may, for example, instruct the self-healing component 109 to move a spare core to the set of valid and operational cores. In an embodiment, the increased processing power can be provided by enabling the spare core for a fee.
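The block-210 update described above can be sketched as follows, reusing the hypothetical ProcessorConfigInfo model from the earlier sketch; handle_core_failure is an invented name, and the subsequent chip restart is outside the snippet:

```python
# Sketch of the block-210 update: retire the failed core and, if possible,
# promote a spare, keeping the PCI invariant intact. The chip must then be
# restarted with the updated configuration.
def handle_core_failure(pci: ProcessorConfigInfo, failed_core: int) -> bool:
    pci.valid_cores.discard(failed_core)   # remove from valid and operational
    pci.failed_cores.add(failed_core)      # record the failure in PCI 110
    promoted = bool(pci.spare_cores)
    if promoted:
        spare = pci.spare_cores.pop()      # take one core out of reserve
        pci.valid_cores.add(spare)         # enable it in place of the failed core
    assert pci.check_invariant()
    return promoted                        # True if a spare replaced the failed core

handle_core_failure(pci, failed_core=3)    # core 3 fails; a spare is promoted
```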
Because the processor configuration information and self-healing components are stored in NVRAM, the ability to adjust the processor's processing power can be more flexible than in known systems, in which processor configuration information is hardwired in the processor circuitry.

FIG. 3 illustrates an example computing system that can perform self-healing using embedded NVRAM on a processor semiconductor chip. According to some examples, computing systems may include, but are not limited to, servers, server arrays or server farms, web servers, network servers, Internet servers, workstations, minicomputers, mainframe computers, supercomputers, network appliances, web appliances, distributed computing systems, personal computers, tablet computers, smart phones, multi-processor systems, processor-based systems, or a combination thereof.

As observed in FIG. 3, computing system 300 can include a processor semiconductor chip 301 (e.g., a multi-core processor or application processor, which can include a plurality of general purpose processing cores 315_1 through 315_X and a main memory controller (MC) 317), system memory 302, a display 303 (e.g., touch screen, flat panel), a local wired point-to-point link (e.g., USB) interface 304, various network I/O functions 355 (e.g., an Ethernet interface and/or cellular modem subsystem), a wireless local area network (e.g., WiFi) interface 306, a wireless point-to-point link (e.g., Bluetooth (BT)) interface 307, a Global Positioning System (GPS) interface 308, various sensors 309_1 through 309_Y, one or more cameras 310, a battery 311, a power management control unit (PWR MGT) 312, a speaker and microphone (SPKR/MIC) 313, and an audio encoder/decoder (codec) 314. Power management control unit 312 generally controls the power consumption of system 300.

The application processor or multi-core processor 301 can include, within the processor semiconductor chip 301, one or more general purpose processing cores 315, one or more graphics processing units (GPUs) 316, a memory management function 317 (e.g., a memory controller (MC)) and an I/O control function 318. The general purpose processing cores 315 execute the operating system and application software of the computing system. The graphics processing units 316 perform graphics intensive functions, for example, to generate graphical information that is presented on display 303. The memory control function 317 interfaces with system memory 302 to write data to and read data from system memory 302. Processor 301 may also include embedded NVRAM 319, as described above, for improving the overall operation of BIOS 106 and self-healing component 109 executing on one or more of the CPU cores 315.

The touch screen display 303, the communication interfaces 304, 355, 306, 307, the GPS interface 308, the sensors 309, the camera 310, the speaker/microphone 313 and the codec 314 can each be viewed as various forms of I/O (input and/or output) relative to the overall computing system, including, where appropriate, integrated peripherals as well (e.g., the one or more cameras 310). Depending on the implementation, various ones of these I/O components may be integrated on the application processor/multi-core processor 301, or may be located off the die or outside the package of the application processor/multi-core processor 301. The computing system also includes a non-volatile storage device 320, which can be a mass storage component of the system.

FIG. 4 shows an example of a first storage medium. As shown in FIG. 4, the first storage medium includes a storage medium 400.
Storage medium 400 can include an article of manufacture. In some examples, storage medium 400 can include any non-transitory computer readable medium or machine readable medium, such as an optical, magnetic, or semiconductor storage device. Storage medium 400 can store various types of computer executable instructions, such as instructions for implementing logic flow 200 and/or BIOS 106 and self-healing component 109. Examples of a computer readable or machine readable storage medium may include any tangible media capable of storing electronic data, including volatile or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of computer executable instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, object-oriented code, visual code, and the like. The examples are not limited in this context.

Various examples may be implemented using hardware elements, software elements, or a combination of both. In some examples, hardware elements may include devices, components, processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, ASICs, PLDs, DSPs, FPGAs, memory units, logic gates, registers, semiconductor devices, chips, microchips, chip sets, and so forth. In some examples, software elements may include software components, programs, applications, computer programs, application programs, system programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, APIs, instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an example is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or performance constraints, as desired for a given implementation.

Some examples may be described using the expression "in one example" or "example" along with their derivatives. These terms mean that a particular feature, structure, or characteristic described in connection with the example is included in at least one example. The appearances of the phrase "in one example" in various places in the specification are not necessarily all referring to the same example.

Some examples may be described using the expressions "coupled" and "connected" along with their derivatives. These terms are not necessarily intended as synonyms for each other. For example, descriptions using the terms "connected" and/or "coupled" may indicate that two or more elements are in direct physical or electrical contact with each other. The term "coupled," however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.

It is emphasized that the Abstract of the Disclosure is provided to comply with 37 C.F.R. Section 1.72(b), requiring an abstract that allows the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that the abstract will not be used to interpret or limit the scope or meaning of the claims.
In addition, in the foregoing Detailed Description, it can be seen that various features are grouped together in a single example for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed examples require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed example. Thus the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate example. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein," respectively. Moreover, the terms "first," "second," "third," and so forth are used merely as labels, and are not intended to impose numerical requirements on their objects.

Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
A memory system is disclosed. The memory system comprises a circuit board and at least two memory devices mounted on the circuit board. Each of the at least two memory devices includes a plurality of pins for receiving and providing signals. At least a first portion of the pins of one of the at least two memory devices is coupled to at least a second portion of the pins of the other of the at least two memory devices, such that a pin of the first portion coupled to a pin of the second portion forms a coupled load. The coupled load then appears as one load. Accordingly, in a system in accordance with the present invention, at least two memory devices are provided on a circuit board. Each of the at least two memory devices includes a plurality of pins. At least a portion of the pins of one of the two memory devices is in close proximity to, and coupled to, at least a portion of the pins of the other of the at least two memory devices, such that a pin on one memory device is coupled to a pin on the other memory device to form a coupled load. The coupled load then appears as one load. This is accomplished in a preferred embodiment by allowing the pins which are on opposite sides (front and back) of a printed circuit board to be represented as one load and then remapping one of the oppositely disposed pins to have the same functionality as the other oppositely disposed pin.
What is claimed is:
1. A memory system comprising: a circuit board; at least two memory devices mounted on the circuit board; each of the at least two memory devices including a plurality of pins for receiving and providing signals; wherein at least a first portion of the pins of one of the at least two memory devices are coupled to at least a second portion of the pins of the other of the at least two memory devices such that a pin of the first portion coupled to a pin of the second portion forms a coupled load, wherein the coupled load appears as one load; and a remapping device that is coupled to the at least two memory devices for ensuring that the appropriate pins are defined properly.
2. The memory system of claim 1 wherein the signals comprise address signals.
3. The memory system of claim 1 wherein the at least two memory devices are identical and disposed on opposite sides of the circuit board such that the pins on each of the at least two memory devices are mirror images of each other.
4. The memory system of claim 1 wherein the remapping device comprises a look-up table.
5. The memory system of claim 1 wherein the remapping device comprises a multiplexor arrangement.
6. A memory system comprising: a circuit board; first and second memory devices mounted on the circuit board; each of the first and second memory devices including a plurality of pins for receiving and providing address signals; wherein the first and second memory devices are identical and disposed on opposite sides of the circuit board such that the pins on each of the first and second memory devices are mirror images of each other; wherein at least a first portion of the pins of the first memory device is coupled to at least a second portion of the pins of the second memory device such that a pin of the first portion coupled to a pin of the second portion forms a coupled load, wherein the coupled load appears as one load; and a remapping device coupled to the first and second memory devices for ensuring that the appropriate pins are defined properly.
7. The memory system of claim 6 wherein the remapping device comprises a look-up table.
8. The memory system of claim 6 wherein the remapping device comprises a multiplexor arrangement.
FIELD OF THE INVENTION
The present invention relates generally to memory devices and more particularly to a memory system for use on a circuit board where the number of loads is minimized.
BACKGROUND OF THE INVENTION
Memory devices are utilized extensively in printed circuit boards (PCBs). Oftentimes, these memory devices are utilized on both sides of the PCB. To save space on such boards, it is desirable to place these devices on opposite sides of the boards such that they are mirror images of each other.

FIG. 1 illustrates a representative pin assignment of the address pins for a memory device. In this embodiment, the device 10 includes address pins A0-A11. As is seen, address pins A0-A5 are on one side of the device 10 and address pins A11-A6 are on an opposite side of the device. The memory device 10 is preferably a dynamic random access memory (DRAM). For purposes of clarity, only the portions of the memory device relevant to the explanation are described herein. One of ordinary skill in the art readily recognizes that there are other pins, such as data pins, power pins and ground pins, and the like, that are utilized on the memory device. It is also understood that the particular pin assignment locations are not relevant. The important factor is that memory devices have fixed pin assignments.

Accordingly, if identical memory devices are placed on the opposite sides of the PCB, a pin on one side of the device is opposite its corresponding pin on the other side of the PCB. Typically, to utilize the corresponding pins simultaneously, the corresponding pins are connected by an electrical connection referred to as a via, which couples the pins to a driver. In conventional systems, these connections have not created many problems, but as device sizes become smaller and device speeds become faster, these connections look more and more like transmission lines and have the attendant problems associated therewith, such as reflections on the lines. Hence, signal integrity techniques must be utilized to ensure these transmission line problems are minimized.

To illustrate this, refer to the following description in conjunction with the accompanying figures. FIGS. 2 and 3 illustrate simplified top front views of the connections of common address pins of two identical DRAMs on opposite sides of a portion of a PCB 12. For the sake of simplicity, it should be understood that these figures are for illustrative purposes only to allow for a straightforward description of this feature. One of ordinary skill in the art recognizes that there are many more pins and many more connections than are shown in these figures.

For ease of understanding, FIG. 2 illustrates the conventional coupling of corresponding pins A0 to A5 between two memory devices on the front and back of the portion of the PCB 12, and FIG. 3 illustrates the conventional coupling of corresponding pins A11-A6 between the two memory devices. Hence, in FIG. 2, pins A0-A5, the pins on top of the portion of the PCB 12, are shown on the right-hand side of FIG. 2, and their corresponding pins A0'-A5', the pins on the bottom of the PCB 12, are shown on the left-hand side. Similarly, pins A11-A6 on the top of the portion of the PCB 12 are on the left-hand side of FIG. 3, and pins A11'-A6' on the bottom of the portion of the PCB 12 are on the right-hand side. Hence, as is seen, all of the corresponding pins ((A0,A0'), (A1,A1'), etc.)
are also opposite each other. Typically, as is seen in these figures, vias 16a-16f and 16a'-16f' are utilized to connect the appropriate corresponding pins (A0,A0'), etc. to driver pins 20a-20f and 20a'-20f'. Accordingly, each of driver pins 20a-20f and 20a'-20f' is utilized to drive two loads (i.e., pins (A0,A0'), etc.).

As mentioned before, as device sizes become smaller and device speeds become higher, these connections look more and more like transmission lines. As a result, signal integrity techniques must be utilized to ensure that transmission line effects are minimized. These signal integrity techniques are complex and add significant expense to the overall system. Since cost is always a factor in integrated circuit design, it is always desirable to minimize the costs associated therewith. It is also readily apparent that as the number of pins on a memory device increases, the number of loads increases in a corresponding fashion, which is also very undesirable.

Accordingly, what is desired is a system to minimize the number of loads when utilizing multiple memory chips on each side of the board. The system should be straightforward, should not add undue complexity or cost to the system, and should be easy to implement on existing architectures. The present invention addresses such a need.

SUMMARY OF THE INVENTION
A memory system is disclosed. The memory system comprises a circuit board and at least two memory devices mounted on the circuit board. Each of the at least two memory devices includes a plurality of pins for receiving and providing signals. At least a first portion of the pins of one of the at least two memory devices is coupled to at least a second portion of the pins of the other of the at least two memory devices, such that a pin of the first portion coupled to a pin of the second portion forms a coupled load. The coupled load then appears as one load.

Accordingly, in a system in accordance with the present invention, at least two memory devices are provided on a circuit board. Each of the at least two memory devices includes a plurality of pins. At least a portion of the pins of one of the two memory devices is in close proximity to, and coupled to, at least a portion of the pins of the other of the at least two memory devices, such that a pin on one memory device is coupled to a pin on the other memory device to form a coupled load. The coupled load then appears as one load. This is accomplished in a preferred embodiment by allowing the pins which are on opposite sides (front and back) of the printed circuit board to be represented as one load and then remapping one of the oppositely disposed pins to have the same functionality as the other oppositely disposed pin.

BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is an illustration of a pin assignment for a conventional dynamic random access memory (DRAM).
FIGS. 2 and 3 illustrate simplified front views of the connections of common pins of the two DRAMs.
FIG. 4 illustrates a simplified block diagram of the connections of common pins in accordance with the present invention.
FIG. 5 illustrates a simple block diagram of a system for remapping the address pins in accordance with the present invention.

DETAILED DESCRIPTION
The present invention relates generally to memory devices and more particularly to a memory system for use on a circuit board where the number of loads is minimized. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements.
Various modifications to the preferred embodiment and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiment shown but is to be accorded the widest scope consistent with the principles and features described herein.

In a system in accordance with the present invention, the functions of the pins on one of at least two memory devices on the PCB are remapped to minimize the number of loads on the PCB. In so doing, a plurality of pins can be represented as one load. This is accomplished in a preferred embodiment by allowing the pins which are on opposite sides (front and back) of the PCB to be represented as one load and then remapping one of the oppositely disposed pins to have the same functionality as the other oppositely disposed pin. To describe the features of the present invention in more detail, refer now to the following description in conjunction with the accompanying figures.

FIG. 4 illustrates a simplified block diagram of the connections of common pins in accordance with the present invention. As is seen, pins on opposite sides of the PCB 12' are coupled together such that they appear as one load. Hence, as is seen, on the right-hand side, pins A0/A11', A1/A10', A2/A9', A3/A8', A4/A7' and A5/A6' each appear as one load to their respective driver pins 20a''-20f'' due to their close proximity to each other. Similarly, the left-hand side pins A11/A0', A10/A1', A9/A2', A8/A3', A7/A4', and A6/A5' each appear as one load to their respective driver pins 20a'''-20f'''. Accordingly, through a system and method in accordance with the present invention, the number of loads is minimized, as are the transmission line effects associated with conventional systems.

In a preferred embodiment, the memory devices are identical and are placed oppositely disposed to each other, on the front and back sides of the PCB 12'. In so doing, the pins are adjacent to each other and the connection length therebetween is minimized. However, one of ordinary skill in the art readily recognizes that the memory devices can be in any relationship to each other (e.g., side by side), the key feature being that at least a portion of the pins of each device appears as one load to a driving pin.

To effectively utilize this arrangement, particularly for address pins, the functionality of the pins must be defined properly. Addressing is important because the location of the bits associated with the address is important. The bits are important because they indicate a particular physical location in memory. Accordingly, it is important to make sure that the appropriate bits are utilized to address the appropriate locations in memory. To ensure that the pins are properly defined, a remapping of the pins occurs to ensure proper operation of the memory devices. To describe this remapping feature in more detail, refer now to the following description in conjunction with the accompanying figure.

FIG. 5 illustrates a simple block diagram of a system 100 for remapping the address pins in accordance with the present invention. The system 100 includes a remapping chip 102 which is coupled to the memory devices 104 and 106. In this embodiment, memory device 104 is on the top of the PCB (not shown) and memory device 106 is on the bottom of the PCB. Accordingly, the remapping chip 102 provides two chip select signals 110 and 112 at the appropriate time.
That is, if the remapping chip 102 is to select memory device 104, then chip select 110 is enabled, and if the remapping chip 102 is to select memory device 106, then chip select 112 is enabled. Only one of the chip select signals 110 and 112 is enabled at a time. Accordingly, when chip select 110 is provided, the address signal for the pin of the memory device 104 is provided. On the other hand, if chip select 112 is enabled, the address signal for the pin of the memory device 106 is provided. The remapping chip allows, in this case, the address pin of the bottom memory device 106 for a particular address signal to be remapped to a different location.

In a specific example, referring back to FIG. 4, if the address for pin A0 is selected, then pin A0 of the top memory device 104 is selected if chip select 110 is enabled. However, if chip select 112 is enabled, the address is remapped to pin A11 on the bottom memory device 106. This functionality can be repeated for each of the oppositely disposed pins. The remapping chip can be implemented in a variety of conventional ways, from a look-up table which directly maps the appropriate pin of one device to the pin of the other device, to a multiplexer arrangement which allows for many configurations.

Accordingly, in a system in accordance with the present invention, at least two memory devices are provided on a circuit board. Each of the at least two memory devices includes a plurality of pins. At least a portion of the pins of one of the two memory devices is in close proximity to, and coupled to, at least a portion of the pins of the other of the at least two memory devices, such that a pin on one memory device is coupled to a pin on the other memory device to form a coupled load. The coupled load then appears as one load. This is accomplished in a preferred embodiment by allowing the pins which are on opposite sides (front and back) of the printed circuit board to be represented as one load and then remapping one of the oppositely disposed pins to have the same functionality as the other oppositely disposed pin. Accordingly, through the use of a memory system in accordance with the present invention, the transmission line problems associated with conventional systems are minimized.

Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments, and those variations would be within the spirit and scope of the present invention. For example, although the present invention is described in the context of a printed circuit board, one of ordinary skill in the art readily recognizes that any insulating material upon which a plurality of memory devices are located could be utilized, and that would be within the spirit and scope of the present invention. Therefore, the memory devices could be placed on an integrated circuit, and that use would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
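For illustration, the look-up-table variant of remapping chip 102 can be sketched as below, following the FIG. 4 pairing in which pin Ai on the top device shares a load with pin A(11-i)' on the bottom device; the function and names are hypothetical rather than taken from the disclosure:

```python
# Hypothetical look-up table for remapping chip 102: addresses destined for
# the bottom device are remapped to the mirrored pin so that each pin pair
# appears as one load to its driver.
REMAP = {i: 11 - i for i in range(12)}   # A0->A11, A1->A10, ..., A11->A0

def route_address_pin(pin: int, chip_select_bottom: bool) -> int:
    """Return the pin that actually receives the address signal."""
    return REMAP[pin] if chip_select_bottom else pin

assert route_address_pin(0, chip_select_bottom=False) == 0    # chip select 110: A0
assert route_address_pin(0, chip_select_bottom=True) == 11    # chip select 112: A11
```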
Generally, this disclosure provides systems, devices, methods and computer readable media for prevention of cable swap security attacks on storage devices. A host system may include a provisioning module configured to generate a challenge-response verification key-pair and further to provide the key-pair to the storage device to enable the challenge-response verification. The system may also include a link error detection module to detect a link error between the host system and the storage device. The system may further include a challenge-response protocol module configured to initiate, in response to the link-error detection, a verification challenge from the storage device and to provide a response to the verification challenge based on the key-pair.
CLAIMS
What is claimed is:
1. A host system for securing a storage device, said host system comprising: a provisioning module to generate a challenge-response verification key-pair and further to provide said key-pair to said storage device to enable said challenge-response verification; a link error detection module to detect a link error between said host system and said storage device; and a challenge-response protocol module to initiate, in response to said link-error detection, a verification challenge from said storage device and further to provide a response to said verification challenge based on said key-pair.
2. The host system of claim 1, wherein said detected link error is associated with a communication reset of a data cable coupled between said host system and said storage device.
3. The host system of claim 1, wherein said detected link error is associated with a disconnect of a data cable coupled between said host system and said storage device.
4. The host system of any of claims 1-3, wherein said detected link error occurs during a standby-connected mode of said storage device.
5. The host system of any of claims 1-3, further comprising a power-up user authentication module to provide an authentication password to said storage device to unlock said storage device.
6. The host system of any of claims 1-3, wherein said storage device is a hard disk drive (HDD) or a solid state drive (SSD).
7. A storage device comprising: a data storage module to store data for access by a host system coupled to said storage device; a link error detection module to detect a link error between said storage device and said host system and further, in response to said detection, to cause said storage device to enter a read/write failure mode; and a challenge-response protocol module to, in response to a verification challenge initiation received from said host system, generate a verification challenge and transmit said verification challenge to said host system.
8. The storage device of claim 7, wherein said challenge-response protocol module is further to verify a challenge-response received from said host system.
9. The storage device of claim 7 or 8, wherein said challenge-response protocol module is further to cause said storage device to exit said read/write failure mode if said verification is successful.
10. The storage device of claim 7 or 8, wherein said challenge-response protocol module is further to wait for a second verification challenge initiation received from said host system if said verification is unsuccessful.
11. The storage device of claim 7 or 8, wherein said read/write failure mode is associated with a denial of access to said data storage module by said host system.
12. The storage device of claim 8, further comprising a power-up user authentication module to verify an authentication password received from said host system and further to unlock said data storage module in response to success of said verification.
13. The storage device of claim 7 or 8, wherein said detected link error is associated with a communication reset of a data cable coupled between said host system and said storage device.
14. The storage device of claim 7 or 8, wherein said detected link error is associated with a disconnect of a data cable coupled between said host system and said storage device.
15. The storage device of claim 7 or 8, wherein said detected link error occurs during a standby-connected mode of said storage device.
16.
The storage device of claim 7 or 8, further comprising an encryption module to lock and unlock said data storage module.
17. The storage device of claim 7 or 8, wherein said storage device is a hard disk drive (HDD) or a solid state drive (SSD).
18. At least one computer-readable storage medium having instructions stored thereon which when executed by a processor result in the following operations for securing a storage device, said operations comprising: generating a challenge-response verification key-pair; providing said key-pair to said storage device to enable said challenge-response verification; detecting a link error between a host system and said storage device; initiating, by said host system, in response to said link-error detection, a verification challenge from said storage device; and providing a response to said verification challenge based on said key-pair.
19. The computer-readable storage medium of claim 18, wherein said detected link error is associated with a communication reset of a data cable coupled between said host system and said storage device.
20. The computer-readable storage medium of claim 18, wherein said detected link error is associated with a disconnect of a data cable coupled between said host system and said storage device.
21. The computer-readable storage medium of any of claims 18-20, wherein said detected link error occurs during a standby-connected mode of said storage device.
22. The computer-readable storage medium of any of claims 18-20, further comprising the operation of providing an authentication password to said storage device to unlock said storage device after a power-up of said storage device.
23. At least one computer-readable storage medium having instructions stored thereon which when executed by a processor result in the following operations for securing a storage device, said operations comprising: detecting a link error between said storage device and a host system; entering a read/write failure mode in response to said detection; receiving a verification challenge initiation from said host system; generating a verification challenge in response to said receiving; and transmitting said verification challenge to said host system.
24. The computer-readable storage medium of claim 23, further comprising the operation of verifying a challenge-response received from said host system.
25. The computer-readable storage medium of claim 24, further comprising the operation of exiting said read/write failure mode if said verification is successful.
26. The computer-readable storage medium of claim 24, further comprising the operation of waiting for a second verification challenge initiation from said host system if said verification is unsuccessful.
27. The computer-readable storage medium of claim 23 or 24, wherein said read/write failure mode is associated with a denial of access of said host system to data stored on said storage device.
28. The computer-readable storage medium of claim 24, further comprising the operations of verifying an authentication password received from said host system and unlocking data stored on said storage device in response to success of said verification.
29. The computer-readable storage medium of claim 23 or 24, wherein said detected link error is associated with a communication reset of a data cable coupled between said host system and said storage device.
30.
The computer-readable storage medium of claim 23 or 24, wherein said detected link error is associated with a disconnect of a data cable coupled between said host system and said storage device.
PREVENTION OF CABLE-SWAP SECURITY ATTACK ON STORAGE DEVICES
FIELD
The present disclosure relates to security of storage devices, and more particularly, to prevention of cable swap security attacks on storage devices.
BACKGROUND
Storage devices, such as hard disk drives (HDDs) and solid state drives (SSDs), typically provide some level of security for data stored on the media while the device is at rest (e.g., powered off). Depending on implementation and standard requirements, user and/or administrator passwords may be required to establish security keys to encrypt/decrypt the stored data. When the device powers up, a password may be required to unlock the device.

A problem with these techniques is that the devices (and the data) are susceptible to cable-swap attacks. In this type of attack, the data cable is removed from the device while maintaining power to the device. The device is then connected to the attacker's system, and the attacker is able to access (read and write) all data present on the drive without requiring any password knowledge. Since the device has not lost power during the attack, it remains unlocked and continues to process all reads and writes from the attacking system. One existing approach to handling this problem involves the use of additional encryption layers between the host and the storage device for all data reads and writes. However, this adds cost and complexity, requires additional power and reduces performance. Another existing approach involves the use of a specially designed device side connector that combines data and power. Unfortunately, this technique suffers from a relatively larger form factor and remains vulnerable to an attacker that can disassemble the connector casing to apply an alternate power source to the device's power pins and then proceed with the cable swap attack.

This type of cable swap attack is of growing concern as computer systems are expected to spend increased time in standby/connected-standby modes, and the storage devices associated with these systems remain unlocked during this period.
Systems in these modes are susceptible to relatively easy theft, data extraction and data-wipes/replacements, since no password is required.

BRIEF DESCRIPTION OF THE DRAWINGS
Features and advantages of embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals depict like parts, and in which:
Figure 1 illustrates a top level system diagram of an example embodiment consistent with the present disclosure;
Figure 2 illustrates a block diagram of one example embodiment consistent with the present disclosure;
Figure 3 illustrates a flowchart of operations of one example embodiment consistent with the present disclosure;
Figure 4 illustrates a flowchart of operations of another example embodiment consistent with the present disclosure;
Figure 5 illustrates a flowchart of operations of another example embodiment consistent with the present disclosure;
Figure 6 illustrates a flowchart of operations of another example embodiment consistent with the present disclosure; and
Figure 7 illustrates a system diagram of a platform of another example embodiment consistent with the present disclosure.
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.

DETAILED DESCRIPTION
Generally, this disclosure provides systems, devices, methods and computer readable media for prevention of cable swap security attacks on storage devices. In one embodiment, a host system may be coupled to a storage system by a data cable. Either or both of the host and storage systems may be configured to detect a link error in response to a disconnection of the data cable, a communication reset or a similar disruptive event that may indicate a data cable swap even though power continues to be supplied to the storage system. The detected link error may then trigger a requirement for host verification/re-verification through a challenge-response protocol. The storage system may be configured to fail all read/write attempts from the host until a successful verification occurs. The challenge-response protocol may be based on a key-pair that is provided during an initial provisioning of the storage system. The key-pair may be a public/private encryption key pair or may be based on a shared secret.

Thus, in some embodiments, host verification, through a challenge-response protocol, may be required whenever a data cable swap is suspected while the storage system remains powered up. This verification may be performed in addition to the user authentication that is normally required after a power cycle of the storage system.

Figure 1 illustrates a top level system diagram 100 of one example embodiment consistent with the present disclosure. The host system 102 is shown to include a link error detection module 110a and a challenge-response protocol handling module 112a. Storage system 108 is shown to include a data storage module 114, a link error detection module 110b and a challenge-response protocol handling module 112b. In some embodiments, data storage module 114 may be a hard disk drive (HDD), a solid state drive (SSD), a combination of the two, or any other suitable data storage mechanism. Storage system 108 may be coupled to host system 102 through data cable 106.
In some embodiments, data cable 106 may be a Serial Advanced Technology Attachment (SATA) cable, a Non-Volatile Memory Express (NVMe) cable or a Serial Attached Small Computer System Interface (SAS) cable. NVMe may be implemented as a Peripheral Component Interconnect Express (PCIe) link management protocol. Storage system 108 may receive power through power cable 104, which may provide power from system 102 or, in some embodiments, from an alternative power source 116.

Link error detection modules 110a, 110b may be configured, on the host system and storage system sides respectively, to detect a disconnection of the data cable 106. Challenge-response protocol handling modules 112a, 112b may be configured to verify the host system 102 after reconnection (indicating a possible swap of the data cable), as will be described in greater detail below.

Figure 2 illustrates a block diagram 200 of one example embodiment consistent with the present disclosure. The host system 102 is shown to include a provisioning module 202, a power-up user authentication module 204a and a read/write command processing module 206a, in addition to the link error detection module 110a and challenge-response protocol handling module 112a. Storage system 108 is shown to include a power-up user authentication module 204b, a read/write command processing module 206b and an encryption module 210, in addition to the link error detection module 110b, challenge-response protocol module 112b and data storage module 114.

Provisioning module 202 may be configured to generate a suitable key-pair for use during the challenge-response protocol described below. In some embodiments, the key-pair may be made available to both the host system 102 and the storage system 108 during an initial coupling or system configuration, for example by the manufacturer or some other trusted system configuration entity. The key-pair may be a public/private encryption key pair that allows only the host and storage system to correctly exchange challenge and response data for verification purposes, or the key-pair may be based on a shared secret.

Whenever the storage system 108 is powered up, for example after a power interruption or during a normal start up, the host system 102 may be authenticated using any suitable technique, including a standard user/password verification procedure. Power-up user authentication modules 204a, 204b may be configured to perform this authentication on the host system and storage system sides respectively.

Encryption module 210 may be configured to encrypt the data stored on data storage module 114 such that the data is protected or locked until the power-up user authentication is successfully accomplished, after which the data may be unlocked or made readily available to the host system 102 for normal runtime operations.

Read/write command processing module 206a may be configured to generate read and write requests on the host system 102 for transmission to the storage system 108 in order to read or write data to/from the data storage module 114. Likewise, read/write command processing module 206b may be configured to handle these requests on the storage system side.

Because a cable swap attack involves removal of the data cable from the storage system while the storage system remains powered, a link error or communications reset will be generated and can be detected by both the host system 102 and the storage system 108, for example by modules 110a and 110b respectively.
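Before the protocol is described in detail below, the following minimal sketch shows one plausible instantiation of the key-pair and the challenge-response exchange, assuming the shared-secret option mentioned above and HMAC-SHA256 as the primitive; the disclosure does not mandate any particular cryptographic primitive, and the function names are illustrative rather than part of modules 202, 112a or 112b:

```python
# Hypothetical shared-secret challenge-response exchange between the storage
# system (verifier) and the host system (prover).
import hashlib
import hmac
import secrets

def provision_key() -> bytes:
    # Shared secret installed on both host and storage system at provisioning.
    return secrets.token_bytes(32)

def storage_generate_challenge() -> bytes:
    # Fresh random challenge generated after a link error is detected.
    return secrets.token_bytes(16)

def host_compute_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def storage_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)  # constant-time comparison

key = provision_key()
challenge = storage_generate_challenge()
response = host_compute_response(key, challenge)
assert storage_verify(key, challenge, response)  # read/writes may now resume
```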
When a link error is detected, the host system 102 and storage system 108 may operate collaboratively by executing a challenge-response protocol to verify or authenticate the host. Challenge-response protocol handling module 112a may be configured to execute portions of this protocol on the host system side and challenge-response protocol handling module 112b may be configured to execute portions of this protocol on the storage system side. Until verification of the host is accomplished through execution of this protocol, the storage system may be configured to fail all read and write attempts made by the host system. For example, read/write command processing module 206b may generate these failure conditions.

In some embodiments, execution of the challenge-response protocol may proceed as follows. The host system 102, after detecting a link error, may issue a request to storage system 108 to initiate the challenge-response protocol. In response to that request, the storage system 108 may generate a new challenge, for example a random challenge based on the key-pair provided during provisioning of the system, and transmit that challenge to the host system. The host system 102 may then generate a correct response to the challenge, also based on the key-pair, and transmit this response to the storage system 108. The storage system may then verify that the response is correct and, if so, resume processing of read/write operations. If the response is not correct, the storage system (e.g., read/write command processing module 206b) may continue to generate failures on all read/write operations and wait for any subsequent challenge-response protocol initiation requests from the host system 102.

Figure 3 illustrates a flowchart of operations 300 of one example embodiment consistent with the present disclosure. The operations provide a method for a host system, for example host system 102, to prevent cable swap security attacks on a storage device. Operations 302 and 304 may be part of a provisioning operation which may be performed during an initial coupling or system configuration, for example by the manufacturer or some other trusted system configuration entity. At operation 302, a challenge-response verification key-pair is created. At operation 304, the key-pair is provided to the storage system to enable the challenge-response verification feature. At operation 306, a user authentication is performed to unlock the storage system, for example at power up. The user authentication may include transmitting a user identification/password to the storage system.

Operations 308 through 314 may be performed during run-time (e.g., after power up). At operation 308, storage read/write command processing may be performed to format, translate and/or transmit data access requests, from a user of the host system, to the storage system. At operation 310, a link error detection check is performed. The method for detection of a link error, indicating for example the removal or disconnection of the data cable, may depend on the signaling protocol associated with the data cable and/or storage device. In the case of a SATA connection, for example, the link error may be associated with any of the following signals: COMRESET, COMINIT and/or COMWAKE, any of which may also generate a link-down interrupt on the host system.
In the case of an SAS connection, the link error may be associated with any of the following signals: COMSAS, COMRESET, COMINIT and/or COMWAKE, any of which may also generate a link-down interrupt on the host system. In the case of an NVMe connection, the link error may be associated with a PCIe reset and a link-down interrupt on the host system. It will be appreciated that other types of cabling and signaling protocols may be used along with any suitable link error detection mechanism.

If a link error has not been detected, then storage read/write command processing may continue. If a link error has been detected, however, then the host system may initiate a challenge-response protocol at operation 312 by, for example, requesting a challenge from the storage system. At operation 314, the host system may respond to a challenge received from the storage system by providing a response, based on the verification key-pair, to the storage system. Read/write command processing may then proceed at operation 308. If the verification was not successful, for any reason, the read/write requests will fail and a new attempt may be made to initiate the challenge-response protocol.

Figure 4 illustrates a flowchart of operations 400 of another example embodiment consistent with the present disclosure. The operations provide a method for a storage system, for example storage system 108, to prevent cable swap security attacks. At operation 402, a user authentication is performed to unlock the storage system, for example at power up. The user authentication may include receiving a user identification/password from the host system. Unlocking the storage system may include decryption of the stored data once the user identification is authenticated.

Operations 404 through 418 may then be performed during run-time (e.g., after power up). At operation 404, storage read/write commands received from the host system may be processed and the corresponding reads and writes to/from the data storage module may be performed. At operation 406, a link error detection check is performed. If a link error has not been detected, then storage read/write command processing continues. If a link error has been detected, then at operation 408, the storage system will cause subsequent read/write operations to fail.

At operation 410, in response to receiving a challenge-response protocol initiation request from the host system, a verification challenge is provided to the host system. In some embodiments, the storage system may initiate the protocol, after detection of a link error, without waiting for an initiation request. At operation 412, the response is received from the host system and verified. If the verification passes, then at operation 414, read/write operations will be permitted. If the verification fails, then at operation 416, read/write operations will continue to fail and, at operation 418, the storage system will wait for the host system to initiate or reinitiate the challenge-response protocol.

Figure 5 illustrates a flowchart of operations 500 of another example embodiment consistent with the present disclosure. The operations provide a method for prevention of cable swap security attacks on a storage device. At operation 510, a challenge-response verification key-pair is generated by a host system coupled to the storage device. At operation 520, the key-pair is provided to the storage device to enable the challenge-response verification. At operation 530, a link error is detected between the host system and the storage device.
Figure 5 illustrates a flowchart of operations 500 of another example embodiment consistent with the present disclosure. The operations provide a method for prevention of cable swap security attacks on a storage device. At operation 510, a challenge-response verification key-pair is generated by a host system coupled to the storage device. At operation 520, the key-pair is provided to the storage device to enable the challenge-response verification. At operation 530, a link error is detected between the host system and the storage device. At operation 540, in response to the link-error detection, the host system initiates a verification challenge from the storage device. At operation 550, a response to the verification challenge is provided. The response is based on the key-pair.

Figure 6 illustrates a flowchart of operations 600 of another example embodiment consistent with the present disclosure. The operations provide a method for prevention of cable swap security attacks on a storage device. At operation 610, a link error between the storage device and a host system is detected. At operation 620, a read/write failure mode is entered in response to the detection. At operation 630, a verification challenge initiation is received from the host system. At operation 640, a verification challenge is generated in response to the receiving. At operation 650, the verification challenge is transmitted to the host system.

Figure 7 illustrates a system diagram 700 of one example embodiment consistent with the present disclosure. The system 700 may be a mobile platform 710 or computing device such as, for example, a smart phone, smart tablet, personal digital assistant (PDA), mobile Internet device (MID), convertible tablet, notebook or laptop computer, or any other suitable device. It will be appreciated, however, that embodiments of the system described herein are not limited to mobile platforms, and in some embodiments, the system 700 may be a workstation or desktop computer. The device may generally present various interfaces to a user via a display element 760 such as, for example, a touch screen, liquid crystal display (LCD) or any other suitable display type.

The system 700 is shown to include a host system 102 that may further include any number of processors 720 and memory 730. In some embodiments, the processors 720 may be implemented as any number of processor cores. The processor (or processor cores) may be any type of processor, such as, for example, a microprocessor, an embedded processor, a digital signal processor (DSP), a graphics processing unit (GPU), a network processor, a field programmable gate array or other device configured to execute code. The processors may be multithreaded cores in that they may include more than one hardware thread context (or "logical processor") per core. The memory 730 may be coupled to the processors. The memory 730 may be any of a wide variety of memories (including various layers of memory hierarchy and/or memory caches) as are known or otherwise available to those of skill in the art. It will be appreciated that the processors and memory may be configured to store, host and/or execute one or more user applications or other software modules. These applications may include, but not be limited to, for example, any type of computation, communication, data management, data storage and/or user interface task. In some embodiments, these applications may employ or interact with any other components of the mobile platform 710.

System 700 is also shown to include network interface module 740 which may include wireless communication capabilities, such as, for example, cellular communications, Wireless Fidelity (WiFi), Bluetooth®, and/or Near Field Communication (NFC).
The wireless communications may conform to or otherwise be compatible with any existing or yet to be developed communication standards including past, current and future versions of Bluetooth®, WiFi and mobile phone communication standards.

System 700 is also shown to include an input/output (IO) system or controller 750 which may be configured to enable or manage data communication between processor 720 and other elements of system 700 or other elements (not shown) external to system 700.

System 700 is also shown to include a secure storage system 108, for example an HDD or SSD, coupled to the host system 102 and configured to prevent cable swap security attacks as described previously.

It will be appreciated that in some embodiments, the various components of the system 700 may be combined in a system-on-a-chip (SoC) architecture. In some embodiments, the components may be hardware components, firmware components, software components or any suitable combination of hardware, firmware or software.

Embodiments of the methods described herein may be implemented in a system that includes one or more storage mediums having stored thereon, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a system CPU (e.g., core processor) and/or programmable circuitry. Thus, it is intended that operations according to the methods described herein may be distributed across a plurality of physical devices, such as, for example, processing structures at several different physical locations. Also, it is intended that the method operations may be performed individually or in a subcombination, as would be understood by one skilled in the art. Thus, not all of the operations of each of the flow charts need to be performed, and the present disclosure expressly intends that all subcombinations of such operations are enabled as would be understood by one of ordinary skill in the art.

The storage medium may include any type of tangible medium, for example, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), digital versatile disks (DVDs) and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions.

"Circuitry", as used in any embodiment herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. An app may be embodied as code or instructions which may be executed on programmable circuitry such as a host processor or other programmable circuitry. A module, as used in any embodiment herein, may be embodied as circuitry. The circuitry may be embodied as an integrated circuit, such as an integrated circuit chip.

Thus, the present disclosure provides systems, devices, methods and computer readable media for prevention of cable swap security attacks on storage devices. The following examples pertain to further embodiments.

According to Example 1 there is provided a host system for securing a storage device.
The host system may include a provisioning module to generate a challenge-response verification key-pair and further to provide the key-pair to the storage device to enable the challenge-response verification. The host system of this example may also include a link error detection module to detect a link error between the host system and the storage device. The host system of this example may further include a challenge-response protocol module to initiate, in response to the link-error detection, a verification challenge from the storage device and further to provide a response to the verification challenge based on the key-pair.

Example 2 may include the subject matter of Example 1, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 3 may include the subject matter of any of Examples 1 and 2, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 4 may include the subject matter of any of Examples 1-3, and the detected link error occurs during a standby-connected mode of the storage device.

Example 5 may include the subject matter of any of Examples 1-4, further including a power-up user authentication module to provide an authentication password to the storage device to unlock the storage device.

Example 6 may include the subject matter of any of Examples 1-5, and the storage device is a hard disk drive (HDD) or a solid state drive (SSD).

According to Example 7 there is provided a storage device. The storage device may include a data storage module to store data for access by a host system coupled to the storage device. The storage device of this example may also include a link error detection module to detect a link error between the storage device and the host system and further, in response to the detection, to cause the storage device to enter a read/write failure mode.
The storage device of this example may further include a challenge-response protocol module to, in response to a verification challenge initiation received from the host system, generate a verification challenge and transmit the verification challenge to the host system.

Example 8 may include the subject matter of Example 7, and the challenge-response protocol module is further to verify a challenge-response received from the host system.

Example 9 may include the subject matter of Examples 7 and 8, and the challenge-response protocol module is further to cause the storage device to exit the read/write failure mode if the verification is successful.

Example 10 may include the subject matter of Examples 7-9, and the challenge-response protocol module is further to wait for a second verification challenge initiation received from the host system if the verification is unsuccessful.

Example 11 may include the subject matter of Examples 7-10, and the read/write failure mode is associated with a denial of access to the data storage module by the host system.

Example 12 may include the subject matter of Examples 7-11, further including a power-up user authentication module to verify an authentication password received from the host system and further to unlock the data storage module in response to success of the verification.

Example 13 may include the subject matter of Examples 7-12, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 14 may include the subject matter of Examples 7-13, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 15 may include the subject matter of Examples 7-14, and the detected link error occurs during a standby-connected mode of the storage device.

Example 16 may include the subject matter of Examples 7-15, further including an encryption module to lock and unlock the data storage module.

Example 17 may include the subject matter of Examples 7-16, and the storage device is a hard disk drive (HDD) or a solid state drive (SSD).

According to Example 18 there is provided at least one computer-readable storage medium having instructions stored thereon which when executed by a processor result in the following operations for securing a storage device.
The operations may include generating a challenge-response verification key-pair; providing the key-pair to the storage device to enable the challenge-response verification; detecting a link error between a host system and the storage device; initiating, by the host system, in response to the link-error detection, a verification challenge from the storage device; and providing a response to the verification challenge based on the key-pair.

Example 19 may include the subject matter of Example 18, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 20 may include the subject matter of Examples 18 and 19, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 21 may include the subject matter of Examples 18-20, and the detected link error occurs during a standby-connected mode of the storage device.

Example 22 may include the subject matter of Examples 18-21, further including the operation of providing an authentication password to the storage device to unlock the storage device after a power-up of the storage device.

According to Example 23 there is provided at least one computer-readable storage medium having instructions stored thereon which when executed by a processor result in the following operations for securing a storage device. The operations may include detecting a link error between the storage device and a host system; entering a read/write failure mode in response to the detection; receiving a verification challenge initiation from the host system; generating a verification challenge in response to the receiving; and transmitting the verification challenge to the host system.

Example 24 may include the subject matter of Example 23, further including the operation of verifying a challenge-response received from the host system.
Example 25 may include the subject matter of Examples 23 and 24, further including the operation of exiting the read/write failure mode if the verification is successful.

Example 26 may include the subject matter of Examples 23-25, further including the operation of waiting for a second verification challenge initiation from the host system if the verification is unsuccessful.

Example 27 may include the subject matter of Examples 23-26, and the read/write failure mode is associated with a denial of access of the host system to data stored on the storage device.

Example 28 may include the subject matter of Examples 23-27, further including the operations of verifying an authentication password received from the host system and unlocking data stored on the storage device in response to success of the verification.

Example 29 may include the subject matter of Examples 23-28, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 30 may include the subject matter of Examples 23-29, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 31 may include the subject matter of Examples 23-30, and the detected link error occurs during a standby-connected mode of the storage device.

Example 32 may include the subject matter of Examples 23-31, further including the operations of encrypting the data stored on the storage device to lock the data and decrypting the data to unlock the data.

According to Example 33 there is provided a method for securing a storage device. The method may include generating a challenge-response verification key-pair; providing the key-pair to the storage device to enable the challenge-response verification; detecting a link error between a host system and the storage device; initiating, by the host system, in response to the link-error detection, a verification challenge from the storage device; and providing a response to the verification challenge based on the key-pair.

Example 34 may include the subject matter of Example 33, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 35 may include the subject matter of Examples 33 and 34, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 36 may include the subject matter of Examples 33-35, and the detected link error occurs during a standby-connected mode of the storage device.

Example 37 may include the subject matter of Examples 33-36, further including providing an authentication password to the storage device to unlock the storage device after a power-up of the storage device.

According to Example 38 there is provided a method for securing a storage device.
The method may include detecting a link error between the storage device and a host system; entering a read/write failure mode in response to the detection; receiving a verification challenge initiation from the host system; generating a verification challenge in response to the receiving; and transmitting the verification challenge to the host system.

Example 39 may include the subject matter of Example 38, further including verifying a challenge-response received from the host system.

Example 40 may include the subject matter of Examples 38 and 39, further including exiting the read/write failure mode if the verification is successful.

Example 41 may include the subject matter of Examples 38-40, further including waiting for a second verification challenge initiation from the host system if the verification is unsuccessful.

Example 42 may include the subject matter of Examples 38-41, and the read/write failure mode is associated with a denial of access of the host system to data stored on the storage device.

Example 43 may include the subject matter of Examples 38-42, further including verifying an authentication password received from the host system and unlocking data stored on the storage device in response to success of the verification.

Example 44 may include the subject matter of Examples 38-43, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 45 may include the subject matter of Examples 38-44, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 46 may include the subject matter of Examples 38-45, and the detected link error occurs during a standby-connected mode of the storage device.

Example 47 may include the subject matter of Examples 38-46, further including encrypting the data stored on the storage device to lock the data and decrypting the data to unlock the data.

According to Example 48 there is provided a system for securing a storage device. The system may include means for generating a challenge-response verification key-pair; means for providing the key-pair to the storage device to enable the challenge-response verification; means for detecting a link error between a host system and the storage device; means for initiating, by the host system, in response to the link-error detection, a verification challenge from the storage device; and means for providing a response to the verification challenge based on the key-pair.

Example 49 may include the subject matter of Example 48, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 50 may include the subject matter of Examples 48 and 49, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 51 may include the subject matter of Examples 48-50, and the detected link error occurs during a standby-connected mode of the storage device.

Example 52 may include the subject matter of Examples 48-51, further including means for providing an authentication password to the storage device to unlock the storage device after a power-up of the storage device.

According to Example 53 there is provided a system for securing a storage device.
The system may include means for detecting a link error between the storage device and a host system; means for entering a read/write failure mode in response to the detection; means for receiving a verification challenge initiation from the host system; means for generating a verification challenge in response to the receiving; and means for transmitting the verification challenge to the host system.

Example 54 may include the subject matter of Example 53, further including means for verifying a challenge-response received from the host system.

Example 55 may include the subject matter of Examples 53 and 54, further including means for exiting the read/write failure mode if the verification is successful.

Example 56 may include the subject matter of Examples 53-55, further including means for waiting for a second verification challenge initiation from the host system if the verification is unsuccessful.

Example 57 may include the subject matter of Examples 53-56, and the read/write failure mode is associated with a denial of access of the host system to data stored on the storage device.

Example 58 may include the subject matter of Examples 53-57, further including means for verifying an authentication password received from the host system and means for unlocking data stored on the storage device in response to success of the verification.

Example 59 may include the subject matter of Examples 53-58, and the detected link error is associated with a communication reset of a data cable coupled between the host system and the storage device.

Example 60 may include the subject matter of Examples 53-59, and the detected link error is associated with a disconnect of a data cable coupled between the host system and the storage device.

Example 61 may include the subject matter of Examples 53-60, and the detected link error occurs during a standby-connected mode of the storage device.

Example 62 may include the subject matter of Examples 53-61, further including means for encrypting the data stored on the storage device to lock the data and decrypting the data to unlock the data.

The terms and expressions which have been employed herein are used as terms of description and not of limitation, and there is no intention, in the use of such terms and expressions, of excluding any equivalents of the features shown and described (or portions thereof), and it is recognized that various modifications are possible within the scope of the claims. Accordingly, the claims are intended to cover all such equivalents. Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.
The present disclosure is directed to systems and methods of performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry. The DMA control circuitry executes a modified instruction set architecture (ISA) that facilitates the broadcast distribution of data to a plurality of destination addresses in system memory circuitry. The broadcast instruction may include broadcast of a single data value to each destination address. The broadcast instruction may include broadcast of a data array to each destination address. The DMA control circuitry may also execute a reduction instruction that facilitates the retrieval of data from a plurality of source addresses in system memory and the performance of one or more operations using the retrieved data. Since the DMA control circuitry, rather than the processor circuitry, performs the broadcast and reduction operations, system speed and efficiency are beneficially enhanced.
1. A direct memory access (DMA) system, comprising:
DMA control circuitry coupled to memory circuitry, the DMA control circuitry to execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction;
wherein, upon execution of the data broadcast instruction, the DMA control circuitry to:
cause a data broadcast operation of a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction;
wherein, upon execution of the array broadcast instruction, the DMA control circuitry to:
cause an array broadcast operation of an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and
wherein, upon execution of the array reduction instruction, the DMA control circuitry to:
perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

2. The system of claim 1, the DMA control circuitry to further:
generate the data broadcast instruction, the data broadcast instruction having a format that includes:
a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset;
a second data field that includes information representative of a memory address location containing the first data value;
a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; and
a fourth data field that includes information indicative of the base memory address location.

3. The system of claim 2, the DMA control circuitry to further:
generate the data broadcast instruction having a format that includes:
a fifth data field that includes information representative of a memory address location containing a second data value; and
perform a first compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.

4. The system of claim 3, the DMA control circuitry to further:
perform a second compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.
5. The system of claim 1, the DMA control circuitry to further:
generate the array broadcast instruction, the array broadcast instruction having a format that includes:
a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset;
a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses;
a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses;
a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and
a fifth data field that includes information representative of the base memory address location.

6. The system of claim 1, the DMA control circuitry to further:
generate the array reduction instruction, the array reduction instruction having a format that includes:
a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset;
a second data field that includes information representative of the memory address location to receive the output value;
a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and
a fourth data field that includes information representative of the base memory address location.

7. The system of any of claims 1 through 6, wherein each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction further includes:
a 15-bit DMA type field that includes information indicative of the direct memory access type associated with the respective instruction.

8. A DMA broadcast method, comprising:
executing, by DMA control circuitry, at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction;
wherein executing the data broadcast instruction comprises:
broadcasting, by the DMA control circuitry, a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction;
wherein executing the array broadcast instruction comprises:
broadcasting, by the DMA control circuitry, an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and
wherein executing the array reduction instruction comprises:
performing, by the DMA control circuitry, one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

9. The method of claim 8, wherein broadcasting the first data value to each of the plurality of memory addresses further comprises:
generating, by the DMA control circuitry, a data broadcast instruction that includes:
a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset;
a second data field that includes information representative of a memory address location containing the first data value;
a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; and
a fourth data field that includes information indicative of the base memory address location; and
broadcasting the data broadcast instruction to each of the plurality of memory addresses.
10. The method of claim 9, wherein generating the data broadcast instruction further includes:
a fifth data field that includes information representative of a memory address location containing a second data value.

11. The method of claim 10, further comprising:
performing, by the DMA control circuitry, a compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.

12. The method of claim 10, further comprising:
performing, by the DMA control circuitry, a compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.

13. The method of claim 8, wherein broadcasting the array that includes the defined number of elements to each of the plurality of memory addresses further comprises:
generating, by the DMA control circuitry, an array broadcast instruction that includes:
a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset;
a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses;
a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses;
a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and
a fifth data field that includes information representative of the base memory address location.

14. The method of claim 8, wherein performing the one or more operations to generate the output value using respective values stored at each of the plurality of memory address locations further comprises:
generating, by the DMA control circuitry, an array reduction instruction that includes:
a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset;
a second data field that includes information representative of the memory address location to receive the output value;
a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and
a fourth data field that includes information representative of the base memory address location.

15. The method of any of claims 8 through 14, further comprising:
inserting, by the DMA control circuitry, a 15-bit DMA type field that includes information indicative of the direct memory access type in each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction.

16. At least one non-transitory storage device that includes instructions that, in response to execution by a computing device, cause the computing device to carry out the method according to any of claims 8 through 15.
TECHNICAL FIELD

The present disclosure relates to systems and methods of performing array operations in memory circuitry, more specifically using direct memory access control circuitry to perform array operations.

BACKGROUND

Many graphics workloads include situations where a single vertex must communicate data, such as an instruction, a single value, or an array of values, to at least some of its neighboring vertices. A list of such receptor vertices may be represented using a format such as compressed sparse row (CSR) format. The list of receptor vertices must be accessed prior to communicating the data to determine the memory location of each of the receptor vertices that will receive the data. The broadcast value or instruction is then communicated to each receptor vertex, at times as an atomic operation (i.e., increment/decrement, add, mul, bitop).

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of various embodiments of the claimed subject matter will become apparent as the following Detailed Description proceeds, and upon reference to the Drawings, wherein like numerals designate like parts, and in which:

FIG 1 is a block diagram of an illustrative system that includes direct memory access (DMA) control circuitry, processor circuitry, and memory circuitry, where the DMA control circuitry includes an instruction set architecture (ISA) that includes instructions capable of conditionally populating a plurality of memory addresses with data (i.e., a broadcast instruction) or collapsing data at a plurality of memory addresses to one or more values (i.e., a reduce instruction), in accordance with at least one embodiment described herein;

FIG 2 is a schematic diagram of an example DMA data broadcast instruction, in accordance with at least one embodiment described herein;

FIG 3 is a schematic diagram of an example DMA array broadcast instruction, in accordance with at least one embodiment described herein;

FIG 4 is a schematic diagram of an example DMA reduce broadcast instruction, in accordance with at least one embodiment described herein;

FIG 5 is a schematic diagram of an illustrative electronic, processor-based, device that includes processor circuitry, such as a central processing unit (CPU) or multi-chip module (MCM), and DMA control circuitry, in accordance with at least one embodiment described herein;

FIG 6 is a high-level logic flow diagram of an illustrative data broadcast method for broadcasting data such as an individual value or an array of values to a plurality of memory addresses within memory circuitry, in accordance with at least one embodiment described herein;

FIG 7 is a high-level logic flow diagram of an illustrative array reduction method that includes gathering data from a plurality of physical addresses prior to performing one or more operations using the data, in accordance with at least one embodiment described herein;

FIGs 8A and 8B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention;

FIGs 9A, 9B, 9C, and 9D are block diagrams illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention;

FIG 10 is a block diagram of a register architecture according to one embodiment of the invention;

FIG 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention;
FIG 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention;

FIGs 12A and 12B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip;

FIG 13 is a block diagram of a processor that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention;

FIGs 14, 15, 16, and 17 are block diagrams of exemplary computer architectures; and

FIG 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications and variations thereof will be apparent to those skilled in the art.

DETAILED DESCRIPTION

The Seeded Graph Matching (SGM) workload provides an example of such broadcast usage. SGM attempts to establish a correspondence between the vertices of two graphs that maximizes the agreement of the adjacency lists between the graphs, under the constraint that the correspondence will respect a user-provided correspondence (i.e., the seeds of the matching). A parallel implementation of SGM may be broken into a plurality of subroutines, such as ZAQB. The ZAQB subroutine performs an incremental update for each column of the second graph into a vector of the corresponding columns of the first graph.

The Breadth First Search (BFS) algorithm, used to test for connectivity or compute the single-source shortest path of unweighted graphs, provides another example of such broadcast usage. The BFS algorithm traverses the graph by exploring all of the nodes at the present depth prior to moving on to the nodes at a subsequent depth level. The BFS algorithm begins at a given starting node and terminates when all of the nodes reachable from the starting node have been discovered. The parent node assignment carries out the discovery of neighbor nodes, and returns a parent vector based on the provided starting node. The top-down portion of the BFS algorithm searches active nodes to determine whether the node has been previously visited by broadcasting a compare-swap instruction to the active nodes. If a node has not been visited, the node is claimed using a unique parent identifier. Once an active node is claimed, presence bytes may be broadcast to the neighboring nodes as the next level in the search.

For large graphs, a parent node may have on the order of 10^5 or more neighboring nodes to which the parent node will broadcast. Thus, algorithms such as BFS are resource intensive and tend to tie up significant core pipeline resources. In the Programmable Unified Memory Architecture (PUMA) graph processor, a single core may have four multithreaded pipelines. Multiple pipelines allow a programmer to split the elements of the broadcast among 64 threads. While the availability of multiple threads allows the process to be handled more efficiently, a significant resource burden is still imposed on the pipelines for an extended number of clock cycles. Additionally, distribution of the elements among the threads incurs overhead, and the potential for vertices to be located across a multi-node distributed global address space (DGAS) system may lead to extreme load imbalances between threads.
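To make the top-down claiming step described above concrete, the following is a minimal C sketch of the compare-swap broadcast. The UNVISITED sentinel, the parent-vector layout, and the compare_swap wrapper (shown here over a GCC-style atomic builtin) are illustrative assumptions rather than the PUMA implementation.

#include <stdbool.h>
#include <stdint.h>

#define UNVISITED UINT64_MAX /* sentinel: node not yet claimed */

/* Stand-in for the remote compare-swap primitive broadcast to neighbors. */
static bool compare_swap(uint64_t *slot, uint64_t expected, uint64_t desired) {
    return __atomic_compare_exchange_n(slot, &expected, desired, false,
                                       __ATOMIC_RELAXED, __ATOMIC_RELAXED);
}

/* Top-down BFS step: `node` tries to claim each unvisited neighbor by
 * installing itself as the neighbor's parent; visited nodes are untouched. */
static void claim_neighbors(uint64_t *parent, const uint64_t *neighbors,
                            uint64_t degree, uint64_t node) {
    for (uint64_t i = 0; i < degree; i++)
        (void)compare_swap(&parent[neighbors[i]], UNVISITED, node);
}

int main(void) {
    uint64_t parent[5]    = {UNVISITED, UNVISITED, 7, UNVISITED, UNVISITED};
    uint64_t neighbors[3] = {1, 2, 4};
    claim_neighbors(parent, neighbors, 3, 0); /* node 0 claims its frontier  */
    /* parent[1] and parent[4] become 0; parent[2] keeps its earlier parent. */
    return 0;
}

Each iteration of this loop is exactly the per-element work that, as discussed next, occupies the core pipelines.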
In both the BFS and the SGM workloads, each element of the broadcast requires the following operations:

1. A read of the pre-built neighbor index (e.g., a list of vertices in CSR format) from memory;
2. Dereferencing the value to determine the neighbor location in physical memory space; and
3. Generating and communicating a remote atomic request to the neighbor location.

The systems and methods disclosed herein beneficially enhance the instruction set available to direct memory access (DMA) control circuitry by including a number of instructions that enable the DMA control circuitry, given starting data (e.g., a single value or an array of values), a starting node address, and a memory offset value that identifies each of the neighboring nodes, to perform the broadcast autonomously. Such systems and methods beneficially reduce the traffic within processor pipelines associated with more traditional array broadcast operations such as the SGM and BFS workloads discussed above. The systems and methods disclosed herein also beneficially enhance the instruction set available to DMA control circuitry by including at least one instruction that permits the DMA control circuitry to autonomously perform an array reduction operation using data stored as array elements in each of a plurality of memory locations.

A direct memory access (DMA) system is provided. The system may include: DMA control circuitry coupled to memory circuitry, the DMA control circuitry to execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein, upon execution of the data broadcast instruction, the DMA control circuitry to: cause a data broadcast operation of a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein, upon execution of the array broadcast instruction, the DMA control circuitry to: cause an array broadcast operation of an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein, upon execution of the array reduction instruction, the DMA control circuitry to: perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.
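For contrast with the offloaded approach summarized above, the following C sketch spells out the three per-element operations enumerated earlier as a conventional, core-driven loop; the CSR array names and the remote_atomic_add wrapper are illustrative assumptions. Every iteration occupies the core pipeline, which is precisely the traffic the DMA instructions remove.

#include <stddef.h>
#include <stdint.h>

/* Stand-in for a remote atomic add; on a multi-node system this would be
 * a remote atomic request issued across the fabric. */
static void remote_atomic_add(uint64_t *addr, uint64_t v) {
    __atomic_fetch_add(addr, v, __ATOMIC_RELAXED);
}

/* Naive core-driven broadcast: for each neighbor of `vertex`, the pipeline
 * (1) reads the CSR neighbor index, (2) dereferences it to a destination
 * location, and (3) issues a remote atomic -- once per element. */
static void naive_broadcast(const uint64_t *row_ptr, const uint64_t *col_idx,
                            uint64_t *vertex_data, size_t vertex, uint64_t value) {
    for (uint64_t e = row_ptr[vertex]; e < row_ptr[vertex + 1]; e++) {
        uint64_t neighbor = col_idx[e];         /* step 1: read the index   */
        uint64_t *dst = &vertex_data[neighbor]; /* step 2: dereference      */
        remote_atomic_add(dst, value);          /* step 3: remote atomic op */
    }
}

int main(void) {
    uint64_t row_ptr[2] = {0, 3};               /* vertex 0, three neighbors */
    uint64_t col_idx[3] = {1, 2, 3};
    uint64_t vertex_data[4] = {0};
    naive_broadcast(row_ptr, col_idx, vertex_data, 0, 5);
    return 0;
}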
An electronic device is provided. The electronic device may include: processor circuitry; memory circuitry coupled to the processor circuitry; and DMA control circuitry coupled to the memory circuitry, the DMA control circuitry to execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein, upon execution of the data broadcast instruction, the DMA control circuitry to: cause a data broadcast operation of a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein, upon execution of the array broadcast instruction, the DMA control circuitry to: cause an array broadcast operation of an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein, upon execution of the array reduction instruction, the DMA control circuitry to: perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

A DMA broadcast method is provided. The method may include: executing, by DMA control circuitry, at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein executing the data broadcast instruction comprises: broadcasting, by the DMA control circuitry, a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein executing the array broadcast instruction comprises: broadcasting, by the DMA control circuitry, an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein executing the array reduction instruction comprises: performing, by the DMA control circuitry, one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

A non-transitory storage device is provided.
The non-transitory storage device includes instructions that, when executed by direct memory access (DMA) control circuitry, cause the DMA control circuitry to: execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein execution of the data broadcast instruction causes the DMA control circuitry to: broadcast a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein execution of the array broadcast instruction causes the DMA control circuitry to: broadcast an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein execution of the array reduction instruction causes the DMA control circuitry to: perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

A DMA broadcast system is provided. The system may include: means for executing at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein the means for executing the data broadcast instruction comprises: means for broadcasting a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein the means for executing the array broadcast instruction comprises: means for broadcasting an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein the means for executing the array reduction instruction comprises: means for performing one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

As used herein, the terms "about" or "approximately" when used to prefix an enumerated value should be interpreted to indicate a value that is plus or minus 15% of the enumerated value. Thus, a value that is listed as "about 100" or "approximately 100" should be understood to represent a value that could include any value or group of values between 85 (i.e., -15%) and 115 (i.e., +15%).

As used herein, the term "processor circuit" may refer to the physical circuitry included in a microprocessor or central processing unit (CPU), a virtual instantiation of a processor on physical circuitry included in a microprocessor or CPU, or combinations thereof.
The term processor circuit may refer to a single- or multi-thread processor core circuit.

FIG 1 is a block diagram of an illustrative system 100 that includes direct memory access (DMA) control circuitry 110, processor circuitry 120, and memory circuitry 130. The DMA control circuitry 110 includes an instruction set architecture (ISA) that includes instructions capable of conditionally populating a plurality of memory addresses with data (i.e., a broadcast instruction) or collapsing data at a plurality of memory addresses to one or more values (i.e., a reduce instruction), in accordance with at least one embodiment described herein. In embodiments, the DMA control circuitry 110 includes but is not limited to: data broadcast logic 110A, array broadcast logic 110B, and array reduction broadcast logic 110C. Beneficially, the DMA control circuitry 110 interprets the instruction to perform the broadcast or reduce operation as a single instruction and performs the broadcast or reduce operations in the memory circuitry 130 without involving or burdening the processor circuitry 120.

In embodiments, the DMA control circuitry 110 may execute a data broadcast instruction 140 that causes a broadcast of data representative of a single value at a defined first memory location to a plurality of memory locations, each of the plurality of memory locations at a defined offset from the first location. In embodiments, the DMA control circuitry 110 may execute an array broadcast instruction 150 that causes a broadcast of data representative of an array containing a plurality of values at a defined first memory location to a plurality of memory locations, each of the plurality of memory locations at a defined offset from the first location. In embodiments, the DMA control circuitry 110 may execute a reduce broadcast instruction 160 that causes a reduction of data stored at each of a plurality of memory locations to a single memory location.

In embodiments, the DMA control circuitry 110 interprets data representative of a list in a list-of-offsets format. In such embodiments, the DMA control circuitry 110 provides a base value (e.g., a 64-bit canonical address) and the address of the offset list as separate registers in the instruction. Such a construction permits the use of offset data in different DMA operations and various applications while minimizing or eliminating the need for data structure reorganization. In embodiments, the list stored in the memory circuitry 130 includes integers representing a count of elements. Such a construction permits applications to provide the original vertex identifiers without scaling by the size of individual elements.
In embodiments, the integers may be 4-byte or 8-byte, signed or unsigned, integers.

In embodiments, the data broadcast logic 110A performs or otherwise causes the performance of a data broadcast operation upon receipt of a data broadcast instruction 140 that includes the following fields:

a first field that includes data representative of a pointer to an array of addresses/offsets;
a second field that includes data representative of the source data to broadcast;
a third field that includes data representative of a number of physical address destinations in memory circuitry 130 to receive the data broadcast; and
a fourth field that includes data representative of a base address in the memory circuitry for base plus offset format.

In some embodiments, in addition to the above fields, the data broadcast instruction 140 may further include a fifth field that includes data representative of a compare value where the memory operation includes a compare-overwrite.

In embodiments, the array broadcast logic 110B performs or otherwise causes the performance of an array broadcast operation upon receipt of an array broadcast instruction 150 that includes:

a first field that includes data representative of a pointer to an array of addresses/offsets;
a second field that includes data representative of the base address in memory circuitry 130 for the source data to broadcast;
a third field that includes data representative of a number of physical address destinations in memory circuitry 130 to receive the array broadcast;
a fourth field that includes data representative of the number of array elements to broadcast; and
a fifth field that includes data representative of a base address in the memory circuitry for base plus offset format.

In embodiments, the array reduction logic 110C performs or otherwise causes the performance of an array reduction operation upon receipt of an array reduce instruction 160 that includes:

a first field that includes data representative of a pointer to an array of addresses/offsets;
a second field that includes data representative of a destination address in memory circuitry 130 to receive the result of the array reduction;
a third field that includes data representative of a number of source physical addresses in memory circuitry 130 to include in the array reduction; and
a fourth field that includes data representative of the base address in memory circuitry 130 for the source data used in the reduction.

The DMA control circuitry 110 may include any number and/or combination of currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of utilizing an ISA that includes data broadcast, array broadcast, and array reduce instructions as described herein. In embodiments, the processor circuitry 120 may initiate one or more array operations that the DMA control circuitry 110, using the ISA as described herein, beneficially performs as an in-memory broadcast or reduction operation, thereby freeing the processor circuitry 120 to perform other operations during the pendency of the in-memory array operation. In embodiments, the DMA control circuitry 110 may include circuitry disposed on a semiconductor die included in a system-on-chip (SoC) or on a semiconductor chiplet included in a multi-chip module (MCM). In other embodiments, memory management unit (MMU) circuitry may provide all or a portion of the DMA control circuitry 110.
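The field lists above map naturally onto fixed-format descriptors. The following C structs are one hypothetical encoding, offered only to make the three formats easy to compare side by side; the field names, widths, and ordering are assumptions, not the architected instruction layout (the 15-bit DMA type field discussed with FIGs 2-4 is shown as a bitfield).

#include <stdint.h>

typedef struct {                 /* data broadcast instruction 140          */
    uint64_t dma_type : 15;      /* DMA_Type: operation/variant selector    */
    uint64_t offset_list_ptr;    /* field 1: pointer to address/offset list */
    uint64_t src_value_addr;     /* field 2: address of the value to send   */
    uint64_t dest_count;         /* field 3: number of destinations         */
    uint64_t base_addr;          /* field 4: base for base-plus-offset form */
    uint64_t compare_value_addr; /* optional field 5: compare-overwrite     */
} dma_data_broadcast_t;

typedef struct {                 /* array broadcast instruction 150         */
    uint64_t dma_type : 15;
    uint64_t offset_list_ptr;    /* field 1: pointer to address/offset list */
    uint64_t src_array_addr;     /* field 2: base address of source array   */
    uint64_t dest_count;         /* field 3: number of destinations         */
    uint64_t elem_count;         /* field 4: elements copied per target     */
    uint64_t base_addr;          /* field 5: base for base-plus-offset form */
} dma_array_broadcast_t;

typedef struct {                 /* array reduction instruction 160         */
    uint64_t dma_type : 15;      /* also selects the reduction operation    */
    uint64_t offset_list_ptr;    /* field 1: pointer to address/offset list */
    uint64_t result_addr;        /* field 2: destination for the output     */
    uint64_t src_count;          /* field 3: number of source addresses     */
    uint64_t base_addr;          /* field 4: base address of source data    */
} dma_array_reduce_t;

/* Example descriptor: broadcast one value to 128 destinations addressed
 * as base_addr plus each offset in the list at offset_list_ptr. */
static const dma_data_broadcast_t example = {
    .dma_type = 1, .offset_list_ptr = 0x1000, .src_value_addr = 0x2000,
    .dest_count = 128, .base_addr = 0x8000, .compare_value_addr = 0,
};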
In embodiments, a system bus 170 communicatively couples the DMA control circuitry 110, the processor circuitry 120, and the memory circuitry 130.

The processor circuitry 120 may include any number and/or combination of currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of executing instructions that include but are not limited to operating system and application instructions. The processor circuitry 120 may include any of a wide variety of commercially available processors, including without limitation, an AMD® Athlon®, Duron® or Opteron® processor; an ARM® application, embedded or secure processor; an IBM® and/or Motorola® DragonBall® or PowerPC® processor; an IBM and/or Sony® Cell processor; or an Intel® Celeron®, Core (2) Duo®, Core (2) Quad®, Core i3®, Core i5®, Core i7®, Atom®, Itanium®, Pentium®, Xeon® or XScale® processor. Further, the processor circuitry 120 may include a multi-core processor (whether the multiple cores coexist on the same or separate dies), and/or a multi-processor architecture of some other variety by which multiple physically separate processors are in some way communicatively coupled.

The system memory circuitry 130 may include any number and/or combination of currently available and/or future developed electronic components, semiconductor devices, and/or logic elements capable of storing or otherwise retaining information and/or data. The system memory circuitry 130 may be based on any of a wide variety of information storage technologies, possibly including volatile technologies requiring the uninterrupted provision of electric power, and possibly including technologies entailing the use of machine-readable storage media that may be removable, or that may not be removable. Thus, the system memory circuitry 130 may include any of a wide variety of types of storage device, including without limitation, read-only memory (ROM), random-access memory (RAM), dynamic RAM (DRAM), Double-Data-Rate DRAM (DDR-DRAM), synchronous DRAM (SDRAM), static RAM (SRAM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, polymer memory (e.g., ferroelectric polymer memory), ovonic memory, phase change or ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, magnetic or optical cards, one or more individual ferromagnetic disk drives, or a plurality of storage devices organized into one or more arrays (e.g., multiple ferromagnetic disk drives organized into a Redundant Array of Independent Disks array, or RAID array). It should be noted that although the system memory circuitry 130 is depicted as a single block in FIG 1, the system memory circuitry 130 may include multiple storage devices that may be based on differing storage technologies.

FIG 2 is a schematic diagram of an example DMA data broadcast instruction 140, in accordance with at least one embodiment described herein. In the embodiment depicted in FIG 2, the DMA data broadcast instruction 140 includes a first field that contains information 210 representative of a base address 212. In addition, the first field may contain data representative of a list of one or more physical memory addresses 214A-214n. The DMA data broadcast instruction 140 further includes a second field that contains information 220 representative of the broadcast data value 222 used to populate the addresses 212 and 214A-214n in the memory circuitry 130.
The DMA data broadcast instruction 140 further includes a third field containing information 230 representative of the number or count of physical addresses in the memory circuitry 130 to store the data value 222. At the conclusion of the data broadcast instruction, the physical addresses 212 and 214A-214n each contain the broadcast data value 222.

Although not depicted in FIG 2, in embodiments, the DMA data broadcast instruction 140 may include a DMA_Type field that contains information and/or data indicative of a conditional data broadcast instruction. In such embodiments, the DMA broadcast instruction 140 may include a second additional field containing information representative of one or more defined values used by the conditional DMA data broadcast instruction 140. In embodiments, the DMA control circuitry 110 may use the one or more defined values to conditionally or selectively replace the current value in some or all of the addresses 212, 214A-214n in the memory circuitry 130. In embodiments, the DMA control circuitry 110 may compare the one or more defined values with the current data or current information stored at each of at least some of the addresses 212, 214A-214n. In some embodiments, if the current data or current information stored at each of at least some of the addresses 212, 214A-214n is the same as or matches all or a portion of the one or more defined values, the DMA control circuitry 110 replaces the current data or information at the respective address with the data value 222. In other embodiments, if the current data or current information stored at each of at least some of the addresses 212, 214A-214n differs from all or a portion of the one or more defined values, the DMA control circuitry 110 replaces the current data or information at the respective address with the data value 222.

Although not depicted in FIG 2, in embodiments, rather than including a list of addresses in the memory circuitry 130, the DMA broadcast instruction 140 may instead include a field containing information representative of a base address in the memory circuitry 130 and data representative of an offset value from the base address that, in conjunction with the data representative of the number or count of addresses in the memory circuitry 130 to store the data value 222, may be used by the DMA control circuitry 110 to determine each successive address in the memory circuitry 130 to store the data value 222.

FIG 3 is a schematic diagram of an example DMA array broadcast instruction 150, in accordance with at least one embodiment described herein. In the embodiment depicted in FIG 3, the DMA array broadcast instruction 150 includes a first field containing information 210 representative of a base address 212. In addition, the first field may contain data representative of a list of one or more physical memory addresses 214A-214n. The DMA array broadcast instruction 150 includes a second field 310 containing information 312 representative of a base address in the memory circuitry of the data array and information 314 representative of the size of the data array. The DMA array broadcast instruction 150 includes a third field that contains information 230 representative of the number or count of addresses in the memory circuitry 130 to store the data array 310.
The DMA array broadcast instruction 150 includes a fourth field 320 that contains information 322 representative of the number or count of items included in the data array 310 to copy to each of the destination addresses in the memory circuitry 130. At the conclusion of the DMA array broadcast instruction, the addresses 212 and 214A-214n each contain the broadcast array data value 310.

In embodiments, the DMA array broadcast instruction may additionally include a DMA_Type field that contains information and/or data indicative of an element-wise operation requested at the destination addresses 212 and 214A-214n in memory circuitry 130.

Although not depicted in FIG 3, in embodiments, rather than including a list of addresses in the memory circuitry 130, the DMA array broadcast instruction 150 may instead include information and/or data in the DMA broadcast modifier field indicative of a base address plus address offset format DMA array broadcast instruction 150. In such instances, the first field in the DMA array broadcast instruction may include information and/or data representative of a base address in the memory circuitry 130 and information and/or data representative of an offset value from the base address that, in conjunction with the data 230 representative of the number or count of addresses in the memory circuitry 130 to store the array data value 310, may be used by the DMA control circuitry 110 to determine each successive address in the memory circuitry 130 to store the array data value 310.

FIG 4 is a schematic diagram of an example DMA array reduce instruction 160, in accordance with at least one embodiment described herein. In the embodiment depicted in FIG 4, the DMA array reduce instruction 160 includes a first field containing information 210 representative of a base address 212 containing data used in the reduction operation. In addition, the first field may contain data representative of a list of one or more physical memory addresses 214A-214n containing data used in the reduction operation. The DMA array reduce instruction 160 includes a second field 410 containing information 412 representative of an address in the memory circuitry 130 to receive the resultant data from the reduce operation. The DMA array reduce instruction 160 includes a third field that contains information 230 representative of the number or count of addresses in the memory circuitry 130 to provide data to the reduce operation. The DMA array reduction instruction 160 includes a DMA_Type field that contains information and/or data indicative of the type of operation to perform using the data retrieved from addresses 212 and 214A-214n. At the conclusion of the DMA array reduce instruction, the address 412 contains data representative of the result generated by the DMA array reduction operation.

Although not depicted in FIG 4, in embodiments, rather than including a list of addresses in the memory circuitry 130, the DMA array reduction instruction 160 may instead include information and/or data indicative of a base plus offset memory address format to provide the addresses of the data included in the array reduction operation.
In such instances, the first field in the DMA array reduction instruction 160 may include information and/or data representative of a base address in the memory circuitry 130 and information and/or data representative of an offset value from the base address used by the DMA control circuitry 110 to determine each successive address in the memory circuitry 130 from which to retrieve the data used in the array reduction operation.

Each of the DMA broadcast instruction 140, the DMA array broadcast instruction 150, and the DMA array reduction instruction 160 may include a DMA_Type field that contains information and/or data associated with the operation of the broadcast or reduction operation being performed. Although the DMA_Type field may have any length, in at least some embodiments, the DMA_Type field may include a 15-bit field. In at least one embodiment, the DMA_Type field may include the following information and/or data:

Table 1. DMA_Type Field Components

Size      Function
1 Bit     Base + Offset Address Format Indicator
1 Bit     Pack/Unpack data
1 Bit     Offset pointer size (32 bit/64 bit)
1 Bit     Offset pointer type (signed/unsigned)
1 Bit     Complement incoming value from source
1 Bit     Complement existing value
4 Bits    Reduction operation encoding
2 Bits    Operand type (integer, floating, unsigned)
3 Bits    Operation to perform at destination address

FIG 5 is a schematic diagram of an illustrative electronic, processor-based device 500 that includes processor circuitry 120, such as a central processing unit (CPU) or multi-chip module (MCM), and DMA control circuitry 110, in accordance with at least one embodiment described herein. The processor-based device 500 may additionally include graphical processing unit (GPU) circuitry 512. The processor-based device 500 may additionally include one or more of the following: a wireless input/output (I/O) interface 520, a wired I/O interface 530, system memory 540, power management circuitry 550, a non-transitory storage device 560, and a network interface 570 used to communicatively couple the processor-based device 500 to one or more external devices (e.g., a cloud-based server) 590 via one or more networks 580. The following discussion provides a brief, general description of the components forming the illustrative processor-based device 500. Example, non-limiting processor-based devices 500 may include, but are not limited to: autonomous motor vehicles, semi-autonomous motor vehicles, manually controlled motor vehicles, smartphones, wearable computers, portable computing devices, handheld computing devices, desktop computing devices, blade server devices, workstations, and similar.

Those skilled in the relevant art will appreciate that the illustrated embodiments as well as other embodiments may be practiced with other processor-based device configurations, including portable electronic or handheld electronic devices, for instance smartphones, portable computers, wearable computers, consumer electronics, personal computers ("PCs"), network PCs, minicomputers, server blades, mainframe computers, and the like.
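Referring back to Table 1, the 15-bit DMA_Type field may be illustrated with the following C sketch. The field widths come from Table 1, but the bit positions, macro names, and encoding values below are assumptions made solely for illustration:

    #include <stdint.h>

    typedef uint16_t dma_type_t;   /* 15 bits used, carried in a 16-bit word */

    #define DMA_TYPE_BASE_OFFSET   (1u << 0)   /* 1 bit: base + offset address format indicator */
    #define DMA_TYPE_PACK_UNPACK   (1u << 1)   /* 1 bit: pack/unpack data */
    #define DMA_TYPE_PTR_SIZE_64   (1u << 2)   /* 1 bit: offset pointer size (32 bit/64 bit) */
    #define DMA_TYPE_PTR_SIGNED    (1u << 3)   /* 1 bit: offset pointer type (signed/unsigned) */
    #define DMA_TYPE_COMPL_SRC     (1u << 4)   /* 1 bit: complement incoming value from source */
    #define DMA_TYPE_COMPL_DEST    (1u << 5)   /* 1 bit: complement existing value */
    #define DMA_TYPE_REDUCE_OP(x)  (((x) & 0xFu) << 6)   /* 4 bits: reduction operation encoding */
    #define DMA_TYPE_OPERAND(x)    (((x) & 0x3u) << 10)  /* 2 bits: operand type (integer, floating, unsigned) */
    #define DMA_TYPE_DEST_OP(x)    (((x) & 0x7u) << 12)  /* 3 bits: operation to perform at destination address */

    /* Example: a base + offset, signed-pointer reduction; the 0x1 and 0x0
     * operation/operand encodings here are assumed, not defined by Table 1. */
    static const dma_type_t example_dma_type =
        (dma_type_t)(DMA_TYPE_BASE_OFFSET | DMA_TYPE_PTR_SIGNED |
                     DMA_TYPE_REDUCE_OP(0x1u) | DMA_TYPE_OPERAND(0x0u));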
The processor circuitry 120 may include any number of hardwired or configurable circuits, some or all of which may include programmable and/or configurable combinations of electronic components, semiconductor devices, and/or logic elements that are disposed partially or wholly in a PC, server, or other computing system capable of executing machine-readable instructions.

The processor-based device 500 includes a bus or similar communications link 516 that communicably couples and facilitates the exchange of information and/or data between various system components including the processor circuitry 120, the graphics processor circuitry 512, one or more wireless I/O interfaces 520, one or more wired I/O interfaces 530, the system memory 540, one or more storage devices 560, and/or the network interface circuitry 570. The processor-based device 500 may be referred to in the singular herein, but this is not intended to limit the embodiments to a single processor-based device 500, since in certain embodiments, there may be more than one processor-based device 500 that incorporates, includes, or contains any number of communicably coupled, collocated, or remote networked circuits or devices.

The processor circuitry 120 may include any number, type, or combination of currently available or future developed devices capable of executing machine-readable instruction sets. The processor circuitry 120 may include but is not limited to any current or future developed single- or multi-core processor or microprocessor, such as: one or more systems on a chip (SOCs); central processing units (CPUs); digital signal processors (DSPs); graphics processing units (GPUs); application-specific integrated circuits (ASICs); programmable logic units; field programmable gate arrays (FPGAs); and the like. Unless described otherwise, the construction and operation of the various blocks shown in FIG 5 are of conventional design. Consequently, such blocks need not be described in further detail herein, as they will be understood by those skilled in the relevant art. The bus 516 that interconnects at least some of the components of the processor-based device 500 may employ any currently available or future developed serial or parallel bus structures or architectures.

The system memory 540 may include read-only memory ("ROM") 542 and random access memory ("RAM") 546. A portion of the ROM 542 may be used to store or otherwise retain a basic input/output system ("BIOS") 544. The BIOS 544 provides basic functionality to the processor-based device 500, for example by causing the processor circuitry 120 to load and/or execute one or more machine-readable instruction sets 514. In embodiments, at least some of the one or more machine-readable instruction sets 514 cause at least a portion of the processor circuitry 120 to provide, create, produce, transition, and/or function as a dedicated, specific, and particular machine.

The processor-based device 500 may include at least one wireless input/output (I/O) interface 520. The at least one wireless I/O interface 520 may be communicably coupled to one or more physical output devices 522 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wireless I/O interface 520 may communicably couple to one or more physical input devices 524 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The at least one wireless I/O interface 520 may include any currently available or future developed wireless I/O interface.
Example wireless I/O interfaces include, but are not limited to: BLUETOOTH®, near field communication (NFC), and similar.

The processor-based device 500 may include one or more wired input/output (I/O) interfaces 530. The at least one wired I/O interface 530 may be communicably coupled to one or more physical output devices 522 (tactile devices, video displays, audio output devices, hardcopy output devices, etc.). The at least one wired I/O interface 530 may be communicably coupled to one or more physical input devices 524 (pointing devices, touchscreens, keyboards, tactile devices, etc.). The wired I/O interface 530 may include any currently available or future developed I/O interface. Example wired I/O interfaces include but are not limited to: universal serial bus (USB), IEEE 1394 ("FireWire"), and similar.

The processor-based device 500 may include one or more communicably coupled, non-transitory, data storage devices 560. The data storage devices 560 may include one or more hard disk drives (HDDs) and/or one or more solid-state storage devices (SSDs). The one or more data storage devices 560 may include any current or future developed storage appliances, network storage devices, and/or systems. Non-limiting examples of such data storage devices 560 may include, but are not limited to, any current or future developed non-transitory storage appliances or devices, such as one or more magnetic storage devices, one or more optical storage devices, one or more electro-resistive storage devices, one or more molecular storage devices, one or more quantum storage devices, or various combinations thereof. In some implementations, the one or more data storage devices 560 may include one or more removable storage devices, such as one or more flash drives, flash memories, flash storage units, or similar appliances or devices capable of communicable coupling to and decoupling from the processor-based device 500.

The one or more data storage devices 560 may include interfaces or controllers (not shown) communicatively coupling the respective storage device or system to the bus 516. The one or more data storage devices 560 may store, retain, or otherwise contain machine-readable instruction sets, data structures, program modules, data stores, databases, logical structures, and/or other data useful to the processor circuitry 120 and/or graphics processor circuitry 512 and/or one or more applications executed on or by the processor circuitry 120 and/or graphics processor circuitry 512. In some instances, one or more data storage devices 560 may be communicably coupled to the processor circuitry 120, for example via the bus 516 or via one or more wired communications interfaces 530 (e.g., Universal Serial Bus or USB); one or more wireless communications interfaces 520 (e.g., Bluetooth®, Near Field Communication or NFC); and/or one or more network interfaces 570 (IEEE 802.3 or Ethernet, IEEE 802.11 or WiFi®, etc.).

Machine-readable instruction sets 514 and other programs, applications, logic sets, and/or modules may be stored in whole or in part in the system memory 540. Such instruction sets 514 may be transferred, in whole or in part, from the one or more data storage devices 560.
The instruction sets 514 may be loaded, stored, or otherwise retained in system memory 540, in whole or in part, during execution by the processor circuitry 120 and/or graphics processor circuitry 512.

The processor-based device 500 may include power management circuitry 550 that controls one or more operational aspects of the energy storage device 552. In embodiments, the energy storage device 552 may include one or more primary (i.e., non-rechargeable) or secondary (i.e., rechargeable) batteries or similar energy storage devices. In embodiments, the energy storage device 552 may include one or more supercapacitors or ultracapacitors. In embodiments, the power management circuitry 550 may alter, adjust, or control the flow of energy from an external power source 554 to the energy storage device 552 and/or to the processor-based device 500. The power source 554 may include, but is not limited to, a solar power system, a commercial electric grid, a portable generator, an external energy storage device, or any combination thereof.

For convenience, the processor circuitry 120, the GPU circuitry 512, the wireless I/O interface 520, the wired I/O interface 530, the system memory 540, the power management circuitry 550, the storage device 560, and the network interface 570 are illustrated as communicatively coupled to each other via the bus 516, thereby providing connectivity between the above-described components. In alternative embodiments, the above-described components may be communicatively coupled in a different manner than illustrated in FIG 5. For example, one or more of the above-described components may be directly coupled to other components, or may be coupled to each other via one or more intermediary components (not shown). In another example, one or more of the above-described components may be integrated into the processor circuitry 120 and/or the graphics processor circuitry 512. In some embodiments, all or a portion of the bus 516 may be omitted and the components may be coupled directly to each other using suitable wired or wireless connections.

FIG 6 is a high-level logic flow diagram of an illustrative data broadcast method 600 for broadcasting data such as an individual value or an array of values to a plurality of memory addresses within memory circuitry 130, in accordance with at least one embodiment described herein. In embodiments, the ISA executed by the DMA control circuitry 110 includes a broadcast instruction that enables the DMA control circuitry 110 to communicate a value, such as a single data value or an array of data values, from a source address in memory circuitry 130 to each of a plurality of addresses in memory circuitry 130 with minimal impact on processor circuitry 120. The method 600 commences at 602.

At 604, the DMA control circuitry 110 obtains one or more destination addresses in memory circuitry 130 to copy or otherwise communicate data. In embodiments, the plurality of destination addresses includes a list that contains a base address 212 and a plurality of other addresses 214A-214n in the memory circuitry 130. In other embodiments, the plurality of destination addresses includes a base address 212 and an offset value used to obtain each remaining destination address in memory circuitry 130 (e.g., base address, base address + (1∗offset value), base address + (2∗offset value)...
base address + (n∗offset value)).

In some embodiments, a pointer directs the DMA control circuitry 110 to an address in memory circuitry 130 that stores or otherwise retains a single data value for broadcast to the plurality of destination addresses 212 and 214A-214n. In other embodiments, a pointer directs the DMA control circuitry 110 to an address in memory circuitry 130 that stores or otherwise retains an array of data values for broadcast to the plurality of destination addresses 212 and 214A-214n.

At 606, the DMA control circuitry 110 determines whether to execute a data compare/overwrite instruction for each of the plurality of destination addresses 212 and 214A-214n. If the DMA control circuitry 110 does not execute a data compare/overwrite instruction, the method 600 continues at 608; otherwise the method 600 continues at 610.

At 608, the DMA control circuitry 110 overwrites or otherwise replaces the data at the respective destination address and the method 600 continues at 616.

At 610, the DMA control circuitry 110 executes the compare/overwrite instruction, in which the DMA control circuitry 110 compares the current value at the respective destination address with the one or more defined values. Depending on the outcome of the comparison operation, the DMA control circuitry 110 autonomously and selectively either permits the current value at the respective destination address to remain unchanged or replaces the current value at the respective destination address with the value from the source address.

In embodiments, if the current value at the respective destination address matches or is the same as at least a portion of the one or more defined values, the DMA control circuitry 110 replaces the current value at the respective destination address with the broadcast value. In other embodiments, if the current value at the respective destination address differs from at least a portion of the one or more defined values, the DMA control circuitry 110 replaces the current value at the respective destination address with the broadcast value.

At 612, if the comparison performed at 610 indicates the DMA control circuitry 110 should replace the current value at the respective destination address with the broadcast data value, the method 600 continues at 608. If the comparison performed at 610 indicates the DMA control circuitry 110 should NOT replace the current value at the respective destination address with the broadcast data value, the method 600 continues at 614.

At 614, responsive to a determination that the current data at the respective address should NOT be overwritten or otherwise replaced, the DMA control circuitry 110 aborts the replacement of the current data at the respective destination address.

At 616, the DMA control circuitry 110 determines whether additional destination addresses exist to receive the broadcast data. Responsive to a determination that additional destination addresses should receive the broadcast data, the method 600 returns to 604. Responsive to a determination that no additional destination addresses should receive the broadcast data value, the method 600 concludes at 618.

FIG 7 is a high-level logic flow diagram of an illustrative array reduction method 700 that includes gathering data from a plurality of physical addresses prior to performing one or more operations using the data, in accordance with at least one embodiment described herein.
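Before turning to the details of method 700, the data broadcast flow of method 600 may be illustrated with the following C sketch. The sketch is illustrative only: memory circuitry 130 is modeled as a flat array of 64-bit words indexed by address, all function and variable names are hypothetical, and of the two compare/overwrite policies described above only the overwrite-on-match policy is shown.

    #include <stdbool.h>
    #include <stdint.h>

    #define MEM_WORDS 1024u
    static uint64_t mem[MEM_WORDS];   /* stand-in for memory circuitry 130 */

    /* Broadcast 'value' to 'count' destinations generated as base + n*offset (604).
     * When 'do_compare' is set (606), this sketch overwrites only where the current
     * value matches 'defined_value' (610/612); the complementary policy described
     * above (overwrite on mismatch) simply inverts the test. */
    void dma_broadcast(uint64_t base, uint64_t offset, uint32_t count,
                       uint64_t value, bool do_compare, uint64_t defined_value)
    {
        for (uint32_t n = 0; n < count; n++) {            /* 616: more destinations? */
            uint64_t addr = base + (uint64_t)n * offset;  /* 604: next destination   */
            if (addr >= MEM_WORDS)
                break;                                    /* stay inside the model   */
            if (!do_compare || mem[addr] == defined_value)
                mem[addr] = value;                        /* 608: overwrite          */
            /* else 614: abort the replacement at this address */
        }
    }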
In embodiments, the DMA control circuitry 110 may perform one or more operations (e.g., one or more mathematical operations) to combine or reduce an array containing a plurality of data values (e.g., single data values or an array of data values) to a result that contains fewer data values. The method 700 commences at 702.

At 704, the DMA control circuitry 110 obtains information and/or data representative of each of a plurality of source data addresses in memory circuitry 130 that store or otherwise contain the source data values.

At 706, the DMA control circuitry 110 obtains information and/or data representative of one or more destination addresses in memory circuitry 130 to receive the resultant output of the one or more operations performed on the input data collected from the addresses identified at 704.

At 708, the DMA control circuitry 110 obtains the source data values stored or otherwise retained at each of the plurality of source data addresses identified at 704.

At 710, the DMA control circuitry 110 performs one or more operations using the source data values obtained at 708.

At 712, the DMA control circuitry 110 stores or otherwise retains the one or more output data value(s) generated at 710 in the one or more destination addresses identified at 706. The method 700 concludes at 714.

The figures below detail exemplary architectures and systems to implement embodiments of the above. In some embodiments, one or more hardware components and/or instructions described above are emulated as detailed below, or implemented as software modules.

Embodiments of the instruction(s) detailed above may be embodied in a "generic vector friendly instruction format" which is detailed below. In other embodiments, such a format is not utilized and another instruction format is used; however, the description below of the writemask registers, various data transformations (swizzle, broadcast, etc.), addressing, etc. is generally applicable to the description of the embodiments of the instruction(s) above. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) above may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

An instruction set may include one or more instruction formats. A given instruction format may define various fields (e.g., number of bits, location of bits) to specify, among other things, the operation to be performed (e.g., opcode) and the operand(s) on which that operation is to be performed and/or other data field(s) (e.g., mask). Some instruction formats are further broken down through the definition of instruction templates (or sub-formats). For example, the instruction templates of a given instruction format may be defined to have different subsets of the instruction format's fields (the included fields are typically in the same order, but at least some have different bit positions because there are fewer fields included) and/or defined to have a given field interpreted differently. Thus, each instruction of an ISA is expressed using a given instruction format (and, if defined, in a given one of the instruction templates of that instruction format) and includes fields for specifying the operation and the operands.
For example, an exemplary ADD instruction has a specific opcode and an instruction format that includes an opcode field to specify that opcode and operand fields to select operands (source1/destination and source2); and an occurrence of this ADD instruction in an instruction stream will have specific contents in the operand fields that select specific operands. A set of SIMD extensions referred to as the Advanced Vector Extensions (AVX) (AVX1 and AVX2) and using the Vector Extensions (VEX) coding scheme has been released and/or published (e.g., see Intel® 64 and IA-32 Architectures Software Developer's Manual, September 2014; and see Intel® Advanced Vector Extensions Programming Reference, October 2014).

Exemplary Instruction Formats

Embodiments of the instruction(s) described herein may be embodied in different formats. Additionally, exemplary systems, architectures, and pipelines are detailed below. Embodiments of the instruction(s) may be executed on such systems, architectures, and pipelines, but are not limited to those detailed.

Generic Vector Friendly Instruction Format

A vector friendly instruction format is an instruction format that is suited for vector instructions (e.g., there are certain fields specific to vector operations). While embodiments are described in which both vector and scalar operations are supported through the vector friendly instruction format, alternative embodiments use only vector operations through the vector friendly instruction format.

FIGs 8A and 8B are block diagrams illustrating a generic vector friendly instruction format and instruction templates thereof according to embodiments of the invention. FIG 8A is a block diagram illustrating a generic vector friendly instruction format and class A instruction templates thereof according to embodiments of the invention, while FIG 8B is a block diagram illustrating the generic vector friendly instruction format and class B instruction templates thereof according to embodiments of the invention. Specifically, a generic vector friendly instruction format 800 is shown for which are defined class A and class B instruction templates, both of which include no memory access 805 instruction templates and memory access 820 instruction templates.
The term generic in the context of the vector friendly instruction format refers to the instruction format not being tied to any specific instruction set.

While embodiments of the invention will be described in which the vector friendly instruction format supports the following: a 64 byte vector operand length (or size) with 32 bit (4 byte) or 64 bit (8 byte) data element widths (or sizes) (and thus, a 64 byte vector consists of either 16 doubleword-size elements or, alternatively, 8 quadword-size elements); a 64 byte vector operand length (or size) with 16 bit (2 byte) or 8 bit (1 byte) data element widths (or sizes); a 32 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); and a 16 byte vector operand length (or size) with 32 bit (4 byte), 64 bit (8 byte), 16 bit (2 byte), or 8 bit (1 byte) data element widths (or sizes); alternative embodiments may support more, fewer, and/or different vector operand sizes (e.g., 256 byte vector operands) with more, fewer, or different data element widths (e.g., 128 bit (16 byte) data element widths).

The class A instruction templates in FIG 8A include: 1) within the no memory access 805 instruction templates there is shown a no memory access, full round control type operation 810 instruction template and a no memory access, data transform type operation 815 instruction template; and 2) within the memory access 820 instruction templates there is shown a memory access, temporal 825 instruction template and a memory access, non-temporal 830 instruction template. The class B instruction templates in FIG 8B include: 1) within the no memory access 805 instruction templates there is shown a no memory access, write mask control, partial round control type operation 812 instruction template and a no memory access, write mask control, vsize type operation 817 instruction template; and 2) within the memory access 820 instruction templates there is shown a memory access, write mask control 827 instruction template.

The generic vector friendly instruction format 800 includes the following fields listed below in the order illustrated in FIGs 8A and 8B.

Format field 840 - a specific value (an instruction format identifier value) in this field uniquely identifies the vector friendly instruction format, and thus occurrences of instructions in the vector friendly instruction format in instruction streams. As such, this field is optional in the sense that it is not needed for an instruction set that has only the generic vector friendly instruction format.

Base operation field 842 - its content distinguishes different base operations.

Register index field 844 - its content, directly or through address generation, specifies the locations of the source and destination operands, be they in registers or in memory. These include a sufficient number of bits to select N registers from a PxQ (e.g., 32x512, 16x128, 32x1024, 64x1024) register file.
While in one embodiment N may be up to three sources and one destination register, alternative embodiments may support more or fewer source and destination registers (e.g., may support up to two sources where one of these sources also acts as the destination, may support up to three sources where one of these sources also acts as the destination, may support up to two sources and one destination).

Modifier field 846 - its content distinguishes occurrences of instructions in the generic vector instruction format that specify memory access from those that do not; that is, between no memory access 805 instruction templates and memory access 820 instruction templates. Memory access operations read and/or write to the memory hierarchy (in some cases specifying the source and/or destination addresses using values in registers), while non-memory access operations do not (e.g., the source and destinations are registers). While in one embodiment this field also selects between three different ways to perform memory address calculations, alternative embodiments may support more, fewer, or different ways to perform memory address calculations.

Augmentation operation field 850 - its content distinguishes which one of a variety of different operations is to be performed in addition to the base operation. This field is context specific. In one embodiment of the invention, this field is divided into a class field 868, an alpha field 852, and a beta field 854. The augmentation operation field 850 allows common groups of operations to be performed in a single instruction rather than 2, 3, or 4 instructions.

Scale field 860 - its content allows for the scaling of the index field's content for memory address generation (e.g., for address generation that uses 2^scale ∗ index + base).

Displacement field 862A - its content is used as part of memory address generation (e.g., for address generation that uses 2^scale ∗ index + base + displacement).

Displacement factor field 862B (note that the juxtaposition of displacement field 862A directly over displacement factor field 862B indicates one or the other is used) - its content is used as part of address generation; it specifies a displacement factor that is to be scaled by the size of a memory access (N), where N is the number of bytes in the memory access (e.g., for address generation that uses 2^scale ∗ index + base + scaled displacement). Redundant low-order bits are ignored and hence, the displacement factor field's content is multiplied by the memory operand's total size (N) in order to generate the final displacement to be used in calculating an effective address. The value of N is determined by the processor hardware at runtime based on the full opcode field 874 (described later herein) and the data manipulation field 854C. The displacement field 862A and the displacement factor field 862B are optional in the sense that they are not used for the no memory access 805 instruction templates and/or different embodiments may implement only one or none of the two.

Data element width field 864 - its content distinguishes which one of a number of data element widths is to be used (in some embodiments for all instructions; in other embodiments for only some of the instructions).
This field is optional in the sense that it is not needed if only one data element width is supported and/or data element widths are supported using some aspect of the opcodes.

Write mask field 870 - its content controls, on a per data element position basis, whether that data element position in the destination vector operand reflects the result of the base operation and augmentation operation. Class A instruction templates support merging-writemasking, while class B instruction templates support both merging- and zeroing-writemasking. When merging, vector masks allow any set of elements in the destination to be protected from updates during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, preserving the old value of each element of the destination where the corresponding mask bit has a 0. In contrast, when zeroing, vector masks allow any set of elements in the destination to be zeroed during the execution of any operation (specified by the base operation and the augmentation operation); in one embodiment, an element of the destination is set to 0 when the corresponding mask bit has a 0 value. A subset of this functionality is the ability to control the vector length of the operation being performed (that is, the span of elements being modified, from the first to the last one); however, it is not necessary that the elements that are modified be consecutive. Thus, the write mask field 870 allows for partial vector operations, including loads, stores, arithmetic, logical, etc. While embodiments of the invention are described in which the write mask field's 870 content selects one of a number of write mask registers that contains the write mask to be used (and thus the write mask field's 870 content indirectly identifies the masking to be performed), alternative embodiments instead or additionally allow the write mask field's 870 content to directly specify the masking to be performed.

Immediate field 872 - its content allows for the specification of an immediate. This field is optional in the sense that it is not present in an implementation of the generic vector friendly format that does not support an immediate and it is not present in instructions that do not use an immediate.

Class field 868 - its content distinguishes between different classes of instructions. With reference to FIGs 8A and 8B, the contents of this field select between class A and class B instructions. In FIGs 8A and 8B, rounded corner squares are used to indicate a specific value is present in a field (e.g., class A 868A and class B 868B for the class field 868 respectively in FIGs 8A and 8B).

Instruction Templates of Class A

In the case of the non-memory access 805 instruction templates of class A, the alpha field 852 is interpreted as an RS field 852A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 852A.1 and data transform 852A.2 are respectively specified for the no memory access, round type operation 810 and the no memory access, data transform type operation 815 instruction templates), while the beta field 854 distinguishes which of the operations of the specified type is to be performed.
In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

No-Memory Access Instruction Templates - Full Round Control Type Operation

In the no memory access full round control type operation 810 instruction template, the beta field 854 is interpreted as a round control field 854A, whose content(s) provide static rounding. While in the described embodiments of the invention the round control field 854A includes a suppress all floating point exceptions (SAE) field 856 and a round operation control field 858, alternative embodiments may encode both of these concepts into the same field or may have only one or the other of these concepts/fields (e.g., may have only the round operation control field 858).

SAE field 856 - its content distinguishes whether or not to disable the exception event reporting; when the SAE field's 856 content indicates suppression is enabled, a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler.

Round operation control field 858 - its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 858 allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 858 content overrides that register value.

No Memory Access Instruction Templates - Data Transform Type Operation

In the no memory access data transform type operation 815 instruction template, the beta field 854 is interpreted as a data transform field 854B, whose content distinguishes which one of a number of data transforms is to be performed (e.g., no data transform, swizzle, broadcast).

In the case of a memory access 820 instruction template of class A, the alpha field 852 is interpreted as an eviction hint field 852B, whose content distinguishes which one of the eviction hints is to be used (in FIG 8A, temporal 852B.1 and non-temporal 852B.2 are respectively specified for the memory access, temporal 825 instruction template and the memory access, non-temporal 830 instruction template), while the beta field 854 is interpreted as a data manipulation field 854C, whose content distinguishes which one of a number of data manipulation operations (also known as primitives) is to be performed (e.g., no manipulation; broadcast; up conversion of a source; and down conversion of a destination). The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement scale field 862B.

Vector memory instructions perform vector loads from and vector stores to memory, with conversion support. As with regular vector instructions, vector memory instructions transfer data from/to memory in a data element-wise fashion, with the elements that are actually transferred dictated by the contents of the vector mask that is selected as the write mask.

Memory Access Instruction Templates - Temporal

Temporal data is data likely to be reused soon enough to benefit from caching.
This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Memory Access Instruction Templates - Non-Temporal

Non-temporal data is data unlikely to be reused soon enough to benefit from caching in the 1st-level cache and should be given priority for eviction. This is, however, a hint, and different processors may implement it in different ways, including ignoring the hint entirely.

Instruction Templates of Class B

In the case of the instruction templates of class B, the alpha field 852 is interpreted as a write mask control (Z) field 852C, whose content distinguishes whether the write masking controlled by the write mask field 870 should be a merging or a zeroing.

In the case of the non-memory access 805 instruction templates of class B, part of the beta field 854 is interpreted as an RL field 857A, whose content distinguishes which one of the different augmentation operation types is to be performed (e.g., round 857A.1 and vector length (VSIZE) 857A.2 are respectively specified for the no memory access, write mask control, partial round control type operation 812 instruction template and the no memory access, write mask control, VSIZE type operation 817 instruction template), while the rest of the beta field 854 distinguishes which of the operations of the specified type is to be performed. In the no memory access 805 instruction templates, the scale field 860, the displacement field 862A, and the displacement scale field 862B are not present.

In the no memory access, write mask control, partial round control type operation 812 instruction template, the rest of the beta field 854 is interpreted as a round operation field 859A and exception event reporting is disabled (a given instruction does not report any kind of floating-point exception flag and does not raise any floating point exception handler).

Round operation control field 859A - just as round operation control field 858, its content distinguishes which one of a group of rounding operations to perform (e.g., Round-up, Round-down, Round-towards-zero and Round-to-nearest). Thus, the round operation control field 859A allows for the changing of the rounding mode on a per instruction basis. In one embodiment of the invention where a processor includes a control register for specifying rounding modes, the round operation control field's 859A content overrides that register value.

In the no memory access, write mask control, VSIZE type operation 817 instruction template, the rest of the beta field 854 is interpreted as a vector length field 859B, whose content distinguishes which one of a number of data vector lengths is to be performed on (e.g., 128, 256, or 512 byte).

In the case of a memory access 820 instruction template of class B, part of the beta field 854 is interpreted as a broadcast field 857B, whose content distinguishes whether or not the broadcast type data manipulation operation is to be performed, while the rest of the beta field 854 is interpreted as the vector length field 859B. The memory access 820 instruction templates include the scale field 860, and optionally the displacement field 862A or the displacement scale field 862B.

With regard to the generic vector friendly instruction format 800, a full opcode field 874 is shown including the format field 840, the base operation field 842, and the data element width field 864.
While one embodiment is shown where the full opcode field 874 includes all of these fields, the full opcode field 874 includes less than all of these fields in embodiments that do not support all of them. The full opcode field 874 provides the operation code (opcode).

The augmentation operation field 850, the data element width field 864, and the write mask field 870 allow these features to be specified on a per instruction basis in the generic vector friendly instruction format. The combination of the write mask field and the data element width field creates typed instructions in that they allow the mask to be applied based on different data element widths.

The various instruction templates found within class A and class B are beneficial in different situations. In some embodiments of the invention, different processors or different cores within a processor may support only class A, only class B, or both classes. For instance, a high performance general purpose out-of-order core intended for general-purpose computing may support only class B, a core intended primarily for graphics and/or scientific (throughput) computing may support only class A, and a core intended for both may support both (of course, a core that has some mix of templates and instructions from both classes but not all templates and instructions from both classes is within the purview of the invention). Also, a single processor may include multiple cores, all of which support the same class or in which different cores support different classes. For instance, in a processor with separate graphics and general purpose cores, one of the graphics cores intended primarily for graphics and/or scientific computing may support only class A, while one or more of the general purpose cores may be high performance general purpose cores with out of order execution and register renaming intended for general-purpose computing that support only class B. Another processor that does not have a separate graphics core may include one or more general purpose in-order or out-of-order cores that support both class A and class B. Of course, features from one class may also be implemented in the other class in different embodiments of the invention. Programs written in a high level language would be put (e.g., just in time compiled or statically compiled) into a variety of different executable forms, including: 1) a form having only instructions of the class(es) supported by the target processor for execution; or 2) a form having alternative routines written using different combinations of the instructions of all classes and having control flow code that selects the routines to execute based on the instructions supported by the processor which is currently executing the code.

Exemplary Specific Vector Friendly Instruction Format

FIG 9 is a block diagram illustrating an exemplary specific vector friendly instruction format according to embodiments of the invention. FIG 9 shows a specific vector friendly instruction format 900 that is specific in the sense that it specifies the location, size, interpretation, and order of the fields, as well as values for some of those fields. The specific vector friendly instruction format 900 may be used to extend the x86 instruction set, and thus some of the fields are similar or the same as those used in the existing x86 instruction set and extensions thereof (e.g., AVX).
This format remains consistent with the prefix encoding field, real opcode byte field, MOD R/M field, SIB field, displacement field, and immediate fields of the existing x86 instruction set with extensions. The fields from FIG 8 into which the fields from FIG 9 map are illustrated.

It should be understood that, although embodiments of the invention are described with reference to the specific vector friendly instruction format 900 in the context of the generic vector friendly instruction format 800 for illustrative purposes, the invention is not limited to the specific vector friendly instruction format 900 except where claimed. For example, the generic vector friendly instruction format 800 contemplates a variety of possible sizes for the various fields, while the specific vector friendly instruction format 900 is shown as having fields of specific sizes. By way of specific example, while the data element width field 864 is illustrated as a one bit field in the specific vector friendly instruction format 900, the invention is not so limited (that is, the generic vector friendly instruction format 800 contemplates other sizes of the data element width field 864).

The specific vector friendly instruction format 900 includes the following fields listed below in the order illustrated in FIG 9A.

EVEX Prefix (Bytes 0-3) 902 - is encoded in a four-byte form.

Format Field 840 (EVEX Byte 0, bits [7:0]) - the first byte (EVEX Byte 0) is the format field 840 and it contains 0x62 (the unique value used for distinguishing the vector friendly instruction format in one embodiment of the invention).

The second through fourth bytes (EVEX Bytes 1-3) include a number of bit fields providing specific capability.

REX field 905 (EVEX Byte 1, bits [7-5]) - consists of an EVEX.R bit field (EVEX Byte 1, bit [7] - R), EVEX.X bit field (EVEX byte 1, bit [6] - X), and EVEX.B bit field (EVEX byte 1, bit [5] - B). The EVEX.R, EVEX.X, and EVEX.B bit fields provide the same functionality as the corresponding VEX bit fields, and are encoded using 1s complement form, i.e., ZMM0 is encoded as 1111B, ZMM15 is encoded as 0000B. Other fields of the instructions encode the lower three bits of the register indexes as is known in the art (rrr, xxx, and bbb), so that Rrrr, Xxxx, and Bbbb may be formed by adding EVEX.R, EVEX.X, and EVEX.B.

REX' field 910 - this is the first part of the REX' field 910 and is the EVEX.R' bit field (EVEX Byte 1, bit [4] - R') that is used to encode either the upper 16 or lower 16 of the extended 32 register set. In one embodiment of the invention, this bit, along with others as indicated below, is stored in bit inverted format to distinguish (in the well-known x86 32-bit mode) from the BOUND instruction, whose real opcode byte is 62, but does not accept in the MOD R/M field (described below) the value of 11 in the MOD field; alternative embodiments of the invention do not store this and the other indicated bits below in the inverted format. A value of 1 is used to encode the lower 16 registers. In other words, R'Rrrr is formed by combining EVEX.R', EVEX.R, and the other RRR from other fields.

Opcode map field 915 (EVEX byte 1, bits [3:0] - mmmm) - its content encodes an implied leading opcode byte (0F, 0F 38, or 0F 3A).

Data element width field 864 (EVEX byte 2, bit [7] - W) - is represented by the notation EVEX.W.
EVEX.W is used to define the granularity (size) of the datatype (either 32-bit data elements or 64-bit data elements).

EVEX.vvvv 920 (EVEX Byte 2, bits [6:3] - vvvv) - the role of EVEX.vvvv may include the following: 1) EVEX.vvvv encodes the first source register operand, specified in inverted (1s complement) form and is valid for instructions with 2 or more source operands; 2) EVEX.vvvv encodes the destination register operand, specified in 1s complement form for certain vector shifts; or 3) EVEX.vvvv does not encode any operand, the field is reserved and should contain 1111b. Thus, EVEX.vvvv field 920 encodes the 4 low-order bits of the first source register specifier stored in inverted (1s complement) form. Depending on the instruction, an extra different EVEX bit field is used to extend the specifier size to 32 registers.

EVEX.U 868 Class field (EVEX byte 2, bit [2] - U) - If EVEX.U = 0, it indicates class A or EVEX.U0; if EVEX.U = 1, it indicates class B or EVEX.U1.

Prefix encoding field 925 (EVEX byte 2, bits [1:0] - pp) - provides additional bits for the base operation field. In addition to providing support for the legacy SSE instructions in the EVEX prefix format, this also has the benefit of compacting the SIMD prefix (rather than requiring a byte to express the SIMD prefix, the EVEX prefix requires only 2 bits). In one embodiment, to support legacy SSE instructions that use a SIMD prefix (66H, F2H, F3H) in both the legacy format and in the EVEX prefix format, these legacy SIMD prefixes are encoded into the SIMD prefix encoding field; and at runtime are expanded into the legacy SIMD prefix prior to being provided to the decoder's PLA (so the PLA can execute both the legacy and EVEX format of these legacy instructions without modification). Although newer instructions could use the EVEX prefix encoding field's content directly as an opcode extension, certain embodiments expand in a similar fashion for consistency but allow for different meanings to be specified by these legacy SIMD prefixes. An alternative embodiment may redesign the PLA to support the 2 bit SIMD prefix encodings, and thus not require the expansion.

Alpha field 852 (EVEX byte 3, bit [7] - EH; also known as EVEX.EH, EVEX.rs, EVEX.RL, EVEX.write mask control, and EVEX.N; also illustrated with α) - as previously described, this field is context specific.

Beta field 854 (EVEX byte 3, bits [6:4] - SSS, also known as EVEX.s2-0, EVEX.r2-0, EVEX.rr1, EVEX.LL0, EVEX.LLB; also illustrated with βββ) - as previously described, this field is context specific.

REX' field 910 - this is the remainder of the REX' field and is the EVEX.V' bit field (EVEX Byte 3, bit [3] - V') that may be used to encode either the upper 16 or lower 16 of the extended 32 register set. This bit is stored in bit inverted format. A value of 1 is used to encode the lower 16 registers. In other words, V'VVVV is formed by combining EVEX.V' and EVEX.vvvv.

Write mask field 870 (EVEX byte 3, bits [2:0] - kkk) - its content specifies the index of a register in the write mask registers as previously described. In one embodiment of the invention, the specific value EVEX.kkk=000 has a special behavior implying no write mask is used for the particular instruction (this may be implemented in a variety of ways including the use of a write mask hardwired to all ones or hardware that bypasses the masking hardware).

Real Opcode Field 930 (Byte 4) is also known as the opcode byte.
Part of the opcode is specified in this field.

MOD R/M Field 940 (Byte 5) includes MOD field 942, Reg field 944, and R/M field 946. As previously described, the MOD field's 942 content distinguishes between memory access and non-memory access operations. The role of Reg field 944 can be summarized to two situations: encoding either the destination register operand or a source register operand, or being treated as an opcode extension and not used to encode any instruction operand. The role of R/M field 946 may include the following: encoding the instruction operand that references a memory address, or encoding either the destination register operand or a source register operand.

Scale, Index, Base (SIB) Byte (Byte 6) - As previously described, the scale field's 860 content is used for memory address generation. SIB.xxx 954 and SIB.bbb 956 - the contents of these fields have been previously referred to with regard to the register indexes Xxxx and Bbbb.

Displacement field 862A (Bytes 7-10) - when MOD field 942 contains 10, bytes 7-10 are the displacement field 862A, and it works the same as the legacy 32-bit displacement (disp32) and works at byte granularity.

Displacement factor field 862B (Byte 7) - when MOD field 942 contains 01, byte 7 is the displacement factor field 862B. The location of this field is the same as that of the legacy x86 instruction set 8-bit displacement (disp8), which works at byte granularity. Since disp8 is sign extended, it can only address between -128 and 127 byte offsets; in terms of 64 byte cache lines, disp8 uses 8 bits that can be set to only four really useful values -128, -64, 0, and 64; since a greater range is often needed, disp32 is used; however, disp32 requires 4 bytes. In contrast to disp8 and disp32, the displacement factor field 862B is a reinterpretation of disp8; when using displacement factor field 862B, the actual displacement is determined by the content of the displacement factor field multiplied by the size of the memory operand access (N). This type of displacement is referred to as disp8∗N. This reduces the average instruction length (a single byte is used for the displacement but with a much greater range). Such compressed displacement is based on the assumption that the effective displacement is a multiple of the granularity of the memory access, and hence, the redundant low-order bits of the address offset do not need to be encoded. In other words, the displacement factor field 862B substitutes for the legacy x86 instruction set 8-bit displacement. Thus, the displacement factor field 862B is encoded the same way as an x86 instruction set 8-bit displacement (so no changes in the ModRM/SIB encoding rules) with the only exception that disp8 is overloaded to disp8∗N. In other words, there are no changes in the encoding rules or encoding lengths but only in the interpretation of the displacement value by hardware (which needs to scale the displacement by the size of the memory operand to obtain a byte-wise address offset).

Immediate field 872 operates as previously described.

Full Opcode Field

FIG 9B is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the full opcode field 874 according to one embodiment of the invention. Specifically, the full opcode field 874 includes the format field 840, the base operation field 842, and the data element width (W) field 864.
Full Opcode Field

FIG 9B is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the full opcode field 874 according to one embodiment of the invention. Specifically, the full opcode field 874 includes the format field 840, the base operation field 842, and the data element width (W) field 864. The base operation field 842 includes the prefix encoding field 925, the opcode map field 915, and the real opcode field 930.

Register Index Field

FIG 9C is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the register index field 844 according to one embodiment of the invention. Specifically, the register index field 844 includes the REX field 905, the REX' field 910, the MODR/M.reg field 944, the MODR/M.r/m field 946, the VVVV field 920, the xxx field 954, and the bbb field 956.

Augmentation Operation Field

FIG 9D is a block diagram illustrating the fields of the specific vector friendly instruction format 900 that make up the augmentation operation field 850 according to one embodiment of the invention. When the class (U) field 868 contains 0, it signifies EVEX.U0 (class A 868A); when it contains 1, it signifies EVEX.U1 (class B 868B). When U=0 and the MOD field 942 contains 11 (signifying a no memory access operation), the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the rs field 852A. When the rs field 852A contains a 1 (round 852A.1), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the round control field 854A. The round control field 854A includes a one-bit SAE field 856 and a two-bit round operation field 858. When the rs field 852A contains a 0 (data transform 852A.2), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three-bit data transform field 854B. When U=0 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the eviction hint (EH) field 852B and the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as a three-bit data manipulation field 854C.

When U=1, the alpha field 852 (EVEX byte 3, bit [7] - EH) is interpreted as the write mask control (Z) field 852C. When U=1 and the MOD field 942 contains 11 (signifying a no memory access operation), part of the beta field 854 (EVEX byte 3, bit [4]-S0) is interpreted as the RL field 857A; when the RL field 857A contains a 1 (round 857A.1), the rest of the beta field 854 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the round operation field 859A, while when the RL field 857A contains a 0 (VSIZE 857A.2), the rest of the beta field 854 (EVEX byte 3, bits [6-5]-S2-1) is interpreted as the vector length field 859B (EVEX byte 3, bits [6-5]-L1-0). When U=1 and the MOD field 942 contains 00, 01, or 10 (signifying a memory access operation), the beta field 854 (EVEX byte 3, bits [6:4]-SSS) is interpreted as the vector length field 859B (EVEX byte 3, bits [6-5]-L1-0) and the broadcast field 857B (EVEX byte 3, bit [4]-B).
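The U/MOD/alpha/beta decision tree just described can be restated in a compact sketch; the enum and function names are illustrative only:

    /* Possible meanings of the beta field per the description above. */
    typedef enum {
        ROUND_CONTROL,            /* class A, no memory, rs = 1          */
        DATA_TRANSFORM,           /* class A, no memory, rs = 0          */
        DATA_MANIPULATION,        /* class A, memory (alpha is EH)       */
        ROUND_OPERATION,          /* class B, no memory, RL = 1          */
        VECTOR_LENGTH,            /* class B, no memory, RL = 0          */
        VECTOR_LENGTH_BROADCAST   /* class B, memory (L1-0 plus B bit)   */
    } beta_meaning;

    static beta_meaning interpret_beta(unsigned u, unsigned mod,
                                       unsigned alpha, unsigned beta) {
        int mem = (mod != 3);               /* MOD != 11b: memory access  */
        if (u == 0)                         /* class A; alpha is rs or EH */
            return mem ? DATA_MANIPULATION
                       : (alpha ? ROUND_CONTROL : DATA_TRANSFORM);
        /* class B: alpha is the write mask control (z) bit */
        if (mem)
            return VECTOR_LENGTH_BROADCAST;
        return (beta & 1) ? ROUND_OPERATION /* RL is S0, beta bit [4]     */
                          : VECTOR_LENGTH;
    }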
Exemplary Register Architecture

FIG 10 is a block diagram of a register architecture 1000 according to one embodiment of the invention. In the embodiment illustrated, there are 32 vector registers 1010 that are 512 bits wide; these registers are referenced as zmm0 through zmm31. The lower order 256 bits of the lower 16 zmm registers are overlaid on registers ymm0-15. The lower order 128 bits of the lower 16 zmm registers (the lower order 128 bits of the ymm registers) are overlaid on registers xmm0-15. The specific vector friendly instruction format 900 operates on these overlaid register files as illustrated in the table below.

Adjustable Vector Length | Class | Operations | Registers
Instruction templates that do not include the vector length field 859B | A (Figure 8A; U=0) | 810, 815, 825, 830 | zmm registers (the vector length is 64 byte)
Instruction templates that do not include the vector length field 859B | B (Figure 8B; U=1) | 812 | zmm registers (the vector length is 64 byte)
Instruction templates that do include the vector length field 859B | B (Figure 8B; U=1) | 817, 827 | zmm, ymm, or xmm registers (the vector length is 64 byte, 32 byte, or 16 byte) depending on the vector length field 859B

In other words, the vector length field 859B selects between a maximum length and one or more other shorter lengths, where each such shorter length is half the length of the preceding length; and instruction templates without the vector length field 859B operate on the maximum vector length. Further, in one embodiment, the class B instruction templates of the specific vector friendly instruction format 900 operate on packed or scalar single/double-precision floating point data and packed or scalar integer data. Scalar operations are operations performed on the lowest order data element position in a zmm/ymm/xmm register; the higher order data element positions are either left the same as they were prior to the instruction or zeroed depending on the embodiment.

Write mask registers 1015 - in the embodiment illustrated, there are 8 write mask registers (k0 through k7), each 64 bits in size. In an alternate embodiment, the write mask registers 1015 are 16 bits in size. As previously described, in one embodiment of the invention, the vector mask register k0 cannot be used as a write mask; when the encoding that would normally indicate k0 is used for a write mask, it selects a hardwired write mask of 0xFFFF, effectively disabling write masking for that instruction.

General-purpose registers 1025 - in the embodiment illustrated, there are sixteen 64-bit general-purpose registers that are used along with the existing x86 addressing modes to address memory operands. These registers are referenced by the names RAX, RBX, RCX, RDX, RBP, RSI, RDI, RSP, and R8 through R15.

Scalar floating point stack register file (x87 stack) 1045, on which is aliased the MMX packed integer flat register file 1050 - in the embodiment illustrated, the x87 stack is an eight-element stack used to perform scalar floating-point operations on 32/64/80-bit floating point data using the x87 instruction set extension, while the MMX registers are used to perform operations on 64-bit packed integer data, as well as to hold operands for some operations performed between the MMX and XMM registers.

Alternative embodiments of the invention may use wider or narrower registers. Additionally, alternative embodiments of the invention may use more, fewer, or different register files and registers.
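A minimal sketch of the write-mask behavior described above, combining the k0 special case with the class B merging/zeroing choice; the function name, lane count, and 16-bit masks are illustrative assumptions:

    #include <stdint.h>

    static void masked_write(uint32_t *dst, const uint32_t *result, int lanes,
                             const uint16_t k[8], unsigned kkk, int z) {
        /* Encoding kkk = 000 selects a hardwired all-ones mask, effectively
         * disabling write masking for the instruction. */
        uint16_t mask = (kkk == 0) ? 0xFFFF : k[kkk];
        for (int i = 0; i < lanes; i++) {
            if (mask & (1u << i))
                dst[i] = result[i]; /* unmasked lane: written normally     */
            else if (z)
                dst[i] = 0;         /* zeroing-masking                     */
            /* else merging-masking: the destination lane is left as-is    */
        }
    }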
Exemplary Core Architectures, Processors, and Computer Architectures

Processor cores may be implemented in different ways, for different purposes, and in different processors. For instance, implementations of such cores may include: 1) a general purpose in-order core intended for general-purpose computing; 2) a high performance general purpose out-of-order core intended for general-purpose computing; and 3) a special purpose core intended primarily for graphics and/or scientific (throughput) computing. Implementations of different processors may include: 1) a CPU including one or more general purpose in-order cores intended for general-purpose computing and/or one or more general purpose out-of-order cores intended for general-purpose computing; and 2) a coprocessor including one or more special purpose cores intended primarily for graphics and/or scientific (throughput) computing. Such different processors lead to different computer system architectures, which may include: 1) the coprocessor on a separate chip from the CPU; 2) the coprocessor on a separate die in the same package as a CPU; 3) the coprocessor on the same die as a CPU (in which case, such a coprocessor is sometimes referred to as special purpose logic, such as integrated graphics and/or scientific (throughput) logic, or as special purpose cores); and 4) a system on a chip that may include on the same die the described CPU (sometimes referred to as the application core(s) or application processor(s)), the above described coprocessor, and additional functionality. Exemplary core architectures are described next, followed by descriptions of exemplary processors and computer architectures.

Exemplary Core Architectures

In-order and out-of-order core block diagram

FIG 11A is a block diagram illustrating both an exemplary in-order pipeline and an exemplary register renaming, out-of-order issue/execution pipeline according to embodiments of the invention. FIG 11B is a block diagram illustrating both an exemplary embodiment of an in-order architecture core and an exemplary register renaming, out-of-order issue/execution architecture core to be included in a processor according to embodiments of the invention. The solid lined boxes in FIGs 11A and 11B illustrate the in-order pipeline and in-order core, while the optional addition of the dashed lined boxes illustrates the register renaming, out-of-order issue/execution pipeline and core. Given that the in-order aspect is a subset of the out-of-order aspect, the out-of-order aspect will be described.

In FIG 11A, a processor pipeline 1100 includes a fetch stage 1102, a length decode stage 1104, a decode stage 1106, an allocation stage 1108, a renaming stage 1110, a scheduling (also known as a dispatch or issue) stage 1112, a register read/memory read stage 1114, an execute stage 1116, a write back/memory write stage 1118, an exception handling stage 1122, and a commit stage 1124.

FIG 11B shows processor core 1190 including a front end unit 1130 coupled to an execution engine unit 1150, and both are coupled to a memory unit 1170. The core 1190 may be a reduced instruction set computing (RISC) core, a complex instruction set computing (CISC) core, a very long instruction word (VLIW) core, or a hybrid or alternative core type. As yet another option, the core 1190 may be a special-purpose core, such as, for example, a network or communication core, compression engine, coprocessor core, general purpose computing graphics processing unit (GPGPU) core, graphics core, or the like.

The front end unit 1130 includes a branch prediction unit 1132 coupled to an instruction cache unit 1134, which is coupled to an instruction translation lookaside buffer (TLB) 1136, which is coupled to an instruction fetch unit 1138, which is coupled to a decode unit 1140.
The decode unit 1140 (or decoder) may decode instructions, and generate as an output one or more micro-operations, micro-code entry points, microinstructions, other instructions, or other control signals, which are decoded from, or which otherwise reflect, or are derived from, the original instructions. The decode unit 1140 may be implemented using various different mechanisms. Examples of suitable mechanisms include, but are not limited to, look-up tables, hardware implementations, programmable logic arrays (PLAs), microcode read only memories (ROMs), etc. In one embodiment, the core 1190 includes a microcode ROM or other medium that stores microcode for certain macroinstructions (e.g., in decode unit 1140 or otherwise within the front end unit 1130). The decode unit 1140 is coupled to a rename/allocator unit 1152 in the execution engine unit 1150.

The execution engine unit 1150 includes the rename/allocator unit 1152 coupled to a retirement unit 1154 and a set of one or more scheduler unit(s) 1156. The scheduler unit(s) 1156 represents any number of different schedulers, including reservation stations, central instruction window, etc. The scheduler unit(s) 1156 is coupled to the physical register file(s) unit(s) 1158. Each of the physical register file(s) units 1158 represents one or more physical register files, different ones of which store one or more different data types, such as scalar integer, scalar floating point, packed integer, packed floating point, vector integer, vector floating point, status (e.g., an instruction pointer that is the address of the next instruction to be executed), etc. In one embodiment, the physical register file(s) unit 1158 comprises a vector registers unit, a write mask registers unit, and a scalar registers unit. These register units may provide architectural vector registers, vector mask registers, and general purpose registers. The physical register file(s) unit(s) 1158 is overlapped by the retirement unit 1154 to illustrate various ways in which register renaming and out-of-order execution may be implemented (e.g., using a reorder buffer(s) and a retirement register file(s); using a future file(s), a history buffer(s), and a retirement register file(s); using register maps and a pool of registers; etc.). The retirement unit 1154 and the physical register file(s) unit(s) 1158 are coupled to the execution cluster(s) 1160. The execution cluster(s) 1160 includes a set of one or more execution units 1162 and a set of one or more memory access units 1164. The execution units 1162 may perform various operations (e.g., shifts, addition, subtraction, multiplication) and on various types of data (e.g., scalar floating point, packed integer, packed floating point, vector integer, vector floating point). While some embodiments may include a number of execution units dedicated to specific functions or sets of functions, other embodiments may include only one execution unit or multiple execution units that all perform all functions.
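One of the renaming schemes mentioned above, a register map backed by a pool of physical registers, can be sketched briefly; the sizes and names are illustrative, not taken from the described core:

    #define ARCH_REGS 16
    #define PHYS_REGS 128

    typedef struct {
        int map[ARCH_REGS];        /* architectural -> physical mapping    */
        int free_list[PHYS_REGS];  /* pool of unallocated physical regs    */
        int free_count;
    } rename_table;

    /* Rename one destination register: allocate a fresh physical register
     * so older in-flight readers keep the previous mapping. Returns -1 if
     * the pool is empty (the allocation stage would then stall). */
    static int rename_dest(rename_table *rt, int arch_reg) {
        if (rt->free_count == 0)
            return -1;
        int phys = rt->free_list[--rt->free_count];
        rt->map[arch_reg] = phys;  /* later readers see the new register   */
        return phys;
    }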
The scheduler unit(s) 1156, physical register file(s) unit(s) 1158, and execution cluster(s) 1160 are shown as being possibly plural because certain embodiments create separate pipelines for certain types of data/operations (e.g., a scalar integer pipeline, a scalar floating point/packed integer/packed floating point/vector integer/vector floating point pipeline, and/or a memory access pipeline that each have their own scheduler unit, physical register file(s) unit, and/or execution cluster - and in the case of a separate memory access pipeline, certain embodiments are implemented in which only the execution cluster of this pipeline has the memory access unit(s) 1164). It should also be understood that where separate pipelines are used, one or more of these pipelines may be out-of-order issue/execution and the rest in-order.

The set of memory access units 1164 is coupled to the memory unit 1170, which includes a data TLB unit 1172 coupled to a data cache unit 1174 coupled to a level 2 (L2) cache unit 1176. In one exemplary embodiment, the memory access units 1164 may include a load unit, a store address unit, and a store data unit, each of which is coupled to the data TLB unit 1172 in the memory unit 1170. The instruction cache unit 1134 is further coupled to a level 2 (L2) cache unit 1176 in the memory unit 1170. The L2 cache unit 1176 is coupled to one or more other levels of cache and eventually to a main memory.

By way of example, the exemplary register renaming, out-of-order issue/execution core architecture may implement the pipeline 1100 as follows: 1) the instruction fetch unit 1138 performs the fetch and length decoding stages 1102 and 1104; 2) the decode unit 1140 performs the decode stage 1106; 3) the rename/allocator unit 1152 performs the allocation stage 1108 and renaming stage 1110; 4) the scheduler unit(s) 1156 performs the schedule stage 1112; 5) the physical register file(s) unit(s) 1158 and the memory unit 1170 perform the register read/memory read stage 1114; 6) the execution cluster 1160 performs the execute stage 1116; 7) the memory unit 1170 and the physical register file(s) unit(s) 1158 perform the write back/memory write stage 1118; 8) various units may be involved in the exception handling stage 1122; and 9) the retirement unit 1154 and the physical register file(s) unit(s) 1158 perform the commit stage 1124.
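For reference, the stage-to-unit mapping just listed can be restated compactly; this enum is purely descriptive and reuses the reference numerals of FIGs 11A and 11B:

    /* Pipeline 1100 stages and the unit(s) that perform each of them. */
    typedef enum {
        STAGE_FETCH         = 1102, /* instruction fetch unit 1138        */
        STAGE_LENGTH_DECODE = 1104, /* instruction fetch unit 1138        */
        STAGE_DECODE        = 1106, /* decode unit 1140                   */
        STAGE_ALLOCATION    = 1108, /* rename/allocator unit 1152         */
        STAGE_RENAMING      = 1110, /* rename/allocator unit 1152         */
        STAGE_SCHEDULE      = 1112, /* scheduler unit(s) 1156             */
        STAGE_REG_MEM_READ  = 1114, /* register files 1158 + memory 1170  */
        STAGE_EXECUTE       = 1116, /* execution cluster(s) 1160          */
        STAGE_WRITE_BACK    = 1118, /* memory 1170 + register files 1158  */
        STAGE_EXCEPTION     = 1122, /* various units                      */
        STAGE_COMMIT        = 1124  /* retirement 1154 + register files   */
    } pipe_stage_1100;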
The core 1190 may support one or more instruction sets (e.g., the x86 instruction set (with some extensions that have been added with newer versions); the MIPS instruction set of MIPS Technologies of Sunnyvale, CA; the ARM instruction set (with optional additional extensions such as NEON) of ARM Holdings of Sunnyvale, CA), including the instruction(s) described herein. In one embodiment, the core 1190 includes logic to support a packed data instruction set extension (e.g., AVX1, AVX2), thereby allowing the operations used by many multimedia applications to be performed using packed data.

It should be understood that the core may support multithreading (executing two or more parallel sets of operations or threads), and may do so in a variety of ways including time sliced multithreading, simultaneous multithreading (where a single physical core provides a logical core for each of the threads that physical core is simultaneously multithreading), or a combination thereof (e.g., time sliced fetching and decoding and simultaneous multithreading thereafter such as in the Intel® Hyperthreading technology).

While register renaming is described in the context of out-of-order execution, it should be understood that register renaming may be used in an in-order architecture. While the illustrated embodiment of the processor also includes separate instruction and data cache units 1134/1174 and a shared L2 cache unit 1176, alternative embodiments may have a single internal cache for both instructions and data, such as, for example, a Level 1 (L1) internal cache, or multiple levels of internal cache. In some embodiments, the system may include a combination of an internal cache and an external cache that is external to the core and/or the processor. Alternatively, all of the cache may be external to the core and/or the processor.

Specific Exemplary In-Order Core Architecture

FIGs 12A and 12B illustrate a block diagram of a more specific exemplary in-order core architecture, which core would be one of several logic blocks (including other cores of the same type and/or different types) in a chip. The logic blocks communicate through a high-bandwidth interconnect network (e.g., a ring network) with some fixed function logic, memory I/O interfaces, and other necessary I/O logic, depending on the application.

FIG 12A is a block diagram of a single processor core, along with its connection to the on-die interconnect network 1202 and with its local subset of the Level 2 (L2) cache 1204, according to embodiments of the invention. In one embodiment, an instruction decoder 1200 supports the x86 instruction set with a packed data instruction set extension. An L1 cache 1206 allows low-latency accesses to cache memory into the scalar and vector units. While in one embodiment (to simplify the design), a scalar unit 1208 and a vector unit 1210 use separate register sets (respectively, scalar registers 1212 and vector registers 1214) and data transferred between them is written to memory and then read back in from a level 1 (L1) cache 1206, alternative embodiments of the invention may use a different approach (e.g., use a single register set or include a communication path that allows data to be transferred between the two register files without being written and read back).

The local subset of the L2 cache 1204 is part of a global L2 cache that is divided into separate local subsets, one per processor core. Each processor core has a direct access path to its own local subset of the L2 cache 1204. Data read by a processor core is stored in its L2 cache subset 1204 and can be accessed quickly, in parallel with other processor cores accessing their own local L2 cache subsets. Data written by a processor core is stored in its own L2 cache subset 1204 and is flushed from other subsets, if necessary. The ring network ensures coherency for shared data.
The ring network is bi-directional to allow agents such as processor cores, L2 caches and other logic blocks to communicate with each other within the chip. Each ring data-path is 1012-bits wide per direction.

FIG 12B is an expanded view of part of the processor core in FIG 12A according to embodiments of the invention. FIG 12B includes an L1 data cache 1206A, part of the L1 cache 1206, as well as more detail regarding the vector unit 1210 and the vector registers 1214. Specifically, the vector unit 1210 is a 16-wide vector processing unit (VPU) (see the 16-wide ALU 1228), which executes one or more of integer, single-precision float, and double-precision float instructions. The VPU supports swizzling the register inputs with swizzle unit 1220, numeric conversion with numeric convert units 1222A-B, and replication with replication unit 1224 on the memory input. Write mask registers 1226 allow predicating resulting vector writes.

FIG 13 is a block diagram of a processor 1300 that may have more than one core, may have an integrated memory controller, and may have integrated graphics according to embodiments of the invention. The solid lined boxes in FIG 13 illustrate a processor 1300 with a single core 1302A, a system agent 1310, and a set of one or more bus controller units 1316, while the optional addition of the dashed lined boxes illustrates an alternative processor 1300 with multiple cores 1302A-N, a set of one or more integrated memory controller unit(s) 1314 in the system agent unit 1310, and special purpose logic 1308.

Thus, different implementations of the processor 1300 may include: 1) a CPU with the special purpose logic 1308 being integrated graphics and/or scientific (throughput) logic (which may include one or more cores), and the cores 1302A-N being one or more general purpose cores (e.g., general purpose in-order cores, general purpose out-of-order cores, a combination of the two); 2) a coprocessor with the cores 1302A-N being a large number of special purpose cores intended primarily for graphics and/or scientific (throughput) computing; and 3) a coprocessor with the cores 1302A-N being a large number of general purpose in-order cores. Thus, the processor 1300 may be a general-purpose processor, coprocessor or special-purpose processor, such as, for example, a network or communication processor, compression engine, graphics processor, GPGPU (general purpose graphics processing unit), a high-throughput many integrated core (MIC) coprocessor (including 30 or more cores), embedded processor, or the like. The processor may be implemented on one or more chips. The processor 1300 may be a part of and/or may be implemented on one or more substrates using any of a number of process technologies, such as, for example, BiCMOS, CMOS, or NMOS.

The memory hierarchy includes one or more levels of cache within the cores, a set of one or more shared cache units 1306, and external memory (not shown) coupled to the set of integrated memory controller units 1314. The set of shared cache units 1306 may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof. While in one embodiment a ring based interconnect unit 1312 interconnects the integrated graphics logic 1308, the set of shared cache units 1306, and the system agent unit 1310/integrated memory controller unit(s) 1314, alternative embodiments may use any number of well-known techniques for interconnecting such units.
In one embodiment, coherency is maintained between one or more cache units 1306 and cores 1302A-N.

In some embodiments, one or more of the cores 1302A-N are capable of multi-threading. The system agent 1310 includes those components coordinating and operating cores 1302A-N. The system agent unit 1310 may include, for example, a power control unit (PCU) and a display unit. The PCU may be or include logic and components needed for regulating the power state of the cores 1302A-N and the integrated graphics logic 1308. The display unit is for driving one or more externally connected displays.

The cores 1302A-N may be homogenous or heterogeneous in terms of architecture instruction set; that is, two or more of the cores 1302A-N may be capable of executing the same instruction set, while others may be capable of executing only a subset of that instruction set or a different instruction set.

Exemplary Computer Architectures

FIGs 14, 15, 16, and 17 are block diagrams of exemplary computer architectures. Other system designs and configurations known in the arts for laptops, desktops, handheld PCs, personal digital assistants, engineering workstations, servers, network devices, network hubs, switches, embedded processors, digital signal processors (DSPs), graphics devices, video game devices, set-top boxes, micro controllers, cell phones, portable media players, hand held devices, and various other electronic devices, are also suitable. In general, a huge variety of systems or electronic devices capable of incorporating a processor and/or other execution logic as disclosed herein are suitable.

Referring now to FIG 14, shown is a block diagram of a system 1400 in accordance with one embodiment of the present invention. The system 1400 may include one or more processors 1410, 1415, which are coupled to a controller hub 1420. In one embodiment, the controller hub 1420 includes a graphics memory controller hub (GMCH) 1490 and an Input/Output Hub (IOH) 1450 (which may be on separate chips); the GMCH 1490 includes memory and graphics controllers to which are coupled memory 1440 and a coprocessor 1445; the IOH 1450 couples input/output (I/O) devices 1460 to the GMCH 1490. Alternatively, one or both of the memory and graphics controllers are integrated within the processor (as described herein), the memory 1440 and the coprocessor 1445 are coupled directly to the processor 1410, and the controller hub 1420 is in a single chip with the IOH 1450.

The optional nature of additional processors 1415 is denoted in FIG 14 with broken lines. Each processor 1410, 1415 may include one or more of the processing cores described herein and may be some version of the processor 1300.

The memory 1440 may be, for example, dynamic random access memory (DRAM), phase change memory (PCM), or a combination of the two. For at least one embodiment, the controller hub 1420 communicates with the processor(s) 1410, 1415 via a multi-drop bus, such as a frontside bus (FSB), a point-to-point interface such as QuickPath Interconnect (QPI), or similar connection 1495.

In one embodiment, the coprocessor 1445 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.
In one embodiment, controller hub 1420 may include an integrated graphics accelerator.

There can be a variety of differences between the physical resources 1410, 1415 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like.

In one embodiment, the processor 1410 executes instructions that control data processing operations of a general type. Embedded within the instructions may be coprocessor instructions. The processor 1410 recognizes these coprocessor instructions as being of a type that should be executed by the attached coprocessor 1445. Accordingly, the processor 1410 issues these coprocessor instructions (or control signals representing coprocessor instructions) on a coprocessor bus or other interconnect, to coprocessor 1445. Coprocessor(s) 1445 accept and execute the received coprocessor instructions.

Referring now to FIG 15, shown is a block diagram of a first more specific exemplary system 1500 in accordance with an embodiment of the present invention. As shown in FIG 15, multiprocessor system 1500 is a point-to-point interconnect system, and includes a first processor 1570 and a second processor 1580 coupled via a point-to-point interconnect 1550. Each of processors 1570 and 1580 may be some version of the processor 1300. In one embodiment of the invention, processors 1570 and 1580 are respectively processors 1410 and 1415, while coprocessor 1538 is coprocessor 1445. In another embodiment, processors 1570 and 1580 are respectively processor 1410 and coprocessor 1445.

Processors 1570 and 1580 are shown including integrated memory controller (IMC) units 1572 and 1582, respectively. Processor 1570 also includes as part of its bus controller units point-to-point (P-P) interfaces 1576 and 1578; similarly, second processor 1580 includes P-P interfaces 1586 and 1588. Processors 1570, 1580 may exchange information via a point-to-point (P-P) interface 1550 using P-P interface circuits 1578, 1588. As shown in FIG 15, IMCs 1572 and 1582 couple the processors to respective memories, namely a memory 1532 and a memory 1534, which may be portions of main memory locally attached to the respective processors.

Processors 1570, 1580 may each exchange information with a chipset 1590 via individual P-P interfaces 1552, 1554 using point-to-point interface circuits 1576, 1594, 1586, 1598. Chipset 1590 may optionally exchange information with the coprocessor 1538 via a high-performance interface 1539. In one embodiment, the coprocessor 1538 is a special-purpose processor, such as, for example, a high-throughput MIC processor, a network or communication processor, compression engine, graphics processor, GPGPU, embedded processor, or the like.

A shared cache (not shown) may be included in either processor or outside of both processors, yet connected with the processors via a P-P interconnect, such that either or both processors' local cache information may be stored in the shared cache if a processor is placed into a low power mode.

Chipset 1590 may be coupled to a first bus 1516 via an interface 1596. In one embodiment, first bus 1516 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the present invention is not so limited.

As shown in FIG 15, various I/O devices 1514 may be coupled to first bus 1516, along with a bus bridge 1518 which couples first bus 1516 to a second bus 1520.
In one embodiment, one or more additional processor(s) 1515, such as coprocessors, high-throughput MIC processors, GPGPUs, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processor, are coupled to first bus 1516. In one embodiment, second bus 1520 may be a low pin count (LPC) bus. Various devices may be coupled to the second bus 1520 including, for example, a keyboard and/or mouse 1522, communication devices 1527, and a storage unit 1528 such as a disk drive or other mass storage device which may include instructions/code and data 1530, in one embodiment. Further, an audio I/O 1524 may be coupled to the second bus 1520. Note that other architectures are possible. For example, instead of the point-to-point architecture of FIG 15, a system may implement a multi-drop bus or other such architecture.

Referring now to FIG 16, shown is a block diagram of a second more specific exemplary system 1600 in accordance with an embodiment of the present invention. Like elements in FIGs 15 and 16 bear like reference numerals, and certain aspects of FIG 15 have been omitted from FIG 16 in order to avoid obscuring other aspects of FIG 16.

FIG 16 illustrates that the processors 1570, 1580 may include integrated memory and I/O control logic ("CL") 1572 and 1582, respectively. Thus, the CL 1572, 1582 include integrated memory controller units and include I/O control logic. FIG 16 illustrates that not only are the memories 1532, 1534 coupled to the CL 1572, 1582, but also that I/O devices 1614 are also coupled to the control logic 1572, 1582. Legacy I/O devices 1615 are coupled to the chipset 1590.

Referring now to FIG 17, shown is a block diagram of a SoC 1700 in accordance with an embodiment of the present invention. Similar elements in FIG 13 bear like reference numerals. Also, dashed lined boxes are optional features on more advanced SoCs. In FIG 17, an interconnect unit(s) 1702 is coupled to: an application processor 1710 which includes a set of one or more cores 1302A-N and shared cache unit(s) 1306; a system agent unit 1310; a bus controller unit(s) 1316; an integrated memory controller unit(s) 1314; a set of one or more coprocessors 1720 which may include integrated graphics logic, an image processor, an audio processor, and a video processor; a static random access memory (SRAM) unit 1730; a direct memory access (DMA) unit 1732; and a display unit 1740 for coupling to one or more external displays. In one embodiment, the coprocessor(s) 1720 include a special-purpose processor, such as, for example, a network or communication processor, compression engine, GPGPU, a high-throughput MIC processor, embedded processor, or the like.

Embodiments of the mechanisms disclosed herein may be implemented in hardware, software, firmware, or a combination of such implementation approaches. Embodiments of the invention may be implemented as computer programs or program code executing on programmable systems comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.

Program code, such as code 1530 illustrated in FIG 15, may be applied to input instructions to perform the functions described herein and generate output information. The output information may be applied to one or more output devices, in known fashion.
For purposes of this application, a processing system includes any system that has a processor, such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.

The program code may be implemented in a high level procedural or object oriented programming language to communicate with a processing system. The program code may also be implemented in assembly or machine language, if desired. In fact, the mechanisms described herein are not limited in scope to any particular programming language. In any case, the language may be a compiled or interpreted language.

One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which, when read by a machine, causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores," may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.

Such machine-readable storage media may include, without limitation, non-transitory, tangible arrangements of articles manufactured or formed by a machine or device, including storage media such as hard disks, any other type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic random access memories (DRAMs), static random access memories (SRAMs), erasable programmable read-only memories (EPROMs), flash memories, electrically erasable programmable read-only memories (EEPROMs), phase change memory (PCM), magnetic or optical cards, or any other type of media suitable for storing electronic instructions.

Accordingly, embodiments of the invention also include non-transitory, tangible machine-readable media containing instructions or containing design data, such as Hardware Description Language (HDL), which defines structures, circuits, apparatuses, processors and/or system features described herein. Such embodiments may also be referred to as program products.

Emulation (including binary translation, code morphing, etc.)

In some cases, an instruction converter may be used to convert an instruction from a source instruction set to a target instruction set. For example, the instruction converter may translate (e.g., using static binary translation, dynamic binary translation including dynamic compilation), morph, emulate, or otherwise convert an instruction to one or more other instructions to be processed by the core. The instruction converter may be implemented in software, hardware, firmware, or a combination thereof. The instruction converter may be on processor, off processor, or part on and part off processor.

FIG 18 is a block diagram contrasting the use of a software instruction converter to convert binary instructions in a source instruction set to binary instructions in a target instruction set according to embodiments of the invention. In the illustrated embodiment, the instruction converter is a software instruction converter, although alternatively the instruction converter may be implemented in software, firmware, hardware, or various combinations thereof.
FIG 18 shows that a program in a high level language 1802 may be compiled using an x86 compiler 1804 to generate x86 binary code 1806 that may be natively executed by a processor with at least one x86 instruction set core 1816. The processor with at least one x86 instruction set core 1816 represents any processor that can perform substantially the same functions as an Intel processor with at least one x86 instruction set core by compatibly executing or otherwise processing (1) a substantial portion of the instruction set of the Intel x86 instruction set core or (2) object code versions of applications or other software targeted to run on an Intel processor with at least one x86 instruction set core, in order to achieve substantially the same result as an Intel processor with at least one x86 instruction set core. The x86 compiler 1804 represents a compiler that is operable to generate x86 binary code 1806 (e.g., object code) that can, with or without additional linkage processing, be executed on the processor with at least one x86 instruction set core 1816. Similarly, FIG 18 shows that the program in the high level language 1802 may be compiled using an alternative instruction set compiler 1808 to generate alternative instruction set binary code 1810 that may be natively executed by a processor without at least one x86 instruction set core 1814 (e.g., a processor with cores that execute the MIPS instruction set of MIPS Technologies of Sunnyvale, CA and/or that execute the ARM instruction set of ARM Holdings of Sunnyvale, CA). The instruction converter 1812 is used to convert the x86 binary code 1806 into code that may be natively executed by the processor without an x86 instruction set core 1814. This converted code is not likely to be the same as the alternative instruction set binary code 1810 because an instruction converter capable of this is difficult to make; however, the converted code will accomplish the general operation and be made up of instructions from the alternative instruction set. Thus, the instruction converter 1812 represents software, firmware, hardware, or a combination thereof that, through emulation, simulation or any other process, allows a processor or other electronic device that does not have an x86 instruction set processor or core to execute the x86 binary code 1806.

While FIGs 6 and 7 illustrate various operations according to one or more embodiments, it is to be understood that not all of the operations depicted in FIGs 6 and 7 are necessary for other embodiments. Indeed, it is fully contemplated herein that in other embodiments of the present disclosure, the operations depicted in FIGs 6 and 7, and/or other operations described herein, may be combined in a manner not specifically shown in any of the drawings, but still fully consistent with the present disclosure. Thus, claims directed to features and/or operations that are not exactly shown in one drawing are deemed within the scope and content of the present disclosure.

As used in this application and in the claims, a list of items joined by the term "and/or" can mean any combination of the listed items. For example, the phrase "A, B and/or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C. As used in this application and in the claims, a list of items joined by the term "at least one of" can mean any combination of the listed terms.
For example, the phrase "at least one of A, B or C" can mean A; B; C; A and B; A and C; B and C; or A, B and C.

As used in any embodiment herein, the terms "system" or "module" may refer to, for example, software, firmware and/or circuitry configured to perform any of the aforementioned operations. Software may be embodied as a software package, code, instructions, instruction sets and/or data recorded on non-transitory computer readable storage mediums. Firmware may be embodied as code, instructions or instruction sets and/or data that are hard-coded (e.g., nonvolatile) in memory devices.

As used in any embodiment herein, the term "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry such as computer processors comprising one or more individual instruction processing cores, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry, or future computing paradigms including, for example, massive parallelism, analog or quantum computing, hardware embodiments of accelerators such as neural net processors and non-silicon implementations of the above. The circuitry may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), desktop computers, laptop computers, tablet computers, servers, smartphones, etc.

Any of the operations described herein may be implemented in a system that includes one or more mediums (e.g., non-transitory storage mediums) having stored therein, individually or in combination, instructions that when executed by one or more processors perform the methods. Here, the processor may include, for example, a server CPU, a mobile device CPU, and/or other programmable circuitry. Also, it is intended that operations described herein may be distributed across a plurality of physical devices, such as processing structures at more than one different physical location. The storage medium may include any type of tangible medium, for example, any type of disk including hard disks, floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, Solid State Disks (SSDs), embedded multimedia cards (eMMCs), secure digital input/output (SDIO) cards, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software executed by a programmable control device.

The present disclosure is directed to systems and methods of performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry. The DMA control circuitry executes a modified instruction set architecture (ISA) that facilitates the broadcast distribution of data to a plurality of destination addresses in system memory circuitry. The broadcast instruction may include broadcast of a single data value to each destination address. The broadcast instruction may include broadcast of a data array to each destination address.
The DMA control circuitry may also execute a reduction instruction that facilitates the retrieval of data from a plurality of source addresses in system memory and the performance of one or more operations using the retrieved data. Since the DMA control circuitry, rather than the processor circuitry, performs the broadcast and reduction operations, system speed and efficiency are beneficially enhanced.

The following examples pertain to further embodiments. The following examples of the present disclosure may comprise subject material such as at least one device, a method, and at least one machine-readable medium for performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry.

According to example 1, there is provided a direct memory access (DMA) system. The system may include: DMA control circuitry coupled to memory circuitry, the DMA control circuitry to execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein, upon execution of the data broadcast instruction, the DMA control circuitry to: cause a data broadcast operation of a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein, upon execution of the array broadcast instruction, the DMA control circuitry to: cause an array broadcast operation of an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein, upon execution of the array reduction instruction, the DMA control circuitry to: perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

Example 2 may include elements of example 1 and the DMA control circuitry may further: generate the data broadcast instruction, the data broadcast instruction having a format that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of a memory address location containing the first data value; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; and a fourth data field that includes information indicative of the base memory address location.

Example 3 may include elements of any of examples 1 or 2 and the DMA control circuitry may further: generate the data broadcast instruction having a format that includes: a fifth data field that includes information representative of a memory address location containing a second data value; and perform a first compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.
Example 4 may include elements of any of examples 1 through 3 and the DMA control circuitry may further: perform a second compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.

Example 5 may include elements of any of examples 1 through 4 and the DMA control circuitry may further: generate the array broadcast instruction, the array broadcast instruction having a format that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and a fifth data field that includes information representative of the base memory address location.

Example 6 may include elements of any of examples 1 through 5 and the DMA control circuitry may further: generate the array reduction instruction, the array reduction instruction having a format that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location to receive the output value; a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and a fourth data field that includes information representative of the base memory address location.

Example 7 may include elements of any of examples 1 through 6 where, in each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction, the DMA control circuitry may further include: a 15-bit DMA type field that includes information indicative of the direct memory access type associated with the respective instruction.

Example 8 may include elements of any of examples 1 through 7 where, in the 15-bit DMA type field, the DMA control circuitry may further include: information indicative of an operation performed using the data in the second instruction and the data stored at the respective memory address.
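Before turning to the device examples, a minimal sketch of the data broadcast operation of examples 1 and 2 may help; the descriptor layout, field order, and element-granular stride are illustrative assumptions, since the examples specify which fields exist but not their binary format:

    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical rendering of the four data fields of example 2. */
    typedef struct {
        const int64_t *offset_ptr; /* field 1: pointer to the address offset */
        const int64_t *src;        /* field 2: address holding the value     */
        size_t         count;      /* field 3: number of destinations        */
        int64_t       *base;       /* field 4: base memory address location  */
    } data_broadcast_desc;

    static void dma_data_broadcast(const data_broadcast_desc *d) {
        int64_t value  = *d->src;
        int64_t stride = *d->offset_ptr; /* assumed to be in elements */
        for (size_t i = 0; i < d->count; i++)
            d->base[i * stride] = value; /* one write per destination */
    }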
According to example 9, there is provided an electronic device. The electronic device may include: processor circuitry; memory circuitry coupled to the processor circuitry; and DMA control circuitry coupled to the memory circuitry, the DMA control circuitry to execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein, upon execution of the data broadcast instruction, the DMA control circuitry to: cause a data broadcast operation of a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein, upon execution of the array broadcast instruction, the DMA control circuitry to: cause an array broadcast operation of an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein, upon execution of the array reduction instruction, the DMA control circuitry to: perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

Example 10 may include elements of example 9 where the memory circuitry may include dual memory operation circuitry including memory interface circuitry communicatively coupled to atomic execution circuitry.

Example 11 may include elements of any of examples 9 or 10 and the DMA control circuitry may further: generate the data broadcast instruction, the data broadcast instruction having a format that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of a memory address location containing the first data value; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; and a fourth data field that includes information indicative of the base memory address location.

Example 12 may include elements of any of examples 9 through 11 and the DMA control circuitry may further: generate the data broadcast instruction having a format that includes: a fifth data field that includes information representative of a memory address location containing a second data value; and perform a first compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.

Example 13 may include elements of any of examples 9 through 12 and the DMA control circuitry may further: perform a second compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.
Example 14 may include elements of any of examples 9 through 13 and the DMA control circuitry may further: generate the array broadcast instruction, the array broadcast instruction having a format that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and a fifth data field that includes information representative of the base memory address location.

Example 15 may include elements of any of examples 9 through 14 and the DMA control circuitry may further: generate the array reduction instruction, the array reduction instruction having a format that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location to receive the output value; a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and a fourth data field that includes information representative of the base memory address location.

Example 16 may include elements of any of examples 9 through 15 where, in each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction, the DMA control circuitry may further include: a 15-bit DMA type field that includes information indicative of the direct memory access type associated with the respective instruction.

Example 17 may include elements of any of examples 9 through 16 where, in the 15-bit DMA type field, the DMA control circuitry may further include: information indicative of an operation performed using the data in the second instruction and the data stored at the respective memory address.
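Likewise, a minimal sketch of the array reduction of examples 1 and 6; addition stands in for the combining operation, which the examples leave to the DMA type field, and the strided layout is again an assumption:

    #include <stdint.h>
    #include <stddef.h>

    /* Gather one value from each of `count` strided source locations and
     * combine them into a single output (field 2 of example 6). */
    static void dma_array_reduce(const int64_t *base, int64_t stride,
                                 size_t count, int64_t *out) {
        int64_t acc = 0;
        for (size_t i = 0; i < count; i++)
            acc += base[i * stride]; /* one read per source address */
        *out = acc;
    }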
According to example 18, there is provided a DMA broadcast method. The method may include: executing, by DMA control circuitry, at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction; wherein executing the data broadcast instruction comprises: broadcasting, by the DMA control circuitry, a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein executing the array broadcast instruction comprises: broadcasting, by the DMA control circuitry, an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein executing the array reduction instruction comprises: performing, by the DMA control circuitry, one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.

Example 19 may include elements of example 18 where broadcasting the first data value to each of the plurality of memory addresses may further include: generating, by the DMA control circuitry, a data broadcast instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of a memory address location containing the first data value; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information indicative of the base memory address location; and broadcasting the data broadcast instruction to each of the plurality of memory addresses.

Example 20 may include elements of any of examples 18 or 19 where generating the data broadcast instruction may further include: a fifth data field that includes information representative of a memory address location containing a second data value.

Example 21 may include elements of any of examples 18 through 20 and the method may additionally include: performing, by the DMA control circuitry, a compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.

Example 22 may include elements of any of examples 18 through 21 and the method may additionally include: performing, by the DMA control circuitry, a compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.
representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and a fifth data field that includes information representative of the base memory address location.Example 24 may include elements of any of examples 18 through 23 where performing the one or more operations to generate the output value using respective values stored at each of the plurality of memory address locations may further include: generating, by the DMA control circuitry, an array reduction instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location to receive the output value; a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and a fourth data field that includes information representative of the base memory address location.Example 25 may include elements of any of examples 18 through 24 and the method may additionally include: inserting, by the DMA control circuitry, a 15-bit DMA type field that includes information indicative of the direct memory access type in each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction.Example 26 may include elements of any of examples 18 through 25 where inserting the 15-bit DMA type field that includes information indicative of the direct memory access type may further include: inserting, by the DMA control circuitry, a 15-bit DMA type field that includes information indicative of an operation performed using the data in the second instruction and the data stored at the respective memory address.According to example 27, there is provided a non-transitory storage device. 
The non-transitory storage device includes instructions that, when executed by direct memory access (DMA) control circuitry, cause the DMA control circuitry to: execute at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction: wherein execution of the data broadcast instruction causes the DMA control circuitry to: broadcast a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein execution of the array broadcast instruction causes the DMA control circuitry to: broadcast an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein execution of the array reduction instruction causes the DMA control circuitry to: perform one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.Example 28 may include elements of example 27 where the instructions that cause the DMA control circuitry to broadcast the first data value to each of the plurality of memory addresses further cause the DMA control circuitry to: generate a data broadcast instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of a memory address location containing the first data value; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information indicative of the base memory address location; and broadcast the data broadcast instruction to each of the plurality of memory addresses.Example 29 may include elements of any of examples 27 or 28 where the instructions that cause the DMA control circuitry to generate the data broadcast instruction may further cause the DMA control circuitry to: generate a data broadcast instruction that includes: a fifth data field that includes information representative of a memory address location containing a second data value.Example 30 may include elements of any of examples 27 through 29 where the instructions may further cause the DMA control circuitry to: perform a first compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.Example 31 may include elements of any of examples 27 through 30 where the instructions may further cause the DMA control circuitry to: perform a second compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.Example 32 may include elements of any of examples 27 through 31 where 
the instructions that cause the DMA control circuitry to broadcast the array that includes the defined number of elements to each of the plurality of memory addresses may further cause the DMA control circuitry to: generate an array broadcast instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and a fifth data field that includes information representative of the base memory address location.Example 33 may include elements of any of examples 27 through 32 where the instructions that cause the DMA control circuitry to perform the one or more operations to generate the output value using respective values stored at each of the plurality of memory address locations may further cause the DMA control circuitry to: generate an array reduction instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location to receive the output value; a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and a fourth data field that includes information representative of the base memory address location.Example 34 may include elements of any of examples 27 through 33 where the instructions may further cause the DMA circuitry to: insert into the instruction a 15-bit DMA type field that includes information indicative of the direct memory access type in each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction.Example 35 may include elements of any of examples 27 through 34 where the instructions that cause the DMA control circuitry to insert into the instruction the 15-bit DMA type field that includes information indicative of the direct memory access type may further cause the DMA control circuitry to: insert into the instruction a 15-bit DMA type field that includes information indicative of an operation performed using the data in the second instruction and the data stored at the respective memory address.According to example 36, there is provided a DMA broadcast system. 
The system may include: means for executing at least one of: a data broadcast instruction, an array broadcast instruction, or an array reduction instruction: wherein the means for executing the data broadcast instruction comprises: means for broadcasting a first data value to each of a plurality of memory addresses that begin at a base memory address location included in the data broadcast instruction and increment by a defined memory address offset also included in the data broadcast instruction; wherein the means for executing the array broadcast instruction comprises: means for broadcasting an array that includes a defined number of elements to each of a plurality of memory addresses that begin at a base memory address location included in the array broadcast instruction and increment by a defined memory address offset also included in the array broadcast instruction; and wherein the means for executing the array reduction instruction comprises: means for performing one or more operations to generate an output value using respective values stored at each of a plurality of memory address locations, the plurality of memory address locations including a base memory address location included in the array reduction instruction and a defined memory address offset included in the array reduction instruction.Example 37 may include elements of example 36 where the means for broadcasting the first data value to each of the plurality of memory addresses may further include: means for generating a data broadcast instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of a memory address location containing the first data value; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information indicative of the base memory address location; and means for broadcasting the data broadcast instruction to each of the plurality of memory addresses.Example 38 may include elements of any of examples 36 or 37 where the means for generating the data broadcast instruction may further include: means for generating a data broadcast instruction having a fifth data field that includes information representative of a memory address location containing a second data value.Example 39 may include elements of any of examples 36 through 38, and the system may further include: means for performing a first compare-overwrite operation, such that if existing data at respective ones of each of the plurality of memory addresses matches the second data value, the first data value replaces the existing data at the respective memory address.Example 40 may include elements of any of examples 36 through 39, and the system may further include: means for performing a second compare-overwrite operation, such that if the existing data at respective ones of each of the plurality of memory addresses differs from the second data value, the existing data is retained at the respective memory address.Example 41 may include elements of any of examples 36 through 40 where the means for broadcasting the array that includes the defined number of elements to each of the plurality of memory addresses may further include: means for generating an array broadcast instruction that includes: a first data field that includes information 
representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location containing the elements included in the array broadcast to each of the plurality of memory addresses; a third data field that includes information representative of a defined number of memory addresses included in the plurality of memory addresses; a fourth data field that includes information representative of the defined number of elements included in the array broadcast to each of the plurality of memory addresses; and a fifth data field that includes information representative of the base memory address location.Example 42 may include elements of any of examples 36 through 41 where the means for performing the one or more operations to generate the output value using respective values stored at each of the plurality of memory address locations may further include: means for generating an array reduction instruction that includes: a first data field that includes information representative of a pointer to a memory address location containing the defined memory address offset; a second data field that includes information representative of the memory address location to receive the output value; a third data field that includes information representative of a number of memory addresses included in the plurality of memory address locations that contain a value used in the one or more operations; and a fourth data field that includes information representative of the base memory address location.Example 43 may include elements of any of examples 36 through 42, and the system may further include: means for inserting a 15-bit DMA type field that includes information indicative of the direct memory access type in each of the data broadcast instruction, the array broadcast instruction, and the array reduction instruction.Example 44 may include elements of any of examples 36 through 43 where the means for inserting the 15-bit DMA type field that includes information indicative of the direct memory access type may further include means for inserting a 15-bit DMA type field that includes information indicative of an operation performed using the data in the second instruction and the data stored at the respective memory address.According to example 45, there is provided a system for performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry, the system being arranged to perform the method of any of examples 18 through 26.According to example 46, there is provided a chipset arranged to perform the method of any of examples 18 through 26.According to example 47, there is provided at least one non-transitory storage device that includes a plurality of instructions that, in response to being executed on a computing device, cause the computing device to carry out the method according to any of examples 18 through 26.According to example 48, there is provided a device configured for performing one or more broadcast or reduction operations using direct memory access (DMA) control circuitry, the device being arranged to perform the method of any of examples 18 through 26.Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. 
Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
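For illustration only, the instruction formats and semantics enumerated in the examples above can be sketched in C. The struct layout, field widths, and names below (e.g., dma_data_broadcast_t, offset_ptr) are assumptions made for this sketch, not a format taken from any actual implementation; only the five data fields, the DMA type field, and the compare-overwrite and reduction behaviors come from the text, and memory is modeled as a simple word-addressed array:

#include <stdint.h>

/* Hypothetical layout for the data broadcast instruction of examples
 * 19/28/37. The text calls for a 15-bit DMA type field; a uint16_t is
 * used here with one bit to spare. */
typedef struct {
    uint16_t dma_type;     /* DMA type field: access type and, optionally,
                              the operation applied at each address */
    uint64_t offset_ptr;   /* first field: pointer to the location holding
                              the defined memory address offset */
    uint64_t value_addr;   /* second field: location holding the first
                              data value to broadcast */
    uint64_t num_targets;  /* third field: number of target addresses */
    uint64_t base_addr;    /* fourth field: base memory address location */
    uint64_t compare_addr; /* optional fifth field: location holding the
                              second data value for compare-overwrite */
} dma_data_broadcast_t;

/* Broadcast with optional compare-overwrite (examples 21-22): write the
 * first value at each strided target, or only where the existing data
 * matches the second value; non-matching data is retained. */
static void dma_data_broadcast(uint64_t *mem, const dma_data_broadcast_t *in,
                               int do_compare)
{
    uint64_t offset = mem[in->offset_ptr]; /* offset is fetched indirectly */
    uint64_t value  = mem[in->value_addr];
    uint64_t match  = do_compare ? mem[in->compare_addr] : 0;
    for (uint64_t i = 0; i < in->num_targets; i++) {
        uint64_t addr = in->base_addr + i * offset;
        if (!do_compare || mem[addr] == match)
            mem[addr] = value;
    }
}

/* Array reduction (examples 24/33/42): combine the values found at each
 * strided address into a single output value. Addition stands in for
 * whichever operation the DMA type field selects. */
static void dma_array_reduce(uint64_t *mem, uint64_t offset_ptr,
                             uint64_t out_addr, uint64_t count,
                             uint64_t base_addr)
{
    uint64_t offset = mem[offset_ptr];
    uint64_t acc = 0;
    for (uint64_t i = 0; i < count; i++)
        acc += mem[base_addr + i * offset];
    mem[out_addr] = acc;
}

An array broadcast would follow the same pattern as dma_data_broadcast, copying the defined number of array elements to each strided target instead of a single value.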
Accurate determination of gate dielectric thickness is required to produce high-reliability and high-performance ultra-thin gate dielectric semiconductor devices. Large area gate dielectric capacitors with ultra-thin gate dielectric layers suffer from high gate leakage, which prevents the accurate measurement of gate dielectric thickness. Accurate measurement of gate dielectric thickness of smaller area gate dielectric capacitors is hindered by the relatively large parasitic capacitance of the smaller area capacitors. The formation of first and second dummy structures on a wafer allows the accurate determination of gate dielectric thickness. First and second dummy structures are formed that are substantially similar to the gate dielectric capacitors except that the first dummy structures are formed without the second electrode of the capacitor and the second dummy structures are formed without the first electrode of the capacitor structure. The capacitance, and therefore thickness, of the gate dielectric capacitor is determined by subtracting the parasitic capacitances measured at the first and second dummy structures.
1. A wafer comprising:a base layer; an active layer formed on the base layer; a gate dielectric layer formed on the active layer; a conductive layer formed on the gate dielectric layer; and a plurality of isolation regions formed in said wafer, said wafer being divided into a plurality of first portions, second portions, and third portions; said first portions comprise gate dielectric capacitors, said gate dielectric capacitors comprise a first electrode layer, an insulating layer, and a second electrode layer; wherein the first electrode layer is formed from said active layer, the insulating layer is formed from said gate dielectric layer, and the second electrode layer is formed from said conductive layer; said second portions comprise first dummy structures, said first dummy structures comprise a first electrode layer and an insulating layer; wherein the first electrode layer of the first dummy structures is formed from said active layer and the insulating layer of the first dummy structures is formed from said gate dielectric layer, wherein said second portion does not contain said conductive layer; and said third portions comprise second dummy structures, said second dummy structures comprise an insulating layer and a second electrode layer, wherein the insulating layer of the second dummy structures is formed from an isolation region and the second electrode layer of the second dummy structures is formed from said conductive layer, wherein said third portion does not contain said active layer. 2. The wafer of claim 1, wherein the isolation regions are shallow trench isolation regions.3. The wafer of claim 1, wherein said conductive layer comprises polysilicon.4. The wafer of claim 1, wherein said active layer comprises doped silicon.5. The wafer of claim 1, further comprising an interconnect layer formed over said conductive layer.6. The wafer of claim 5, wherein said interconnect layer comprises a metal.7. The wafer of claim 1, further comprising a silicon electrode contacting an isolation region.8. The wafer of claim 1, wherein said gate dielectric capacitor is a transistor.9. 
A wafer comprising:a base layer; an active layer formed on the base layer; a gate dielectric layer formed on the active layer; a conductive layer formed on the gate dielectric layer; and a plurality of isolation regions formed in said wafer, said wafer being divided into a plurality of first portions, second portions, and third portions; said first portions comprise gate dielectric capacitors, said gate dielectric capacitors comprise a first electrode layer, an insulating layer, and a second electrode layer, wherein the first electrode layer is formed from said active layer, the insulating layer is formed from said gate dielectric layer, and the second electrode layer is formed from said conductive layer, wherein said active layer comprises source/drain regions; said second portions comprise first dummy structures, said first dummy structures comprise a first electrode layer and an insulating layer; wherein the first electrode layer of the first dummy structures is formed from said active layer and the insulating layer of the first dummy structures is formed from said gate dielectric layer, wherein said second portion does not contain said conductive layer; and said third portions comprise second dummy structures, said second dummy structures comprise an insulating layer and a second electrode layer; wherein the insulating layer of the second dummy structures is formed from an isolation region and the second electrode layer of the second dummy structures is formed from said conductive layer, wherein said third portion does not contain said active layer. 10. The wafer of claim 7, wherein said silicon electrode is an electrically isolated polysilicon electrode that contacts an isolation region of the second dummy structure.11. A wafer comprising:a base layer; an active layer formed on the base layer; a gate dielectric layer formed on the active layer; a conductive layer formed on the gate dielectric layer; and a plurality of isolation regions formed in said wafer, said wafer being divided into a plurality of first portions, second portions, and third portions; said first portions comprise gate dielectric capacitors, said gate dielectric capacitors comprise a first electrode layer, an insulating layer, and a second electrode layer; wherein the first electrode layer is formed from said active layer, the insulating layer is formed from said gate dielectric layer, and the second electrode layer is formed from said conductive layer, wherein said gate dielectric layer is located between said first electrode layer and said second electrode layer; said second portions comprise first dummy structures, said first dummy structures comprise a first electrode layer and an insulating layer; wherein the first electrode layer of the first dummy structures is formed from said active layer and the insulating layer of the first dummy structures is formed from said gate dielectric layer, wherein said second portion does not contain said conductive layer; and said third portions comprise second dummy structures, said second dummy structures comprise an insulating layer and a second electrode layer, wherein the insulating layer of the second dummy structures is formed from an isolation region and the second electrode layer of the second dummy structures is formed from said conductive layer, wherein said third portion does not contain said active layer.
TECHNICAL FIELDThe present invention relates to the field of manufacturing semiconductor devices and, more particularly, to an improved method of measuring gate dielectric thickness and parasitic capacitance.BACKGROUND OF THE INVENTIONAn important aim of ongoing research in the semiconductor industry is the reduction in the dimensions of semiconductor devices. Planar transistors, such as metal oxide semiconductor field effect transistors (MOSFETs), are particularly well suited for use in high-density integrated circuits. As the size of the MOSFET and other active devices decreases, the dimensions of the gate electrodes and gate dielectric layers decrease correspondingly. Tight control of the gate dielectric thickness is necessary to manufacture reduced-size, high-reliability, high-speed transistors.Gate dielectric capacitors are commonly used in semiconductor devices. Common gate dielectric capacitors found in a semiconductor device include transistors, such as MOSFETs. In order to improve gate dielectric capacitor performance, ultra-thin gate dielectric layers with a thickness below about 25 Å are coupled with large area capacitors. Large area capacitors are typically those with a capacitor area greater than 1000 Å².Gate dielectric thickness is an important parameter in gate dielectric capacitor performance. If the gate dielectric layer is too thin, short-circuiting is a problem. If the gate dielectric layer is too thick, then the device speed will be too slow.The thickness of the gate dielectric layer can be determined by measuring the capacitance of the gate dielectric capacitor. The thickness of the gate dielectric layer is related to the capacitance by the following formula: t = k/C, wherein t is the thickness of the gate dielectric layer, k is the dielectric constant of the gate dielectric layer, and C is the capacitance of the gate dielectric capacitor.The capacitance of large area, ultra-thin gate dielectric capacitors cannot be accurately measured directly. The large area gate dielectric capacitors tend to suffer from high leakage current through the gate. The capacitance of small area gate dielectric capacitors also cannot be directly measured with accuracy. Gate leakage does not appreciably hinder measuring the capacitance of small area gate dielectric capacitors; rather, parasitic capacitance interferes with accurate gate dielectric capacitance measurements in small area gate dielectric capacitors. As the area of the gate dielectric capacitor is reduced, the proportion of the total capacitance due to the parasitic capacitance associated with the wiring structures increases.The term gate dielectric capacitors, as used herein, is not to be limited to the specifically disclosed embodiments. Gate dielectric capacitors, as used herein, include a wide variety of electronic devices in addition to field effect transistors.The term semiconductor devices, as used herein, is not limited to the specifically disclosed embodiments. Semiconductor devices, as used herein, include a wide variety of electronic devices including flip chips, flip chip/package assemblies, transistors, capacitors, microprocessors, random access memories, etc. In general, semiconductor devices refer to any electrical device comprising semiconductors.SUMMARY OF THE INVENTIONThere exists a need in the semiconductor device art to accurately measure the gate dielectric thickness of gate dielectric capacitors. There exists a need in this art to accurately measure the capacitance of gate dielectric capacitors. 
There exists a need in this art to subtract the effects of parasitic capacitance from the overall measured capacitance to obtain the actual capacitance of the gate dielectric capacitor.These and other needs are met by embodiments of the present invention, which provide a wafer comprising a base layer and an active layer formed on the base layer. A gate dielectric layer is formed on the active layer and a conductive layer is formed on the gate dielectric layer. A plurality of isolation regions are formed in the wafer and the wafer is divided into a plurality of first portions, second portions, and third portions. The first portions comprise gate dielectric capacitors, wherein the gate dielectric capacitor comprises a first electrode layer formed by the active layer, an insulating layer formed by the gate dielectric layer, and a second electrode layer formed by the conductive layer. The second portions comprise first dummy structures, wherein the first dummy structures comprise a first electrode layer formed by the active layer and an insulating layer formed by the gate dielectric layer. The third portions comprise second dummy structures, wherein the second dummy structures comprise an insulating layer formed by an isolation region and a second electrode layer formed by the conductive layer.The earlier stated needs are also met by other embodiments of the instant invention, which provide a method of measuring the gate dielectric thickness of a gate dielectric capacitor, comprising the steps of providing a wafer comprising a plurality of gate dielectric capacitors, a plurality of first dummy structures, and a plurality of second dummy structures formed on the wafer. The capacitance of one of the gate dielectric capacitors is measured. The capacitances of one of the first dummy structures and one of the second dummy structures are also measured. The capacitances of the first dummy structure and the second dummy structure are subtracted from the capacitance of the gate dielectric capacitor to obtain a difference in capacitance, and the gate dielectric thickness is determined using the difference in capacitance and the known dielectric constant of the gate dielectric.The earlier stated needs are further met by other embodiments of the instant invention, which provide a method of manufacturing a wafer comprising a plurality of gate dielectric capacitors, first dummy structures, and second dummy structures. The method comprises the steps of providing a wafer comprising a base layer, an active layer formed on the base layer, a gate dielectric layer formed on the active layer, a plurality of isolation regions formed on the wafer, and a conductive layer formed on the gate dielectric layer and the isolation regions. Portions of the conductive layer are removed where the first dummy structures are formed and an intermetal dielectric layer is formed over the conductive layer. Openings are formed in the intermetal dielectric layer to expose the conductive layer, in the regions where the gate dielectric capacitors are formed; the isolation region, in the regions where the first dummy structures are formed; and the conductive layer, in the regions where the second dummy structures are formed. An interconnect layer is formed over the intermetal dielectric layer, filling the openings in the intermetal dielectric layer.This invention addresses the needs for an improved method of measuring the capacitance and gate dielectric thickness of ultra-thin gate dielectric capacitors. 
The present invention eliminates parasitic capacitance from the gate dielectric capacitance measurements, enabling accurate measurement of the gate dielectric capacitor capacitance.The foregoing and other features, aspects, and advantages of the present invention will become apparent in the following detailed description of the present invention when taken in conjunction with the accompanying drawings.BRIEF DESCRIPTION OF THE DRAWINGSFIGS. 1-5 schematically illustrate the formation of a wafer comprising gate dielectric capacitors, first dummy structures, and second dummy structures according to an embodiment of the present invention.FIG. 6 is a schematic view of the second dummy structure.FIG. 7 is a schematic view of a conventional FET.FIGS. 8 and 9 schematically illustrate the formation of a wafer comprising shallow trench isolation regions and various different gate oxide layers.DETAILED DESCRIPTION OF THE INVENTIONThe present invention enables the accurate measurement of the gate dielectric thickness in ultra-thin gate dielectric capacitors. A wafer is provided with dummy structures, and a method is provided for eliminating parasitic capacitance from gate dielectric capacitance measurements. The present invention uses two different types of dummy structures so that the parasitic capacitance can be measured and subtracted from the total capacitance of gate dielectric capacitors. The two types of dummy structures that are formed are substantially similar to gate dielectric capacitor devices formed on the wafer, with the exception that the first type of dummy structure does not include one of the capacitor electrodes and the second type of dummy structure does not include the other capacitor electrode.The invention will be described in conjunction with the formation of a gate dielectric capacitor and first and second dummy structures, as shown in the accompanying drawings. However, this is exemplary only, as the claimed invention is not limited to the formation of the specific device illustrated in the drawings.FIG. 1 illustrates a wafer 10 comprising a plurality of first portions 22, second portions 24, and third portions 26. The wafer 10 is a silicon wafer about 100 µm thick. The wafer 10 comprises a base layer 12 of silicon, an active layer 14 of doped silicon, and a gate dielectric layer 16 formed over active layer 14. In certain embodiments of the instant invention, the gate dielectric layer 16 is a gate oxide formed, for example, by chemical vapor deposition (CVD) or thermal oxidation. In certain alternative embodiments, the gate dielectric layer 16 comprises a nitride dielectric or a high-k dielectric, such as zirconium oxide, hafnium oxide, and stacked nitride. A plurality of isolation regions 18 are formed in the wafer 10 to separate and isolate the various device regions of the wafer. In this example, the gate dielectric capacitor will be formed in the first portion 22, the first dummy structure will be formed in the second portion 24, and the second dummy structure will be formed in the third portion 26. A typical wafer of the present invention comprises a large number of gate dielectric capacitors, depending on the size and specific type of gate dielectric capacitor. As an example, about 5 to about 30 first dummy structures and second dummy structures are formed in various locations throughout the wafer 10, although different numbers of dummy structures may be formed.Isolation regions 18 can be formed by local oxidation of silicon (LOCOS) or shallow trench isolation (STI) techniques. 
The isolation regions 18, as illustrated in FIG. 1, are formed by STI techniques. STI regions are typically formed by etching trenches in the wafer, forming a thermal oxide liner layer, and then filling the trench with an oxide by CVD. After the trench is filled, the wafer is planarized, typically by chemical-mechanical polishing (CMP).Subsequent to forming the isolation regions 18, conductive layer 20 is deposited on wafer 10. Conductive layer 20 is formed from epitaxially deposited polysilicon, in exemplary embodiments.A capacitor comprises a first electrode and a second electrode with an insulating layer separating the two electrodes. In the present invention, the first electrode of the gate dielectric capacitor and the first dummy structure is formed from the active layer 14. The insulating layer is formed from the gate dielectric layer 16. In certain embodiments of the instant invention, the insulating layer is formed from the gate dielectric layer 16 and/or the isolation region 18, as both the gate dielectric layer 16 and the isolation region 18 are formed from silicon oxide. The second electrode in the gate dielectric capacitor and the second dummy structure is formed from the conductive layer 20.Anisotropic etching is performed on wafer 10 to form the three distinct portions 22, 24, 26. As shown in FIG. 2, the conductive layer 20 is removed from second portion 24 and portions of the conductive layer 20 are removed from third portion 26.Subsequent to etching conductive layer 20, an intermetal dielectric layer 28 is deposited over wafer 10, as shown in FIG. 3. The intermetal dielectric layer can be a conventional dielectric, such as CVD silicon oxide, spin-on-glass (SOG), or CVD silicon nitride.As illustrated in FIG. 4, the intermetal dielectric layer 28 is patterned by conventional photolithographic techniques and anisotropically etched to form openings 30 in first portion 22, exposing conductive layer 20; in second portion 24, exposing isolation region 18; and in third portion 26, exposing conductive layer 20.An interconnect layer 32 is deposited over the intermetal dielectric layer 28 and patterned by conventional photolithographic techniques to form gate dielectric capacitor 34, first dummy structure 36, and second dummy structure 38, as illustrated in FIG. 5. Interconnect layer 32 is typically formed from a conductive material, such as a metal. Suitable conductive materials include aluminum, tungsten, copper, and polysilicon, as examples.The present invention allows gate dielectric thickness to be monitored and measured before the wafer is cut into individual chips. Capacitance is measured across the gate dielectric layer 16 and/or isolation region 18. As the second dummy structure 38 does not contain an active layer, an electrode 39, isolated from active layer 14, is attached to the STI region 18 for the purposes of measuring capacitance. The electrode 39 is formed from a conventional conductive material, such as polysilicon. As shown in a side view along the length of the second dummy structure in FIG. 6, the electrode 39 is a lead extending from the end of the isolation region 18 in the exemplary embodiment.The first dummy structure 36, termed a "no poly dummy," does not contain a polysilicon electrode. Instead, the interconnect layer 32 directly contacts the oxide layers 16, 18. The second dummy structure 38, termed a "no active dummy," does not contain the doped silicon active layer. 
Instead, a polysilicon electrode 39 electrically isolated from the active layer 14 is in contact with STI region 18 to measure the capacitance.The capacitances of the first dummy structure 36 and second dummy structure 38 are added together and then subtracted from the overall capacitance measured at the gate dielectric capacitor 34. The difference in capacitance is used to calculate the thickness of gate dielectric layer 16. Wafers with the proper gate dielectric thickness undergo further processing to form the desired semiconductor devices, while wafers with improper gate dielectric thickness can be rejected or reworked.A field effect transistor (FET) is a typical semiconductor device comprising a gate dielectric capacitor. Field effect transistors can have their gate dielectric thicknesses accurately measured according to this invention. As illustrated in FIG. 7, a FET 40 comprises a base layer 42 of a silicon wafer that corresponds to the base layer 12 of the gate dielectric capacitor 34 illustrated in FIG. 5. The FET 40 further comprises STI regions 44, an intermetal dielectric layer 52, and an interconnect layer 54 in contact with a gate electrode 50. The gate electrode 50 is formed over a gate oxide layer 46, which is formed over the channel 58. The FET 40 further comprises source/drain regions 48 and sidewall spacers 56. The source/drain regions 48 and channel region 58 correspond to the active layer 14 (first electrode layer) of the gate dielectric capacitor 34. The gate oxide layer 46 of the FET 40 corresponds to the gate dielectric layer 16 of the gate dielectric capacitor 34, and the gate electrode 50 corresponds to the conductive layer 20 (second electrode layer) of the gate dielectric capacitor 34. When measuring the capacitance of a typical FET, the first and second dummy structures would substantially correspond to the FET structure 40, with the exception that the first dummy structure would not contain gate electrode 50 and the second dummy structure would not contain source/drain regions 48 and channel region 58.In other embodiments, different gate dielectric layers are formed on a wafer comprising shallow trench isolation regions. FIG. 8 illustrates a wafer 150 comprising a silicon base layer 152 and a plurality of shallow trench isolation regions 154. A gate oxide layer 156 is formed on the silicon base layer 152 by thermal oxidation of silicon layer 152 or by silicon oxide deposition techniques. After the formation of the gate oxide layer 156, a mask is formed over the wafer 150 and selected first portions of the gate oxide layer 156 are removed by etching. A first alternate dielectric is subsequently deposited where the first portions of the gate oxide layer 156 were removed. Alternate dielectrics include high-k dielectrics, nitride stack dielectrics, and other known dielectrics. After depositing the first alternate dielectric, the wafer 150 is again masked and second portions of the gate oxide layer 156 are removed by etching. A second alternate dielectric is deposited where the second portions of the gate oxide layer were removed. Masking, etching, and alternate dielectric deposition are repeated until a desired number of different types of dielectric layers are deposited. An exemplary embodiment comprising different types of gate dielectrics is shown in FIG. 9, which comprises a high-k dielectric layer 160, a nitride stack dielectric layer 158, and a gate oxide layer 156.The wafer and method of the present invention provide improved high-reliability semiconductor devices. The wafer and methods of the present invention allow the accurate determination of gate dielectric thickness to ensure that semiconductor devices with high-performance and high-reliability capabilities are produced.The embodiments illustrated in the instant disclosure are for illustrative purposes only. They should not be construed to limit the scope of the claims. As is clear to one of ordinary skill in the art, the instant disclosure encompasses a wide variety of embodiments not specifically illustrated herein.
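For illustration, the extraction described above reduces to a short calculation. The sketch below assumes the document's simplified relation t = k/C (with k folding in the dielectric constant and capacitor area) and uses placeholder values; the function name and all numbers are assumptions for this sketch, not measured data:

#include <stdio.h>

/* Thickness extraction per the method above: add the two dummy-structure
 * (parasitic) capacitances, subtract them from the capacitance measured
 * at the gate dielectric capacitor 34, then apply t = k/C. */
static double gate_dielectric_thickness(double c_total,     /* capacitor 34 */
                                        double c_no_poly,   /* dummy 36 */
                                        double c_no_active, /* dummy 38 */
                                        double k)           /* effective constant */
{
    double c_gate = c_total - (c_no_poly + c_no_active); /* intrinsic C */
    return k / c_gate;
}

int main(void)
{
    /* Placeholder values chosen only to exercise the arithmetic. */
    double t = gate_dielectric_thickness(1.30e-12, 0.10e-12, 0.05e-12, 2.5e-21);
    printf("extracted thickness: %.3g m\n", t); /* ~2.17e-9 m here */
    return 0;
}

A wafer whose extracted thickness falls outside the target range would then be rejected or reworked, as described above.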
Apparatus and methods are disclosed, including memory devices and systems. Example memory devices, systems and methods include a buffer interface to translate high speed data interactions on a host interface side into slower, wider data interactions on a DRAM interface side. The slower and wider DRAM interface may be configured to substantially match the capacity of the narrower, higher speed host interface. In some examples, the buffer interface may be configured to provide multiple sub-channel interfaces each coupled to one or more regions within the memory structure and configured to facilitate data recovery in the event of a failure of some portion of the memory structure. Selected example memory devices, systems and methods include an individual DRAM die, or one or more stacks of DRAM dies coupled to a buffer die.
WHAT IS CLAIMED IS:1. A memory device, comprising:a buffer die coupled to a substrate, the buffer die including a host device interface, and a DRAM interface;one or more DRAM dies supported by the substrate;multiple wire bond interconnections between the DRAM interface of the buffer die and the one or more DRAM dies; and circuitry in the buffer die, configured to operate the host interface at a first data speed, and to operate the DRAM interface at a second data speed, slower than the first data speed.2. The memory device of claim 1, wherein the stack of DRAM dies includes stair-stepped stacked DRAM dies.3. The memory device of claim 2, wherein the stack of DRAM dies includes more than one step direction within a single stack.4. The memory device of claim 1, wherein two stacks of DRAM dies are coupled to the substrate.5. The memory device of claim 1, wherein the buffer die is located at least partially underneath the one or more DRAM dies.6. The memory device of claim 5, wherein the buffer die is located at least partially underneath a portion of two different stacks of DRAM dies.7. The memory device of claim 1, further including solder balls on a backside of the substrate.8. The memory device of claim 7, further including a motherboard coupled to the solder balls on the backside of the substrate, and a processor coupled to the motherboard, wherein the processor is in communication with the host device interface.9. The memory device of claim 1, wherein the circuitry in the buffer die is configured to operate using a pulse amplitude modulation (PAM) protocol at the host interface or the DRAM interface, or both.10. A memory device, comprising:a buffer die coupled to a substrate, the buffer die including a host device interface, and a DRAM interface;a stack of vertically aligned DRAM dies supported by the substrate; multiple through silicon via (TSV) interconnections coupling multiple die in the stack of vertically aligned DRAM dies with the buffer die; and circuitry in the buffer die, configured to operate the host interface at a first data speed, and to operate the DRAM interface at a second data speed, slower than the first data speed.11. The memory device of claim 10, wherein the buffer die is located at least partially underneath the stack of vertically aligned DRAM dies.12. The memory device of claim 10, wherein two stacks of vertically aligned DRAM dies are coupled to the substrate.13. A method of operating a memory system, comprising:receiving command/address (CA) signals and corresponding data (DQ) signals for a first memory channel at a first memory interface;reallocating the received CA signals and the corresponding DQ signals to at least first and second sub-channels;wherein each sub-channel DRAM interface carries a greater number of DQ signals than the first memory interface, and clocks the DQ signals at a slower speed than the first memory interface; and communicating the CA signals and DQ signals of each sub-channel DRAM interface through wirebond connections to one or more die in a stack of multiple memory die.14. The method of claim 13, wherein the reallocating is performed by a buffer assembly supported by a substrate, and wherein the stack of multiple memory die is supported by the substrate.15. The method of claim 13, wherein communicating the CA signals and DQ signals includes communicating using a pulse amplitude modulation (PAM) protocol.16. 
The method of claim 13, wherein communicating the CA signals and DQ signals includes communicating through wirebond connections to stair-stepped stacked DRAM dies.17. A method of operating a memory device, comprising:exchanging data between a processor and a buffer die at a first data speed;exchanging data between the buffer die and a stack of vertically aligned DRAM dies at a second speed, slower than the first speed;wherein exchanging data between the buffer die and the stack of vertically aligned DRAM dies includes exchanging data through multiple through silicon vias (TSVs) in the stack of vertically aligned DRAM dies.18. The method of claim 17, wherein exchanging data between the buffer die and a stack of DRAM dies includes exchanging data using a pulse amplitude modulation (PAM) protocol.19. The method of claim 17, wherein exchanging data between a processor and a buffer die includes exchanging data over a first number of data paths; and wherein exchanging data between the buffer die and a stack of vertically aligned DRAM dies includes exchanging data over a second number of data paths greater than the first number of data paths. 20. The method of claim 17, wherein exchanging data between the buffer die and the stack of vertically aligned DRAM dies includes exchanging data from a buffer die that is located at least partially underneath the vertically aligned DRAM dies.
MEMORY DEVICE INTERFACE AND METHODPRIORITY APPLICATIONS[0001] This application claims the benefit of priority to U.S. Provisional Application Serial Number 62/809,281, filed February 22, 2019, and to U.S. Provisional Application Serial Number 62/816,731, filed March 11, 2019, and to U.S. Provisional Application Serial Number 62/826,422, filed March 29, 2019, all of which are incorporated herein by reference in their entirety.BACKGROUND[0002] The present description relates generally to example structures and methods for reallocating a first memory interface to multiple respective second memory interfaces for interfacing with one or more memory devices; and more particularly relates to memory systems including a buffer (in some examples, a buffer die or buffer assembly), operable to perform such reallocation. In some examples, the buffer can be configured to perform the reallocation to allow the second memory interfaces to be wider, and to operate at a slower data rate, than the first interface. The described buffer may be used in multiple configurations of memory interfaces, and may be used with a variety of memory structures, including individual memory devices, any of multiple configurations of stacked memory devices, or other arrangements of multiple memory devices.[0003] Memory devices are semiconductor circuits that provide electronic storage of data for a host system (e.g., a computer or other electronic device). Memory devices may be volatile or non-volatile. Volatile memory requires power to maintain data, and includes devices such as random-access memory (RAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), or synchronous dynamic random-access memory (SDRAM), among others. Non-volatile memory can retain stored data when not powered, and includes devices such as flash memory, read-only memory (ROM), electrically erasable programmable ROM (EEPROM), erasable programmable ROM (EPROM), resistance variable memory, such as phase change random access memory (PCRAM), resistive random-access memory (RRAM), or magnetoresistive random access memory (MRAM), among others.[0004] Host systems typically include a host processor, a first amount of main memory (e.g., often volatile memory, such as DRAM) to support the host processor, and one or more storage systems (e.g., often non-volatile memory, such as flash memory) that provide additional storage to retain data in addition to or separate from the main memory.[0005] A storage system, such as a solid-state drive (SSD), can include a memory controller and one or more memory devices, including a number of dies or logical units (LUNs). In certain examples, each die can include a number of memory arrays and peripheral circuitry thereon, such as die logic or a die processor. The memory controller can include interface circuitry configured to communicate with a host device (e.g., the host processor or interface circuitry) through a communication interface (e.g., a bidirectional parallel or serial communication interface). The memory controller can receive commands or operations from the host system in association with memory operations or instructions, such as read or write operations to transfer data (e.g., user data and associated integrity data, such as error data or address data, etc.) 
between the memory devices and the host device, erase operations to erase data from the memory devices, drive management operations (e.g., data migration, garbage collection, block retirement), etc.[0006] It is desirable to provide improved main memory, such as DRAM memory. Features of improved main memory that are desired include, but are not limited to, higher capacity, higher speed, and reduced cost.BRIEF DESCRIPTION OF THE DRAWINGS[0007] In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views. Like numerals having different letter suffixes may represent different instances of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed in the present document.[0008] FIG. 1A illustrates a system including a memory device in accordance with some example embodiments.[0009] FIG. 1B illustrates another system including a memory device in accordance with some example embodiments.[0010] FIG. 2 illustrates an example memory device in accordance with some example embodiments.[0011] FIG. 3 illustrates a buffer die in block diagram form in accordance with some example embodiments.[0012] FIG. 4 illustrates another memory device in accordance with some example embodiments.[0013] FIG. 5A illustrates another memory device in accordance with some example embodiments.[0014] FIG. 5B illustrates another memory device in accordance with some example embodiments.[0015] FIG. 5C illustrates another memory device in accordance with some example embodiments.[0016] FIG. 5D illustrates another memory device in accordance with some example embodiments.[0017] FIG. 6 illustrates another memory device in accordance with some example embodiments.[0018] FIG. 7 illustrates another memory device in accordance with some example embodiments.[0019] FIG. 8A illustrates another memory device in accordance with some example embodiments.[0020] FIG. 8B illustrates another memory device in accordance with some example embodiments.[0021] FIG. 9A illustrates a DRAM die configuration in accordance with some example embodiments.[0022] FIG. 9B illustrates another DRAM die configuration in accordance with some example embodiments. [0023] FIG. 9C illustrates another DRAM die configuration in accordance with some example embodiments.[0024] FIG. 10A illustrates an example method flow diagram in accordance with some example embodiments.[0025] FIG. 10B illustrates another example method flow diagram in accordance with some example embodiments.[0026] FIG. 11A illustrates an example embodiment of an alternative configuration and functionality for a memory system.[0027] FIG. 11B illustrates the memory system of FIG. 11A, under an example failure condition.[0028] FIG. 12 illustrates an example configuration for a portion of the memory system of FIG. 11A.[0029] FIG. 13 illustrates an example method flow diagram in accordance with some example embodiments.[0030] FIG. 14 illustrates an example method flow diagram in accordance with other example embodiments.[0031] FIG. 15 depicts an example embodiment of an alternative configuration and functionality of a memory system.[0032] FIG. 16 illustrates an example block diagram of an information handling system in accordance with some example embodiments.DETAILED DESCRIPTION[0033] The following description and the drawings sufficiently illustrate specific embodiments to enable those skilled in the art to practice them. 
Other embodiments may incorporate structural, logical, electrical, process, and other changes. Portions and features of some embodiments may be included in, or substituted for, those of other embodiments. Embodiments set forth in the claims encompass all available equivalents of those claims.[0034] Described below are various embodiments incorporating memory systems in which an external memory interface operates to transfer data at a first rate, but the memory operates internally at a second data rate slower than the first data rate. In examples described below, such operation can be achieved through use of a buffer interface that is in communication with the external memory interface (which may be, for example, a host interface) and that redistributes the data connections (DQs) of the external interface to a greater number of data connections in communication with one or more memory devices (and/or one or more memory banks), which operate at a slower clock rate than that of the external memory interface.[0035] In embodiments as described below, the buffer interface may be presented in a separate die sitting between a host (or other) interface and one or more memory die. In an example embodiment, a buffer die (or other form of buffer interface) may include a host physical interface including connections for at least one memory channel (or sub-channel), including command/address connections and data connections. Control logic in the buffer interface may be implemented to reallocate the connections for the memory channel to at least two (or more) memory sub-channels, which connections extend to DRAM physical interfaces for each sub-channel, each sub-channel physical interface including command/address connections and data connections. The DRAM physical interfaces for each sub-channel then connect with one or more memory die.[0036] Also described below are stacked memory structures as may be used in one of the described memory systems, in which multiple memory die may be laterally offset from one another and connected either with another memory die, a logic die, or another structure/device, through wire bond connections. As described below, in some examples, one or more of the memory dies may include redistribution layers (RDLs) to distribute contact pads proximate an edge of the die to facilitate the described wire bonding.[0037] In some embodiments, a buffer interface as described above may be used to reallocate a host (or other) interface including DQs, which include data connections, multiple ECC connections, and multiple parity connections.In some such embodiments, the buffer interface may be used in combination with one or more memory devices configured to allocate the data, ECC, and parity connections within the memory device(s) in a manner to protect against failure within the portion of the memory array or data path associated with a respective DRAM physical interface, as discussed in more detail below. This failure protection can be implemented to improve reliability of the memory system in a manner generally analogous to techniques known to the industry as Chipkill (trademark of IBM), or Single Device Data Correction (SDDC) (trademark of Intel). 
Such failure protection can be implemented to recover from multi-bit errors, for example, those affecting a region of memory, such as a sub-array, or the data and/or control paths to the sub-array region (i.e., a sub-channel of memory), as will be apparent to persons skilled in the art having the benefit of the present disclosure.[0038] Figure 1A shows an electronic system 100, having a processor 106 coupled to a substrate 102. In some examples, substrate 102 can be a system motherboard, or in other examples, substrate 102 may couple to another substrate, such as a motherboard. Electronic system 100 also includes first and second memory devices 120A, 120B. Memory devices 120A, 120B are also shown supported by substrate 102 adjacent to the processor 106 but are depicted, in an example configuration, coupled to a secondary substrate 124.In other examples, memory devices 120A, 120B can be coupled directly to the same substrate 102 as processor 106.[0039] The memory devices 120A, 120B each include a buffer assembly, here in the example form of a buffer die 128, coupled to a secondary substrate 124. Although the terminology "buffer die" is used herein for referencing the buffer assembly, any such "buffer die" described herein may be in the form of an assembly including, for example, one or more semiconductor die or other devices and/or other discrete components (whether or not packaged together) providing the described functionality. Thus, unless expressly indicated otherwise in a specific usage, the term "buffer die" as used herein refers equally to a "buffer assembly" and/or "buffer device." The memory devices 120A, 120B can be individual die, or in some cases may each include a respective stack of memory devices, in this example DRAM dies 122. For purposes of the present description, memory devices 120A, 120B will be described in an example configuration of stacked memory devices.Additionally, memory devices 120A, 120B will be described in one example configuration in which stacks of dynamic random access memory (DRAM) dies 122A, 122B are each coupled to the secondary substrate 124. Other types of memory devices may be used in place of DRAM, including, for example, FeRAM, phase change memory (PCM), 3D XPoint™ memory, NAND memory, or NOR memory, or a combination thereof. In some cases, a single memory device may include one or more memory die that uses a first memory technology (e.g., DRAM) and a second memory die that uses a second memory technology (e.g., SRAM, FeRAM, etc.) different from the first memory technology.[0040] The stack of DRAM dies 122 is shown in block diagram form in Figure 1. Other figures in the following description show greater detail of the stack of dies and various stacking configurations. In the example of Figure 1A, a number of wire bonds 126 are shown coupled to the stack of DRAM dies 122. Additional circuitry (not shown) is included on or within the secondary substrate 124. The additional circuitry completes the connection between the stack of DRAM dies 122, through the wire bonds 126, to the buffer die 128. Selected examples may include through silicon vias (TSVs) instead of wire bonds 126, as will be described in more detail in subsequent figures.[0041] Substrate wiring 104 is shown coupling the memory device 120A to the processor 106. In the example of Figure 1A, an additional memory device 120B is shown. 
Although two memory devices 120A, 120B are shown for the depicted example, a single memory structure may be used, or a number of memory devices greater than two may be used. Examples of memory devices as described in the present disclosure provide increased capacity near memory with increased speed and reduced manufacturing cost.[0042] Figure 1B shows an electronic system 150, having a processor 156 coupled to a substrate 152. The system 150 also includes first and second memory devices 160A, 160B. In contrast to Figure 1A, in Figure 1B, the first and second memory devices 160A, 160B are directly connected to the same substrate 152 as the processor 156, without any intermediary substrates or interposers. This configuration can provide additional speed and reduction in components over the example of Figure 1A. Similar to the example of Figure 1A, a buffer assembly or buffer die 168 is shown adjacent to a stack of DRAM dies 162. Wire bonds 166 are shown as an example interconnection structure; however, other interconnection structures such as TSVs may be used.[0043] Figure 2 shows an electronic system 200 similar to memory device 160A or 160B from Figure 1B. The electronic system 200 includes a buffer die 202 coupled to a substrate 204. The electronic system 200 also includes a stack of DRAM dies 210 coupled to the substrate 204. In the example of Figure 2, the individual dies in the stack of DRAM dies 210 are laterally offset from one or more vertically adjacent die; specifically, in the depicted example, each die is laterally offset from both vertically adjacent die. As an example, the die may be staggered in at least one stair step configuration. The example of Figure 2 shows two different stagger directions in the stair stepped stack of DRAM dies 210. In the illustrated dual stair step configuration, an exposed surface portion 212 of each die is used for a number of wire bond interconnections.[0044] Multiple wire bond interconnections 214, 216 are shown from the dies in the stack of DRAM dies 210 to the substrate 204. Additional conductors (not shown) on or within the substrate 204 further couple the wire bond interconnections 214, 216 to the buffer die 202. The buffer die 202 is shown coupled to the substrate 204 using one or more solder interconnections 203, such as a solder ball array. A number of substrate solder interconnections 206 are further shown on a bottom side of the substrate 204 to further transmit signals and data from the buffer die into a substrate 102 and eventually to a processor 106 as shown in Figure 1A.[0045] Figure 3 shows a block diagram of a buffer die 300 similar to buffer die 202 from Figure 2. A host device interface 312 and a DRAM interface 314 are shown. Additional circuitry components of the buffer die 300 may include a controller and switching logic 316; reliability, availability, and serviceability (RAS) logic 317; and built-in self-test (BIST) logic 318. Communication from the buffer die 300 to a stack of DRAM dies is indicated by arrows 320. Communication from the buffer die 300 to a host device is indicated by arrows 322 and 324. In Figure 3, arrows 324 denote communication from command/address (CA) pins, and arrows 322 denote communication from data (DQ) pins. The numbers of CA pins and DQ pins are provided only as examples, as the host device interface may have substantially greater or fewer of either or both CA and DQ pins.
The number of pins of either type required may vary depending upon the width of the channel of the interface, the provision for additional bits (for example, ECC bits), among many other variables. In many examples, the host device interface will be an industry standard memory interface (either expressly defined by a standard-setting organization, or a de facto standard adopted in the industry).[0046] In one example, all CA pins 324 act as a single channel, and all data pins 322 act as a single channel. In one example, all CA pins 324 service all data pins 322. In another example, the CA pins 324 are subdivided into multiple sub-channels. In another example, the data pins 322 are subdivided into multiple sub-channels. One configuration may include a portion of the CA pins 324 servicing a portion of the data pins 322. In one specific example, 8 CA pins service 9 data (DQ) pins as a sub-combination of CA pins and data (DQ) pins. Multiple sub-combinations, such as the 8 CA pin/9 data pin example, may be included in one memory device.[0047] It is common in computing devices to have DRAM memory coupled to a substrate, such as a motherboard, using a socket, such as a dual in-line memory module (DIMM) socket. However, for some applications a physical layout of DRAM chips and socket connections on a DIMM device can take up a large amount of space, and it is desirable to reduce the amount of space occupied by DRAM memory. Additionally, communication through a socket interface is slower and less reliable than direct connection to a motherboard using solder connections. The additional component of the socket interface adds cost to the computing device.[0048] Using example memory devices of the present disclosure, the physical size of a memory device can be reduced for a given DRAM memory capacity. Speed is improved due to the direct connection to the substrate, and cost is reduced by eliminating the socket component.[0049] In operation, the data speed available from a host device may be higher than interconnection components to DRAM dies (such as trace lines, TSVs, wire bonds, etc.) can handle. The addition of a buffer die 300 (or other form of buffer assembly) allows fast data interactions from a host device to be buffered. In the example of Figure 3, the host interface 312 is configured to operate at a first data speed. In one example, the first data speed may match the speed that the host device is capable of delivering.[0050] In one example, the DRAM interface 314 is configured to operate at a second data speed, slower than the first data speed. In one example, the DRAM interface 314 is configured to be both slower and wider than the host interface 312. In operation, the buffer die may translate high speed data interactions on the host interface 312 side into slower, wider data interactions on the DRAM interface 314 side. Additionally, as further described below, to maintain data throughput at least approximating that of the host interface, in some examples, the buffer assembly can reallocate the connections of the host interface to multiple memory sub-channels associated with respective DRAM interfaces. The slower and wider DRAM interface 314 may be configured to substantially match the capacity of the narrower, higher speed host interface 312. In this way, more limited interconnection components to DRAM dies, such as trace lines, TSVs, wire bonds, etc., are able to handle the capacity of interactions supplied from the faster host device. Though one example host interface (with both CA pins and DQ pins) to buffer die 300 is shown, buffer die 300 may include multiple host interfaces for separate data paths that are each mapped by buffer die 300 to multiple DRAM interfaces, in a similar manner.
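The "slower but wider" matching described in paragraph [0050] reduces to simple arithmetic: the aggregate bandwidth of an interface is its pin count times its per-pin rate. A minimal Python sketch, using purely hypothetical pin counts and rates:

```python
def aggregate_gbps(n_dq: int, per_pin_gbps: float) -> float:
    """Aggregate data bandwidth of an interface: pins times per-pin rate."""
    return n_dq * per_pin_gbps

# Hypothetical counts: a narrow, fast host interface and a DRAM interface
# four times wider, running at one-quarter of the per-pin rate.
host_bw = aggregate_gbps(n_dq=36, per_pin_gbps=6.4)
dram_bw = aggregate_gbps(n_dq=144, per_pin_gbps=1.6)
assert host_bw == dram_bw  # 230.4 Gb/s aggregate either way
```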
[0051] In one example, the host device interface 312 includes a first number of data paths, and the DRAM interface 314 includes a second number of data paths greater than the first number of data paths. In one example, circuitry in the buffer die 300 maps data and commands from the first number of data paths to the second number of data paths. In such a configuration, the second number of data paths provides a slower and wider interface, as described above.[0052] In one example, the command/address pins 324 of the host device interface 312 include a first number of command/address paths, and on a corresponding DRAM interface 314 side of the buffer die 300, the DRAM interface 314 includes a second number of command/address paths that is larger than the first number of command/address paths. In one example, the second number of command/address paths is twice the first number of command/address paths. In one example, the second number of command/address paths is more than twice the first number of command/address paths. In one example, the second number of command/address paths is four times the first number of command/address paths. In one example, the second number of command/address paths is eight times the first number of command/address paths.[0053] In one example, a given command/address path on the DRAM interface 314 side of the buffer die 300 is in communication with only a single DRAM die. In one example, a given command/address path on the DRAM interface 314 side of the buffer die 300 is in communication with multiple DRAM dies. In one example, a given command/address path on the DRAM interface 314 side of the buffer die 300 is in communication with 4 DRAM dies. In one example, a given command/address path on the DRAM interface 314 side of the buffer die 300 is in communication with 16 DRAM dies.[0054] In one example, the data pins 322 of the host device interface 312 include a first number of data paths, and on a corresponding DRAM interface 314 side of the buffer die 300, the DRAM interface 314 includes a second number of data paths that is larger than the first number of data paths. In one example, the second number of data paths is twice the first number of data paths. In one example, the second number of data paths is more than twice the first number of data paths. In one example, the second number of data paths is four times the first number of data paths. In one example, the second number of data paths is eight times the first number of data paths.[0055] In one example, a data path on the DRAM interface 314 side of the buffer die 300 is in communication with only a single DRAM die. In one example, a given data path on the DRAM interface 314 side of the buffer die 300 is in communication with multiple DRAM dies. In one example, a given data path on the DRAM interface 314 side of the buffer die 300 is in communication with 4 DRAM dies. In one example, a given data path on the DRAM interface 314 side of the buffer die 300 is in communication with 16 DRAM dies.
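The 2x/4x/8x widenings of paragraphs [0052]-[0055] can be captured in a few lines. The sketch below is illustrative only; the host-side path count is a hypothetical stand-in:

```python
def dram_side_paths(host_paths: int, multiplier: int) -> int:
    """Number of command/address or data paths on the DRAM interface 314
    side of the buffer die, for a given widening multiplier."""
    return host_paths * multiplier

# A hypothetical host interface with 36 data paths, widened 2x, 4x, and 8x.
for m in (2, 4, 8):
    print(f"{m}x widening -> {dram_side_paths(36, m)} DRAM-side paths")
```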
[0056] In one example, the host interface 312 includes different speeds for command/address pins 324 and for data pins 322. In one example, data pins 322 of the host interface are configured to operate at 6.4 Gb/s. In one example, command/address pins 324 of the host interface are configured to operate at 3.2 Gb/s.[0057] In one example, the DRAM interface 314 of the buffer die 300 slows down and widens the communications from the host interface 312 side of the buffer die 300. In one example, where a given command/address path from the host interface 312 is mapped to two command/address paths on the DRAM interface 314, a speed at the host interface is 3.2 Gb/s, and a speed at the DRAM interface 314 is 1.6 Gb/s.[0058] In one example, where a given data path from the host interface 312 is mapped to two data paths on the DRAM interface 314, a speed at the host interface is 6.4 Gb/s, and a speed at the DRAM interface 314 is 3.2 Gb/s, where each data path is in communication with a single DRAM die in a stack of DRAM dies. In one example, where a given data path from the host interface 312 is mapped to four data paths on the DRAM interface 314, a speed at the host interface is 6.4 Gb/s, and a speed at the DRAM interface 314 is 1.6 Gb/s, where each data path is in communication with four DRAM dies in a stack of DRAM dies. In one example, where a given data path from the host interface 312 is mapped to eight data paths on the DRAM interface 314, a speed at the host interface is 6.4 Gb/s, and a speed at the DRAM interface 314 is 0.8 Gb/s, where each data path is in communication with 16 DRAM dies in a stack of DRAM dies.[0059] In one example, a pulse amplitude modulation (PAM) protocol is used to communicate on the DRAM interface 314 side of the buffer die 300. In one example, the PAM protocol includes PAM-4, although other PAM protocols are within the scope of the invention. In one example, the PAM protocol increases the data bandwidth. In one example, where a given data path from the host interface 312 is mapped to four data paths on the DRAM interface 314, a speed at the host interface is 6.4 Gb/s, and a speed at the DRAM interface 314 is 0.8 Gb/s using a PAM protocol, where each data path is in communication with four DRAM dies in a stack of DRAM dies. In one example, where a given data path from the host interface 312 is mapped to eight data paths on the DRAM interface 314, a speed at the host interface is 6.4 Gb/s, and a speed at the DRAM interface 314 is 0.4 Gb/s using a PAM protocol, where each data path is in communication with 16 DRAM dies in a stack of DRAM dies.[0060] The number of pins needed to communicate between the buffer die 300 and an example 16 DRAM dies varies depending on the number of command/address paths on the DRAM interface 314 side of the buffer die 300, and on the number of DRAM dies coupled to each data path. The following table shows a number of non-limiting examples of pin counts and corresponding command/address path configurations.[0061] The number of pins needed to communicate between the buffer die 300 and an example 16 DRAM dies varies depending on the number of data paths on the DRAM interface 314 side of the buffer die 300, and on the number of DRAM dies coupled to each data path. The following table shows a number of non-limiting examples of pin counts and corresponding data path configurations.[0062] As illustrated in selected examples below, the number of pins in the above tables may be coupled to the DRAM dies in the stack of DRAM dies in a number of different ways. In one example, wire bonds are used to couple from the pins to the number of DRAM dies. In one example, TSVs are used to couple from the pins to the number of DRAM dies. Although wire bonds and TSVs are used as an example, other communication pathways apart from wire bonds and TSVs are also within the scope of the present disclosure.
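The per-pin rates quoted in paragraphs [0057]-[0059] follow from dividing the host rate by the fan-out, with PAM-4 further halving the required signalling rate because each symbol carries two bits. A sketch of that arithmetic (the function name and defaults are illustrative):

```python
import math

def dram_pin_rate_gbps(host_rate_gbps: float, fanout: int, pam_levels: int = 2) -> float:
    """Per-pin signalling rate on the DRAM interface when one host path is
    mapped to `fanout` DRAM paths. Each PAM symbol carries log2(pam_levels)
    bits, so PAM-4 (pam_levels=4) halves the rate needed versus NRZ."""
    bits_per_symbol = math.log2(pam_levels)
    return host_rate_gbps / (fanout * bits_per_symbol)

print(dram_pin_rate_gbps(6.4, fanout=2))                # 3.2  (paragraph [0058])
print(dram_pin_rate_gbps(6.4, fanout=4))                # 1.6
print(dram_pin_rate_gbps(6.4, fanout=4, pam_levels=4))  # 0.8  (paragraph [0059])
print(dram_pin_rate_gbps(6.4, fanout=8, pam_levels=4))  # 0.4
```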
[0063] Figure 4 shows another example of a memory device 400. The memory device 400 includes a buffer die 402 coupled to a substrate 404. The memory device 400 also includes a stack of DRAM dies 410 coupled to the substrate 404. In the example of Figure 4, the stack of DRAM dies 410 is staggered in at least one stair step configuration. The example of Figure 4 shows two different stagger directions in the stair stepped stack of DRAM dies 410. Similar to the configuration of Figure 2, in the illustrated stair step configuration, an exposed surface portion 412 is used for a number of wire bond interconnections.[0064] Multiple wire bond interconnections 414, 416 are shown from the dies in the stack of DRAM dies 410 to the substrate 404. Additional conductors (not shown) on or within the substrate 404 further couple the wire bond interconnections 414, 416 to the buffer die 402. The buffer die 402 is shown coupled to the substrate 404 using one or more solder interconnections, such as a solder ball array. A number of substrate solder interconnections 406 are further shown on a bottom side of the substrate 404 to further transmit signals and data from the buffer die into a motherboard and eventually to a host device.[0065] In the example of Figure 4, the multiple wire bond interconnections 414, 416 are serially connected up the multiple stacked DRAM dies. In selected examples, a single wire bond may drive a load in more than one DRAM die. In such an example, the wire bond interconnections may be serially connected as shown in Figure 4. In one example, a single wire bond may be serially connected to four DRAM dies. In one example, a single wire bond may be serially connected to eight DRAM dies. In one example, a single wire bond may be serially connected to sixteen DRAM dies. Other numbers of serially connected DRAM dies are also within the scope of the invention. Additionally, CA connections of the DRAM interface may be made to a first number of the DRAM dies, while the corresponding DQ connections of the DRAM interface may be made to a second number of the DRAM dies different from the first number.[0066] Figure 5A shows another example of a memory device 500. The memory device 500 includes a buffer die 502 coupled to a substrate 504. The memory device 500 also includes a stack of DRAM dies 510 coupled to the substrate 504. In the example of Figure 5A, the stack of DRAM dies 510 is staggered in at least one stair step configuration. The example of Figure 5A shows two different stagger directions in the stair stepped stack of DRAM dies 510. In the illustrated stair step configuration, an exposed surface portion 512 is used for a number of wire bond interconnections.[0067] Multiple wire bond interconnections 514, 516 are shown from the dies in the stack of DRAM dies 510 to the substrate 504. Additional conductors (not shown) on or within the substrate 504 further couple the wire bond interconnections 514, 516 to the buffer die 502. The buffer die 502 is shown coupled to the substrate 504 using one or more solder interconnections, such as a solder ball array. A number of substrate solder interconnections 506 are further shown on a bottom side of the substrate 504 to further transmit signals and data from the buffer die into a motherboard and eventually to a host device.
[0068] In the example of Figure 5A, the buffer die 502 is located at least partially underneath the stack of DRAM dies 510. In one example, an encapsulant 503 at least partially surrounds the buffer die 502. The example of Figure 5A further reduces an areal footprint of the memory device 500. Further, an interconnect distance between the stack of DRAM dies 510 and the buffer die 502 is reduced.[0069] Figure 5B shows another example of a memory device 520. The memory device 520 includes a buffer die 522 coupled to a substrate 524. The memory device 520 also includes a stack of DRAM dies 530 coupled to the substrate 524. Multiple wire bond interconnections 534, 536 are shown from the dies in the stack of DRAM dies 530 to the substrate 524. In the example of Figure 5B, the multiple wire bond interconnections 534, 536 are serially connected up the multiple stacked DRAM dies. In one example, a single wire bond may be serially connected to four DRAM dies. In one example, a single wire bond may be serially connected to eight DRAM dies. In one example, a single wire bond may be serially connected to sixteen DRAM dies. Other numbers of serially connected DRAM dies are also within the scope of the invention.[0070] Figure 5C shows a top view of a memory device 540 similar to memory devices 500 and 520. In the example of Figure 5C, a buffer die 542 is shown coupled to a substrate 544, and located completely beneath a stack of DRAM dies 550. Figure 5D shows a top view of a memory device 560 similar to memory devices 500 and 520. In Figure 5D, a buffer die 562 is coupled to a substrate 564, and located partially underneath a portion of a first stack of DRAM dies 570 and a second stack of DRAM dies 572. In one example, a shorter stack of DRAM dies provides a shorter interconnection path, and a higher manufacturing yield. In selected examples, it may be desirable to use multiple shorter stacks of DRAM dies for these reasons. One tradeoff of multiple shorter stacks of DRAM dies is a larger areal footprint of the memory device 560.[0071] Figure 6 shows another example of a memory device 600. The memory device 600 includes a buffer die 602 coupled to a substrate 604. The memory device 600 also includes a stack of DRAM dies 610 coupled to the substrate 604. In the example of Figure 6, the stack of DRAM dies 610 is staggered in at least one stair step configuration. The example of Figure 6 shows four staggers, in two different stagger directions, in the stair stepped stack of DRAM dies 610. The stack of DRAM dies 610 in Figure 6 includes 16 DRAM dies, although the invention is not so limited. Similar to other stair step configurations shown, in Figure 6, an exposed surface portion 612 is used for a number of wire bond interconnections.[0072] Multiple wire bond interconnections 614, 616 are shown from the dies in the stack of DRAM dies 610 to the substrate 604. Additional conductors (not shown) on or within the substrate 604 further couple the wire bond interconnections 614, 616 to the buffer die 602. The buffer die 602 is shown coupled to the substrate 604 using one or more solder interconnections, such as a solder ball array. A number of substrate solder interconnections 606 are further shown on a bottom side of the substrate 604 to further transmit signals and data from the buffer die into a motherboard and eventually to a host device.[0073] Figure 7 shows another example of a memory device 700. The memory device 700 includes a buffer die 702 coupled to a substrate 704.
The memory device 700 also includes a stack of DRAM dies 710 coupled to the substrate 704. In the example of Figure 7, the stack of DRAM dies 710 is staggered in at least one stair step configuration. The example of Figure 7 shows four staggers, in two different stagger directions, in the stair stepped stack of DRAM dies 710. The stack of DRAM dies 710 in Figure 7 includes 16 DRAM dies, although the invention is not so limited. Similar to other stair step configurations shown, in Figure 7, an exposed surface portion 712 is used for a number of wire bond interconnections.[0074] Multiple wire bond interconnections 714, 716 are shown from the dies in the stack of DRAM dies 710 to the substrate 704. Additional conductors (not shown) on or within the substrate 704 further couple the wire bond interconnections 714, 716 to the buffer die 702. The buffer die 702 is shown coupled to the substrate 704 using one or more solder interconnections, such as a solder ball array. A number of substrate solder interconnections 706 are further shown on a bottom side of the substrate 704 to further transmit signals and data from the buffer die into a motherboard and eventually to a host device.[0075] In the example of Figure 7, the buffer die 702 is located at least partially underneath the stack of DRAM dies 710. In one example, an encapsulant 703 at least partially surrounds the buffer die 702. The example of Figure 7 further reduces an areal footprint of the memory device 700. Additionally, an interconnect distance between the stack of DRAM dies 710 and the buffer die 702 is reduced.[0076] Figure 8A shows another example of a memory device 800. The memory device 800 includes a buffer die 802 coupled to a substrate 804. The memory device 800 also includes a stack of DRAM dies 810 coupled to the substrate 804. In the example of Figure 8A, the stack of DRAM dies 810 is vertically aligned. The stack of DRAM dies 810 in Figure 8A includes 8 DRAM dies, although the invention is not so limited.[0077] Multiple TSV interconnections 812 are shown passing through, and communicating with, one or more dies in the stack of DRAM dies 810 to the substrate 804. Additional conductors (not shown) on or within the substrate 804 further couple the TSVs 812 to the buffer die 802. The buffer die 802 is shown coupled to the substrate 804 using one or more solder interconnections, such as a solder ball array. A number of substrate solder interconnections 806 are further shown on a bottom side of the substrate 804 to further transmit signals and data from the buffer die into a motherboard and eventually to a host device.[0078] Figure 8B shows another example of a memory device 820. The memory device 820 includes a buffer die 822 coupled to a substrate 824. The memory device 820 also includes a stack of DRAM dies 830 coupled to the substrate 824. In the example of Figure 8B, the stack of DRAM dies 830 is vertically aligned. The stack of DRAM dies 830 in Figure 8B includes 16 DRAM dies, although the invention is not so limited.[0079] Multiple TSV interconnections 832 are shown passing through, and communicating with, one or more dies in the stack of DRAM dies 830 to the substrate 824. Additional conductors (not shown) on or within the substrate 824 further couple the TSVs 832 to the buffer die 822. The buffer die 822 is shown coupled to the substrate 824 using one or more solder interconnections, such as a solder ball array.
A number of substrate solder interconnections 826 are further shown on a bottom side of the substrate 824 to further transmit signals and data from the buffer die into a motherboard and eventually to a host device.[0080] Figure 9A shows a block diagram of a single DRAM die 900 that may be included in a stack of DRAM dies according to any of the examples in the present disclosure. In Figure 9A, the DRAM die 900 includes a storage region 902 that contains arrays of memory cells. A first data I/O stripe 904 is shown passing from a first side 901 to a second side 903 of the DRAM die 900. In one example, contacts may be formed on edges of the first data I/O stripe 904 on one or both sides 901, 903 of the first data I/O stripe 904. Contacts may be connected to wire bonds as described in examples above. In other examples, TSVs may be coupled to the first data I/O stripe 904, at sides 901, 903, or other locations along the first data I/O stripe 904.[0081] A second data I/O stripe 906 is further shown in Figure 9A. In one example, the second data I/O stripe 906 is substantially the same as the first data I/O stripe 904. In the example of Figure 9A, each data I/O stripe includes 36 contacts for connection to wire bonds on either side. With two data I/O stripes, and two sides each, the DRAM die 900 includes connections for 144 wire bonds or TSVs.[0082] A command/address stripe 910 is further shown in Figure 9A. In the example shown, the command/address stripe 910 includes 30 contacts for connection to wire bonds or TSVs. In one example, one or more of the DRAM dies may include a redistribution layer redistributing connections of one or more of the stripes 904, 906, 910 to a second location for wire bonding, such as to one or more rows of wire bond pads along an edge of the die (as depicted relative to the example wire bonded stack configurations discussed earlier herein). The example numbers of DQ contacts in the data stripes 904 and 906 of Figure 9A (and also in the data stripes of the examples of Figures 9B-9C), and the example numbers of command/address contacts in such examples, are examples only, and different numbers of contacts, for either or both signal types, may be used for any of the described examples.[0083] Figure 9B shows a block diagram of a stack of four DRAM dies 920 that may be included in a stack of DRAM dies according to any of the examples in the present disclosure. In Figure 9B, each die in the stack 920 includes a storage region 922 that contains arrays of memory cells. A first data I/O stripe 924 is shown passing from a first side 921 to a second side 923 of the stack 920. In one example, contacts may be formed on edges of the first data I/O stripe 924 on one or both sides 921, 923 of the first data I/O stripe 924. Contacts may be connected to wire bonds as described in examples above. In other examples, TSVs may be coupled to the first data I/O stripe 924, at sides 921, 923, or other locations along the first data I/O stripe 924.[0084] A second data I/O stripe 926 is further shown in Figure 9B. In one example, the second data I/O stripe 926 is substantially the same as the first data I/O stripe 924. In the example of Figure 9B, each data I/O stripe includes 9 contacts for connection to wire bonds on either side. With two data I/O stripes, and two sides, each DRAM die in the stack 920 includes connections for 36 wire bonds or TSVs. In one example, all four of the dies in the stack 920 are driven by a single data path as described in examples above.
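The contact counts in the Figure 9A-9C examples follow the same simple product of stripes, sides, and contacts per side. A brief sketch of that count (the function name is illustrative):

```python
def wire_bond_connections(stripes: int, sides: int, contacts_per_side: int) -> int:
    """Connections available per die, given I/O stripes exposing contacts
    on each side, as in the Figure 9A-9C examples."""
    return stripes * sides * contacts_per_side

print(wire_bond_connections(2, 2, 36))  # 144, as in the Figure 9A example
print(wire_bond_connections(2, 2, 9))   # 36, as in the Figure 9B example
print(wire_bond_connections(1, 2, 18))  # 36, as in the Figure 9C example
```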
[0085] A command/address stripe 930 is further shown in Figure 9B. In the example shown, the command/address stripe 930 includes 30 contacts for connection to wire bonds or TSVs.[0086] Figure 9C shows a block diagram of a stack of four DRAM dies 940 that may be included in a stack of DRAM dies according to any of the examples in the present disclosure. In Figure 9C, each die in the stack 940 includes a storage region 942 that contains arrays of memory cells. A single data I/O stripe 944 is shown passing from a first side 941 to a second side 943 of the stack 940. In one example, contacts may be formed on edges of the data I/O stripe 944 on one or both sides 941, 943 of the data I/O stripe 944. Contacts may be connected to wire bonds as described in examples above. In other examples, TSVs may be coupled to the data I/O stripe 944, at sides 941, 943, or other locations along the data I/O stripe 944.[0087] In the example of Figure 9C, the single data I/O stripe 944 includes 18 contacts for connection to wire bonds on either side. With two sides, each DRAM die in the stack 940 includes connections for 36 wire bonds or TSVs. In one example, all four of the dies in the stack 940 are driven by a single data path as described in examples above.[0088] A command/address stripe 950 is further shown in Figure 9C. In the example shown, the command/address stripe 950 includes 30 contacts for connection to wire bonds or TSVs.[0089] Figure 10A shows a block diagram of one method of operation according to an embodiment of the invention. In operation 1002, data is exchanged between a processor and a buffer die at a first data speed. In operation 1004, data is exchanged between the buffer die and a stack of DRAM dies at a second speed, slower than the first speed. Operation 1006 explains that exchanging data between the buffer die and a stack of DRAM dies includes exchanging data through multiple wirebonds. Figure 10B shows a block diagram of another method of operation according to an embodiment of the invention. In operation 1010, data is exchanged between a processor and a buffer die at a first data speed. In operation 1012, data is exchanged between the buffer die and a stack of vertically aligned DRAM dies at a second speed, slower than the first speed. Operation 1014 explains that exchanging data between the buffer die and the stack of vertically aligned DRAM dies includes exchanging data through multiple through silicon vias (TSVs) in the stack of vertically aligned DRAM dies.[0090] Figure 11A depicts an example memory system 1100 including a buffer, as may be implemented, for example, in a buffer die. For convenience in description of the example embodiments, the buffer will be described in the example configuration of the buffer die 1102. However, the described buffer functionality may be implemented in another structure, for example as a portion of another device, such as a memory device or an interposer.[0091] Memory system 1100 can include a range of configurations of a memory structure 1104. In some examples, memory structure 1104 may be a single memory device, but in many examples will include multiple memory devices.
Where multiple memory devices are used, the memory devices may be stacked with one another, and/or may each be placed directly on a supporting substrate, in some cases a printed circuit board (PCB) (such as, for example, a system motherboard, or a memory module, such as a dual in-line memory module (DIMM)). In some examples, buffer die 1102 and the individual memory devices of the memory structure 1104 can be configured such that one or more of the memory devices can be mounted directly to buffer die 1102 (or other buffer structure); and in some examples, a stack of memory die may be mounted on (or over) a buffer die. As one example configuration, a system in accordance with the present description can be implemented as a dual-rank DDR5 RDIMM, having, for example, 32-40 individual DRAM die, forming two channels.[0092] Buffer die 1102 is cooperatively configured with a memory structure 1104 to avoid loss of data in the event of failure of a portion of a memory array, or of the control and/or data paths associated with a respective portion of the array. To implement this functionality, a first interface 1106, termed a "host interface" for the descriptions herein, will include, as described above, additional DQ connections for ECC/parity bits. For example, the host interface for the buffer die of Figure 3 was depicted as having 36 DQs, 32 DQs carrying data and 4 DQs carrying ECC/parity bits. In the example of Figure 11A, this host interface is expanded, for example by 4 additional DQs carrying ECC/parity bits.[0093] Figure 11A also depicts an alternative embodiment of the CA interface (which, as discussed earlier herein, may include one or more chip select "CS" paths). In some example systems, the CA paths may be single clocked (in contrast to the DQs, which are typically double clocked), and the CA paths may therefore be increased in number, in comparison with the example of Figure 3. For example, in some systems, a system host may interface with an intermediate device for driving and buffering the interface connections (such as, for example, a Registered Clock Driver (RCD) used with some DDR5 devices). In selected embodiments, the buffer die 1102 as described herein may be placed between such an intermediate buffering device and the memory structure; but in other examples the buffer die 1102 may be implemented to stand in place of such intermediate driving and buffering devices (such as RCDs, for example).[0094] The number of CA paths required for the memory physical interfaces may vary depending upon the addressing methodology for the memory (and, for example, the use, if any, of chip select pins in operating the memory structure). Accordingly, the example numbers of CA paths (including any CS pins) are illustrative only, and the numbers of CA paths on either the host physical interface or the memory physical interfaces may be different than the example numbers identified herein. In some example configurations, just as DQ connections are mapped from four DQs to 16 DQs (in the depicted example), CA paths may be mapped to an increased number of CA paths. Due to the presence of paths for CS, clock, etc., not all control/address paths need to be multiplied. In one example, CA paths at the host interface may be mapped (by way of example only) from 30 CA paths to 120 CA paths, arranged in 4 DRAM CA PHY interfaces, each having 30 CA paths. Again in an example configuration, each DRAM CA PHY may be configured to drive four DRAM loads, such that the described configuration would be able to service 16 DRAM die.
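The CA-path arithmetic of paragraph [0094] can be checked directly; the sketch below simply encodes the 30-to-120-path example (all counts are taken from the example above, and the function name is illustrative):

```python
def ca_reallocation(host_ca: int, n_ca_phys: int, loads_per_phy: int):
    """CA-path arithmetic for the example above: 30 host CA paths mapped to
    n_ca_phys DRAM CA PHYs of the same width, each PHY driving several die."""
    total_ca = host_ca * n_ca_phys      # memory-side CA paths
    dies_served = n_ca_phys * loads_per_phy
    return total_ca, dies_served

assert ca_reallocation(host_ca=30, n_ca_phys=4, loads_per_phy=4) == (120, 16)
```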
[0095] In the example of Figure 11A, buffer die 1102 is configured to reallocate the example 40 DQ data paths of the host PHY 1106 to multiple DRAM DQ PHYs, each DRAM DQ PHY configured to communicate with at least one respective region of memory structure 1104, with each DRAM DQ PHY being wider, and operating at a slower data transfer speed, than the corresponding portion of the host interface, in a manner analogous to that described relative to Figure 3. The description of multiple DRAM PHYs is not intended to represent a specific physical structure, but rather a reallocation of a group of pins from the host interface to a greater number of pins at the memory interface. In order to maintain the ability to recover from errors impacting multiple pins at the host PHY 1106, in some examples it will be desirable to maintain generally separate sub-channels through the buffer die 1102 and to the memory interface, and then to a generally independently operable logical region (or "slice") of the memory structure, as discussed in more detail below relative to Figure 12. Accordingly, the pin connections for communicating data signals (DQs) with the memory structure for each such sub-channel are discussed herein as a DRAM PHY.[0096] In some cases, the host PHY 1106 can represent a channel or sub-channel of a memory bus. For example, in some embodiments, the host PHY 1106 can be configured in accordance with the DDR5 specification promulgated by JEDEC; and, by way of example only, host PHY 1106 can represent one independent memory channel in accordance with that standard.[0097] In the depicted example, buffer die 1102 will reallocate the 40 DQ pins of the host interface to multiple memory physical interfaces, as discussed relative to Figure 3. Figure 11A does not depict the various functionalities present in the buffer die. However, buffer die 1102 should be understood to include a controller and switching logic structure; reliability, availability, and serviceability (RAS) structures; and built-in self-test (BIST) structures; all as discussed relative to buffer die 300 of Figure 3 (though, as will be apparent to persons skilled in the art having the benefit of this disclosure, the structures may be adapted to accomplish the different data path reallocation of the present depicted embodiment). In the depicted example, the host PHY 1106 may operate, for example, at a data transfer rate of approximately 6.4 Gb/s at the DQs; while the DRAM DQ PHYs may include (collectively) 160 pins operating at a data transfer rate of approximately 1.6 Gb/s. Analogous transfer rates will apply to the CA pins.[0098] Additionally, the pins of the memory physical interfaces will be allocated to at least 10, or a multiple of 10, sub-arrays of memory structure 1104. In one example, every 4 sequential DQs of host physical interface 1106 will be mapped to a respective DRAM DQ PHY, indicated functionally by 16 DQs, identified by example at 1108 (as indicated by arrows 1122A-J), which extend to multiple sub-arrays. In other examples, DQs may be re-mapped other than in groups of four sequential DQs. In some examples, alternate DQs may be re-mapped in place of four sequential DQs (i.e., for example, a selected number of "even" DQs may be re-mapped separately from a selected number of "odd" DQs). In other examples, as discussed later herein, pins of the memory physical interfaces may be allocated to a different number of sub-arrays (and/or a different number of slices of the memory structure). For example, as will be discussed relative to Figure 15, the DQs of the memory physical interfaces may be allocated to nine sub-arrays (slices) of the memory structure.
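A small sketch of the two grouping schemes mentioned in paragraph [0098], sequential groups of four versus an even/odd-style interleave, using the 40-DQ host interface of the depicted example (the function and scheme names are illustrative; each resulting group then fans out to the 16 DQs of its DRAM DQ PHY):

```python
def group_host_dqs(n_host_dqs: int = 40, group: int = 4, scheme: str = "sequential"):
    """Assign host-interface DQ indices to sub-channels.

    'sequential' maps every `group` consecutive DQs to one sub-channel
    (40 DQs -> 10 groups of 4); 'interleaved' is the even/odd style
    alternative, assigning DQ i to sub-channel i % n_sub.
    """
    n_sub = n_host_dqs // group
    if scheme == "sequential":
        return [list(range(i * group, (i + 1) * group)) for i in range(n_sub)]
    subs = [[] for _ in range(n_sub)]
    for dq in range(n_host_dqs):
        subs[dq % n_sub].append(dq)
    return subs

print(group_host_dqs()[0])                      # [0, 1, 2, 3] -> first DRAM DQ PHY
print(group_host_dqs(scheme="interleaved")[0])  # [0, 10, 20, 30]
```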
[0099] As discussed previously, in many examples, each respective DRAM DQ PHY (and data path 1108) will operate at a data transfer rate slower than that of the host physical interface 1106. For example, each DQ data path 1108 of a respective memory physical interface may operate at one-quarter of the data transfer rate of host physical interface 1106 (in the example discussed above, 1.6 Gb/s). In other examples, every 4 sequential DQ data paths of host physical interface 1106 may be mapped to DRAM DQ PHYs having 8 DQs (rather than 16, as depicted), and operating (for example) at one-half the data transfer rate of the host PHY rather than one-quarter, as in the depicted example of Figure 11A. Examples of various potential implementations of host interface pin reallocation include the following (the example pin rates are illustrative only, and are provided for purposes of illustration, as actual pin rates in various implementations may be substantially slower or faster than the provided examples):[0100] Each DRAM DQ PHY will be coupled through the respective data path 1108 to multiple sub-arrays (as indicated by example at 1110A-D, 1112A-D). In general, a DRAM bank may include multiple thousands of rows of memory cells, and will include multiple sub-arrays. Each sub-array will include some subset of the rows of the bank; and will include row buffers for the subset of rows, and sense amplifiers. Allocation of the 40 DQ pins of the host physical interface across a group of at least 10 sub-channels, and allocation of each consecutive group of 4 host physical interface DQs to separate sub-channels (as indicated by arrows 1122A-J), allows recovery of data even in the event of a failed sub-channel or sub-array (as depicted in Figure 11B, at sub-array 1110A-1) (or a failed "slice" of the memory device, as discussed below), through use of the eight ECC/parity bits at the host interface, in a manner apparent to persons skilled in the art having the benefit of this disclosure. These data recovery mechanisms are commonly found in systems that utilize Chipkill or SDDC, as discussed earlier herein. Such recovery from a failed slice (or other region of memory) can be performed under control of the host. In some examples, individual memory die, and/or buffer 1102, may also include internal ECC functionality to recover from local single- or dual-bit errors, as known to persons skilled in the art.[0101] Each DRAM DQ PHY 1108 may be coupled to sub-arrays in multiple ranks of the memory devices and/or multiple banks of the memory devices (and/or spanning the multiple memory devices and/or banks of the memory devices). As a result, in reference to Figure 11A, each of the depicted overlapping tiers of identified sub-bank arrays (i.e., sub-arrays), as indicated generally at 1114, 1116, 1118, 1120, may be located in different memory devices, or in different ranks and/or banks of memory devices, within a memory structure 1104.
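Paragraph [0100] relies on Chipkill/SDDC-class codes, which use symbol-based ECC considerably more sophisticated than can be shown here. As a drastically simplified stand-in for the erasure-recovery idea only, the following sketch reconstructs one known-failed slice from the survivors using plain XOR parity; the slice count, sizes, and contents are hypothetical:

```python
def xor_parity(slices: list[bytes]) -> bytes:
    """Byte-wise XOR across same-offset bytes of each slice."""
    out = bytearray(len(slices[0]))
    for s in slices:
        for i, b in enumerate(s):
            out[i] ^= b
    return bytes(out)

# Nine hypothetical 8-byte data slices plus one XOR-parity slice
# (a simplification; real Chipkill/SDDC schemes use stronger codes).
data = [bytes([i] * 8) for i in range(9)]
parity = xor_parity(data)

failed = 3  # suppose slice 3 (its sub-array, or the paths to it) fails
survivors = [s for i, s in enumerate(data) if i != failed] + [parity]
assert xor_parity(survivors) == data[failed]  # the lost slice is reconstructed
```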
[0102] Figure 12 depicts an example memory system 1200, in a block diagram representation, showing an example structure for a group of sub-arrays analogous to those depicted in each of tiers 1114 ("A" tier), 1116 ("B" tier), 1118 ("C" tier), and 1120 ("D" tier) in Figures 11A-11B. The depicted example structure of memory system 1200 is a logical representation rather than a physical representation.[0103] For purposes of the present description, the term "slice" will be used to refer to the portions of the ten (or other number of) logical regions of the memory array (in the present example, sub-arrays) coupled to a respective DRAM DQ PHY having a respective data path 1108, in combination with the CA paths and the read/write data paths (DQs) to the I/O circuits for that portion of the array. Figure 12 depicts 10 memory slices 1202A-J, each formed of a respective group of memory cells, in this example a sub-array, as indicated at 1204A-J, with an associated data path 1206A-J. As discussed relative to the embodiment of Figure 3, though not specifically depicted in Figure 12, at least a pertinent portion of the DRAM CA PHY interface will be distributed to each sub-array to provide addressing for each of the sub-arrays. In various embodiments, the depicted memory slices 1202A-J may be formed on one memory die, or may be distributed across multiple memory die. In selected examples, an individual host PHY DQ will be distributed (through a respective sub-array data path 1108) to each memory unit (die, rank, or bank) within a respective slice.[0104] Each sub-array includes multiple array mats, as indicated by example in slice 1202A at 1202A-1 to 1202A-4. The number of memory cells in each array mat, and the related number of word lines and data lines, is a matter of design choice. Thus, the configuration of each array mat can be based on design considerations, such as, for example, the process node (as to feature size), the related size and configuration of each memory cell, as well as desired dimensions for local word lines within the array mat, etc. In many examples, an array mat will extend between rows of sense amps on opposing sides of the array mat (in the column direction) and will have sub-word line decoders on at least one, if not both, remaining sides of the array mat (in the row direction). In some examples, physically adjacent array mats (in some cases separated by sub-word line decoders) may form a bank.[0105] One example configuration includes the depicted 10 memory slices 1202A-J formed on a single memory die; and in accordance with that configuration, a representative global word line 1208 (of multiple global word lines) is depicted extending across the depicted slices and to multiple array mats within each slice. As will be apparent to persons skilled in the art having the benefit of this disclosure, the global word lines 1208 will carry a higher order term across the associated sub-arrays and mats. In many examples, sub-word line decoders will use additional terms to drive local word lines within each array mat, or pair of array mats (or other grouping of memory cells), along the global word lines.[0106] In the depicted example, each sub-array includes a large number of separately addressable mats. As depicted, each sub-array includes a matrix of array mats, including four mats along the row direction (i.e., in the direction of global word line 1208), and 16 array mats along the column direction (perpendicular to the direction of global word line 1208).
The memory structure can be configured such that each memory page includes a multiple of 10 array mats. The array mats of a page are not necessarily physically aligned with one another, as in the logical representation of Figure 12. The example of memory system 1200 includes 40 array mats, which can be configured to provide a 4 kB page. In this example configuration, each of the 10 depicted sub-arrays 1204A-J includes 4 array mats, configured such that each sub-array can provide a respective 8 Bytes of an 80 Byte prefetch (64 Bytes of data, 8 Bytes of parity bits, and 8 Bytes of ECC bits). In an example such as that described relative to Figure 11B, in which each slice is distributed across four tiers, each tier will provide one-quarter of the prefetch. For example, where each tier in a slice comprises a portion of a respective memory device, each memory device will provide 2B of the 8B per-slice prefetch. Other examples may have a different configuration, and be adapted to prefetch a different number of Bytes (for example, 60 Bytes or 100 Bytes). In many desirable examples, the Bytes of the prefetch will be a multiple of 10.[0107] The ability to configure memory system 1200 to provide a page size of approximately 4 kB, for example in a DIMM, is believed to facilitate having a significantly smaller power envelope than that required, for example, with a conventional DRAM DIMM configuration. In accordance with the present description, the 4 kB page may be allocated across one, two, or four (or more) memory die. In a conventional DRAM DIMM configuration, a page size of 20-40 kB would be common, requiring substantially more power than required for accessing a smaller page in accordance with the described configuration. The current belief is that a 40 DRAM die DIMM, configured with approximately 4 kB pages, may require 40 to 60% less power than conventional configurations using a 20-40 kB page.[0108] One consideration in implementing the techniques described and implemented herein is to configure the slices (or other groupings of memory cells and the associated circuitry) to minimize shared points of failure. For example, in conventional DRAM memory devices, sub-word line drivers may extend between a pair of array mats and drive local word lines within each adjacent mat. However, example system 1200 as described above, including 4 parity bits and 4 ECC/parity bits associated with 32 data bits (at the host physical interface), can only recover from a failure associated with 4 of such data bits. In an example memory system, if physically adjacent mats were driven with a shared sub-word line driver, then a failure of the shared sub-word line driver could affect multiple array mats, and therefore result in an irrecoverable error. As a result, in such a structure, it would be advantageous to have separate sub-word line drivers for each array mat, so as to minimize shared points of failure. Similarly, common control circuits within a sub-array (or an analogous grouping of memory cells), such as those controlling addressing and timing, may be provided independently for each array mat. By way of example, sub-word line drivers that cross the sub-array boundaries do not present the same issue, since the column decoder circuits for the separate sub-arrays (i.e., in respective slices) will prevent selection of data from adjacent array mats across those boundaries.
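Tying together the page and prefetch arithmetic of paragraph [0106], a brief sketch (all counts are taken from the example above; the function name is illustrative):

```python
def prefetch_allocation(n_slices: int = 10, bytes_per_slice: int = 8, tiers: int = 4):
    """Prefetch arithmetic for the example above: 10 slices x 8 B = an 80 B
    prefetch (64 B data + 8 B parity + 8 B ECC), and with each slice spread
    over 4 tiers, each tier (e.g., memory device) supplies 2 B."""
    total = n_slices * bytes_per_slice
    per_tier = bytes_per_slice // tiers
    return total, per_tier

assert prefetch_allocation() == (80, 2)
```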
[0109] In other examples, slices of a page may be allocated across different devices or banks within a device. However, requiring more devices/banks to be activated to read or write a page of data will typically require activation of multiple global word lines, and therefore may require power beyond desired levels.[0110] FIG. 13 illustrates a flowchart of an example method of operation of the memory system. In method 1300, data and control/address signals are received at a first interface of a buffer structure at a first data rate, as indicated at 1302. In some examples, the data pins will include multiple data pins coupled to carry data bits, multiple data pins coupled to carry ECC bits, and multiple data pins coupled to carry parity bits for use in ECC operations. The data pins of the first interface are mapped to multiple memory sub-channel interfaces, which are operable at a second data rate, which is slower than the first data rate of the first interface, as indicated at 1304. Examples of such reallocation of data pins in the buffer structure are discussed in reference to Figures 11A-11B. In some desirable examples, the first interface will be mapped to at least 10 sub-channel interfaces. Additionally, in some examples, each data pin of the first interface will be mapped to at least two data pins of a memory sub-channel interface. In various examples, the reallocation may be performed at least in part by firmware including instructions stored in a machine-readable storage device, such as one or more non-volatile memory devices.[0111] As a result of the reallocation, as indicated at 1306, signals may then be communicated from each sub-channel interface to a respective slice of a memory device (also as discussed in reference to Figures 11A-11B). In some examples, each slice of the memory device may include multiple array mats.[0112] FIG. 14 illustrates a flowchart of an alternative example method of operation 1400 of a memory system. In method 1400, as indicated at 1402, signals are received at a host physical interface including command/address (CA) pins and data pins (DQs), in which the DQs include multiple ECC/parity pins.[0113] As indicated at 1404, at least the DQs of the host physical interface are mapped to at least two sub-channel memory interfaces, with each sub-channel memory interface including CA pins and DQs, the DQs including multiple ECC pins and multiple parity pins. As discussed relative to method 1300, at least some portion of the reallocation may be performed by firmware including instructions stored in a machine-readable storage device forming a portion of the controller and switching logic 316, as discussed in reference to Figure 3.[0114] Subsequently, as indicated at 1406, signals may be communicated from the sub-channel memory interfaces to respective regions located in one or more memory die, in which the number of regions receiving the signals is divisible by 10. In some examples, each region may be a sub-array of a memory die (though the regions may be distributed across multiple memory die), and in some examples each region will include multiple array mats.
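As an illustration only of the mapping step of method 1300 (operations 1302-1306), the sketch below assigns each of 40 hypothetical host DQs to a sub-channel and expands it to several memory-side pins; all names and counts are assumptions for the sketch, not the method itself:

```python
def method_1300(host_dqs, n_sub=10, fanout=4):
    """Sketch of operations 1302-1306: receive host DQs, map each group of
    host DQs to one of n_sub sub-channel interfaces, and expand each host DQ
    to `fanout` slower memory-side pins serving that sub-channel's slice."""
    group = len(host_dqs) // n_sub          # 4 host DQs per sub-channel here
    mapping = {}
    for sub in range(n_sub):                # operation 1304: map to sub-channels
        for j, dq in enumerate(host_dqs[sub * group:(sub + 1) * group]):
            mapping[dq] = [(sub, j * fanout + k) for k in range(fanout)]
    return mapping                          # operation 1306: one slice per sub-channel

m = method_1300(list(range(40)))
assert len(m[0]) == 4 and m[39] == [(9, 12), (9, 13), (9, 14), (9, 15)]
```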
[0115] FIG. 15 discloses an alternative configuration and functionality for a memory system 1500. Memory system 1500 includes, as an example, an implementation of a host interface PHY 1502 and buffer device 1504 analogous to those described relative to interface 322 and buffer die 300 of Figure 3, but differing, for example, in the memory interface configuration, as described herein. Memory system 1500 further incorporates a memory structure 1506 having some features generally discussed in reference to memory structure 1104 of Figure 11A. As a result, except for differences discussed here in reference to Figure 15, the discussion of the structure and operation of memory system 1100, in reference to Figures 11A-13, is applicable to memory system 1500, and will not be repeated here.[0116] As can be seen in Figure 15, host PHY 1502 includes (as an example) 36 DQ pins, providing a data interface of approximately 6.4 Gb/s. The 36 DQ pins may include, for example, 32 DQ pins allocated to data, and four DQ pins allocated to ECC/parity bits for the associated data. Thus, example host PHY 1502 differs from host PHY 1106 of Figure 11A in including only four DQs for ECC/parity bits (for which eight DQs were allocated in host PHY 1106). As described below, these 36 DQs will be remapped to nine slices (or more, such as a multiple of nine) within the memory structure.[0117] In other examples, the host PHY may include only CA pins and data DQ pins (i.e., no ECC/parity DQs), for example, 32 data DQs, as may be used in systems consistent with the previously identified DDR5 standard of JEDEC. In some cases, those 32 host PHY DQs may then each be remapped to multiple DQs at the DRAM DQ PHYs (as discussed in detail above), and applied to eight slices (or more, such as a multiple of eight) within the memory structure.[0118] In the depicted example, each group of four DQ bits is reallocated to 16 DQs at a DRAM DQ PHY interface, indicated generally at 1508, and by arrows 1510A-I, which in turn connect to a respective slice 1512A-I of memory structure 1506. In some examples, each consecutive four DQ bits will be reallocated to 16 DQs communicated to a respective slice. Other allocations are possible in other examples; for example, some number of even-numbered bits/pins at the host PHY 1502 may be reallocated to a first slice of the memory structure, while adjacent odd-numbered bits/pins may be reallocated to a second slice. In other examples, either a greater or lesser number of bits of the host PHY 1502 may be reallocated to the respective slices 1512A-I. In the depicted example, data DQs will be re-mapped to multiple pins in eight slices, while the ECC/parity bit DQs will be re-mapped to a ninth slice.[0119] As with memory system 1100, memory system 1500 can be implemented with a smaller page size, for example a 4K page size, allocated across one, or multiple, memory die. For example, in the depicted embodiment, the 4K page size may be allocated across 36 array mats. Additionally, as discussed relative to memory system 1100, in some examples each sub-array may include four array mats of the page, such that each sub-array provides 8 Bytes of data of a 72 Byte prefetch (64 Bytes of data and 8 Bytes of ECC/parity bits). However, as described relative to memory system 1100, a memory system analogous to memory system 1500 may be configured to implement other page sizes and/or prefetch sizes.
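A short sketch of the Figure 15 style split described in paragraphs [0116] and [0118], with 32 data DQs mapped in consecutive groups of four to eight slices and the four ECC/parity DQs mapped to a ninth slice (the function name and printout are illustrative):

```python
def map_36dq_to_nine_slices():
    """Figure 15 style split: 32 data DQs, in consecutive groups of four, map
    to slices 0-7; the four ECC/parity DQs (32-35) map to the ninth slice.
    Each four-DQ group then fans out to 16 DQs at its DRAM DQ PHY."""
    slices = {s: list(range(s * 4, s * 4 + 4)) for s in range(8)}  # data slices
    slices[8] = list(range(32, 36))                                # ECC/parity slice
    return slices

for s, dqs in map_36dq_to_nine_slices().items():
    print(f"slice {s}: host DQs {dqs} -> 16 DRAM-side DQs")
```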
[0120] In example systems, memory slices (whether containing one or more memory devices) can be of selected granularity. For example, the example configuration of memory system 1500 includes nine slices 1512A-I, with each slice including four array mats 1514 in the row direction (the direction of global word line 1516), and with 4 DQs of the host interface mapped (1 to 4) to each slice. As an alternative, the memory system may be configured to allocate the 4 kB page size across, for example, 18 slices, with each slice having a row direction dimension of two array mats. In such a configuration, each slice could be configured for a 4B prefetch which, in a four sub-bank data array as discussed relative to Figure 11B, would include a 1B prefetch per sub-array. In one example configuration, instead of 4 host interface DQs being mapped to each slice as in Figure 15, 2 host interface DQs may be mapped to each slice. Separate from the number of host DQs mapped to a respective slice, each host DQ may be mapped to a selected number of memory interface DQs, as discussed relative to the table of paragraph [0098], depending on the array configurations and desired loading.[0121] As with memory system 1100, memory system 1500 may be implemented in various configurations, including with one or more memory devices in various contexts. In some examples, memory system 1500 may be implemented with multiple memory devices supported directly or indirectly on a substrate, or on buffer 1504; while in other examples memory system 1500 may be implemented in an assembly with multiple memory devices arranged (individually, or in stacks) on a memory module, for example a DIMM module.[0122] In some examples, any of the embodiments described herein may be implemented in accordance with a selected standard. For example, as noted previously, the host interface for the buffers may be configured in accordance with the DDR5 standard under development by the JC-42 Committee for Solid State Memories, or future iterations thereof. In other examples, the interfaces and memory system functionality may be configured for interoperability in accordance with other industry standards.[0123] Memory system 1500 may be operated in accordance with the example method 1300 discussed in reference to the flowchart of Figure 13. Accordingly, the description of that method is not repeated here.[0124] FIG. 16 illustrates a block diagram of an example machine (e.g., a host system) 1600 which may include one or more memory devices and/or memory systems as described above. As discussed above, machine 1600 may benefit from enhanced memory performance through use of one or more of the described memory devices and/or memory systems, facilitating improved performance of machine 1600 (as for many such machines or systems, efficient reading and writing of memory can facilitate improved performance of a processor or other components of that machine), as described further below.[0125] In alternative embodiments, the machine 1600 may operate as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine 1600 may operate in the capacity of a server machine, a client machine, or both in server-client network environments. In an example, the machine 1600 may act as a peer machine in a peer-to-peer (P2P) (or other distributed) network environment. The machine 1600 may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a mobile telephone, a web appliance, an IoT device, an automotive system, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
Further, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein, such as cloud computing, software as a service (SaaS), or other computer cluster configurations.

[0126] Examples, as described herein, may include, or may operate by, logic, components, devices, packages, or mechanisms. Circuitry is a collection (e.g., set) of circuits implemented in tangible entities that include hardware (e.g., simple circuits, gates, logic, etc.). Circuitry membership may be flexible over time and underlying hardware variability. Circuitries include members that may, alone or in combination, perform specific tasks when operating. In an example, hardware of the circuitry may be immutably designed to carry out a specific operation (e.g., hardwired). In an example, the hardware of the circuitry may include variably connected physical components (e.g., execution units, transistors, simple circuits, etc.) including a computer-readable medium physically modified (e.g., magnetically, electrically, moveable placement of invariant massed particles, etc.) to encode instructions of the specific operation. In connecting the physical components, the underlying electrical properties of a hardware constituent are changed, for example, from an insulator to a conductor or vice versa. The instructions enable participating hardware (e.g., the execution units or a loading mechanism) to create members of the circuitry in hardware via the variable connections to carry out portions of the specific tasks when in operation. Accordingly, the computer-readable medium is communicatively coupled to the other components of the circuitry when the device is operating. In an example, any of the physical components may be used in more than one member of more than one circuitry. For example, under operation, execution units may be used in a first circuit of a first circuitry at one point in time and reused by a second circuit in the first circuitry, or by a third circuit in a second circuitry at a different time.

[0127] The machine (e.g., a computer system, a host system, etc.) 1600 may include a processing device 1602 (e.g., a hardware processor, a central processing unit (CPU), a graphics processing unit (GPU), a hardware processor core, or any combination thereof, etc.), a main memory 1604 (e.g., read-only memory (ROM), dynamic random-access memory (DRAM) such as synchronous DRAM (SDRAM) or Rambus DRAM (RDRAM), etc.), a static memory 1606 (e.g., static random-access memory (SRAM), etc.), and a storage system 1618, some or all of which may communicate with each other via a communication interface (e.g., a bus) 1630. In one example, the main memory 1604 includes one or more memory devices as described in examples above.

[0128] The processing device 1602 can represent one or more general-purpose processing devices such as a microprocessor, a central processing unit, or the like. More particularly, the processing device can be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets.
The processing device 1602 can also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 1602 can be configured to execute instructions 1626 for performing the operations and steps discussed herein. The computer system 1600 can further include a network interface device 1608 to communicate over a network 1620.

[0129] The storage system 1618 can include a machine-readable storage medium (also known as a computer-readable medium) on which is stored one or more sets of instructions 1626 or software embodying any one or more of the methodologies or functions described herein. The instructions 1626 can also reside, completely or at least partially, within the main memory 1604 or within the processing device 1602 during execution thereof by the computer system 1600, the main memory 1604 and the processing device 1602 also constituting machine-readable storage media.

[0130] The term "machine-readable storage medium" should be taken to include a single medium or multiple media that store the one or more sets of instructions, or any medium that is capable of storing or encoding a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The term "machine-readable storage medium" shall accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. In an example, a massed machine-readable medium comprises a machine-readable medium with multiple particles having invariant (e.g., rest) mass. Accordingly, massed machine-readable media are not transitory propagating signals. Specific examples of massed machine-readable media may include: non-volatile memory, such as semiconductor memory devices (e.g., Electrically Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM)) and flash memory devices; magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

[0131] The machine 1600 may further include a display unit, an alphanumeric input device (e.g., a keyboard), and a user interface (UI) navigation device (e.g., a mouse). In an example, one or more of the display unit, the input device, or the UI navigation device may be a touch screen display. The machine 1600 may also include a signal generation device (e.g., a speaker) and one or more sensors, such as a global positioning system (GPS) sensor, compass, accelerometer, or one or more other sensors. The machine 1600 may include an output controller, such as a serial (e.g., universal serial bus (USB)), parallel, or other wired or wireless (e.g., infrared (IR), near field communication (NFC), etc.) connection to communicate with or control one or more peripheral devices (e.g., a printer, card reader, etc.).

[0132] The instructions 1626 (e.g., software, programs, an operating system (OS), etc.) or other data stored on the storage system 1618 can be accessed by the main memory 1604 for use by the processing device 1602. The main memory 1604 (e.g., DRAM) is typically fast, but volatile, and thus a different type of storage than the storage system 1618 (e.g., an SSD), which is suitable for long-term storage, including while in an "off" condition.
The instructions 1626 or data in use by a user or the machine 1600 are typically loaded in the main memory 1604 for use by the processing device 1602. When the main memory 1604 is full, virtual space from the storage system 1618 can be allocated to supplement the main memory 1604; however, because the storage system 1618 is typically slower than the main memory 1604, and write speeds are typically at least twice as slow as read speeds, use of virtual memory can greatly degrade the user experience due to storage system latency (in contrast to the main memory 1604, e.g., DRAM). Further, use of the storage system 1618 for virtual memory can greatly reduce the usable lifespan of the storage system 1618.

[0133] The instructions 1626 may further be transmitted or received over a network 1620 using a transmission medium via the network interface device 1608 utilizing any one of a number of transfer protocols (e.g., frame relay, internet protocol (IP), transmission control protocol (TCP), user datagram protocol (UDP), hypertext transfer protocol (HTTP), etc.). Example communication networks may include a local area network (LAN), a wide area network (WAN), a packet data network (e.g., the Internet), mobile telephone networks (e.g., cellular networks), Plain Old Telephone (POTS) networks, and wireless data networks (e.g., the Institute of Electrical and Electronics Engineers (IEEE) 802.11 family of standards known as Wi-Fi®, the IEEE 802.16 family of standards known as WiMax®), the IEEE 802.15.4 family of standards, peer-to-peer (P2P) networks, among others. In an example, the network interface device 1608 may include one or more physical jacks (e.g., Ethernet, coaxial, or phone jacks) or one or more antennas to connect to the network 1620. In an example, the network interface device 1608 may include multiple antennas to wirelessly communicate using at least one of single-input multiple-output (SIMO), multiple-input multiple-output (MIMO), or multiple-input single-output (MISO) techniques. The term "transmission medium" shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions for execution by the machine 1600, and includes digital or analog communications signals or other intangible medium to facilitate communication of such software.

[0134] The above detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show, by way of illustration, specific embodiments in which the invention can be practiced. These embodiments are also referred to herein as "examples". Such examples can include elements in addition to those shown or described. However, the present inventor also contemplates examples in which only those elements shown or described are provided. Moreover, the present inventor also contemplates examples using any combination or permutation of those elements shown or described (or one or more aspects thereof), either with respect to a particular example (or one or more aspects thereof), or with respect to other examples (or one or more aspects thereof) shown or described herein.

[0135] All publications, patents, and patent documents referred to in this document are incorporated by reference herein in their entirety, as though individually incorporated by reference.
In the event of inconsistent usages between this document and those documents so incorporated by reference, the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.

[0136] In this document, the terms "a" or "an" are used, as is common in patent documents, to include one or more than one, independent of any other instances or usages of "at least one" or "one or more." In this document, the term "or" is used to refer to a nonexclusive or, such that "A or B" includes "A but not B," "B but not A," and "A and B," unless otherwise indicated. In the appended claims, the terms "including" and "in which" are used as the plain-English equivalents of the respective terms "comprising" and "wherein". Also, in the following claims, the terms "including" and "comprising" are open-ended; that is, a system, device, article, or process that includes elements in addition to those listed after such a term in a claim is still deemed to fall within the scope of that claim. Moreover, in the following claims, the terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements on their objects.

[0137] In various examples, the components, controllers, processors, units, engines, or tables described herein can include, among other things, physical circuitry or firmware stored on a physical device. As used herein, "processor" means any type of computational circuit such as, but not limited to, a microprocessor, a microcontroller, a graphics processor, a digital signal processor (DSP), or any other type of processor or processing circuit, including a group of processors or multi-core devices.

[0138] The term "horizontal" as used in this document is defined as a plane parallel to the conventional plane or surface of a substrate, such as that underlying a wafer or die, regardless of the actual orientation of the substrate at any point in time. The term "vertical" refers to a direction perpendicular to the horizontal as defined above. Prepositions such as "on," "over," and "under" are defined with respect to the conventional plane or surface being on the top or exposed surface of the substrate, regardless of the orientation of the substrate; and while "on" is intended to suggest a direct contact of one structure relative to another structure which it lies "on" (in the absence of an express indication to the contrary), the terms "over" and "under" are expressly intended to identify a relative placement of structures (or layers, features, etc.), which expressly includes, but is not limited to, direct contact between the identified structures unless specifically identified as such. Similarly, the terms "over" and "under" are not limited to horizontal orientations, as a structure may be "over" a referenced structure if it is, at some point in time, an outermost portion of the construction under discussion, even if such structure extends vertically relative to the referenced structure, rather than in a horizontal orientation.

[0139] The term "wafer" is used herein to refer generally to any structure on which integrated circuits are formed, and also to such structures during various stages of integrated circuit fabrication. The term "substrate" is used to refer to either a wafer, or other structures which support or connect to other components, such as memory die or portions thereof.
Thus, the term "substrate" embraces, for example, circuit or "PC" boards, interposers, and other organic or non-organic supporting structures (which in some cases may also contain active or passive components). The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the various embodiments is defined only by the appended claims, along with the full scope of equivalents to which such claims are entitled.

[0140] It will be understood that when an element is referred to as being "on," "connected to" or "coupled with" another element, it can be directly on, connected, or coupled with the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly on," "directly connected to" or "directly coupled with" another element, there are no intervening elements or layers present. If two elements are shown in the drawings with a line connecting them, the two elements can either be coupled, or directly coupled, unless otherwise indicated.

[0141] Method examples described herein can be machine or computer-implemented at least in part. Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples. An implementation of such methods can include code, such as microcode, assembly language code, a higher-level language code, or the like. Such code can include computer-readable instructions for performing various methods. The code may form portions of computer program products. Further, the code can be tangibly stored on one or more volatile or non-volatile tangible computer-readable media, such as during execution or at other times. Examples of these tangible computer-readable media can include, but are not limited to, hard disks, removable magnetic disks, removable optical disks (e.g., compact disks and digital video disks), magnetic cassettes, memory cards or sticks, random access memories (RAMs), read only memories (ROMs), and the like.

[0142] To better illustrate the method and apparatuses disclosed herein, a non-limiting list of embodiments is provided here:

[0143] Example 1 is a memory device, comprising: a buffer device coupled to a substrate, the buffer device including a host device interface, and a DRAM interface; one or more DRAM dies supported by the substrate; multiple wire bond interconnections between the DRAM interface of the buffer device and the one or more DRAM dies; and circuitry in the buffer device, configured to operate the host device interface at a first data speed, and to operate the DRAM interface at a second data speed, slower than the first data speed.

[0144] In Example 2, the memory device of Example 1 optionally includes 8 dies.

[0145] In Example 3, the memory device of Example 2 optionally includes 16 dies.

[0146] In Example 4, the memory device of any one or more of Examples 2-3 wherein the stack of DRAM dies includes stair-stepped stacked DRAM dies.
[0147] In Example 5, the memory device of Example 4 wherein the stack of DRAM dies includes more than one step direction within a single stack.

[0148] In Example 6, the memory device of any one or more of Examples 1-5 wherein two stacks of DRAM dies are coupled to the substrate.

[0149] In Example 7, the memory device of any one or more of Examples 1-6 wherein the buffer device is located at least partially underneath the one or more DRAM dies.

[0150] In Example 8, the memory device of any one or more of Examples 6-7 wherein the buffer device is located at least partially underneath a portion of each of the two stacks of DRAM dies.

[0151] In Example 9, the memory device of any one or more of Examples 1-8 optionally includes solder balls on a backside of the substrate.

[0152] In Example 10, the memory device of any one or more of Examples 1-9 wherein the one or more DRAM dies comprises multiple DRAM dies in a stack of DRAM dies coupled to a single buffer device pin.

[0153] In Example 11, the memory device of any one or more of Examples 1-10 wherein the circuitry in the buffer device is configured to operate using a pulse amplitude modulation (PAM) protocol at the host device interface or the DRAM interface, or both.

[0154] Example 12 is a memory device, comprising: a buffer device coupled to a substrate, the buffer device including a host interface, and a DRAM interface; a stack of vertically aligned DRAM dies supported by the substrate; multiple through silicon via (TSV) interconnections coupling multiple die in the stack of vertically aligned DRAM dies with the buffer device; and circuitry in the buffer device, configured to operate the host interface at a first data speed, and to operate the DRAM interface at a second data speed, slower than the first data speed.

[0155] In Example 13, the memory device of Example 12 optionally includes 8 dies.

[0156] In Example 14, the memory device of Example 13 optionally includes 16 dies.
[0157] In Example 15, the memory device of any one or more of Examples 13-14 wherein the buffer device is located at least partially underneath the stack of vertically aligned DRAM dies.

[0158] In Example 16, the memory device of any one or more of Examples 13-15 wherein two stacks of vertically aligned DRAM dies are coupled to the substrate.

[0159] In Example 17, the memory device of Example 16 wherein the buffer die is located at least partially underneath a portion of each of the two stacks of vertically aligned DRAM dies.

[0160] Example 18 is a system, comprising: a processor coupled to a first substrate; a memory device coupled to the first substrate adjacent to the processor, the memory device including: a buffer device coupled to a second substrate, the buffer device including a host interface, and a DRAM interface; a stack of multiple DRAM dies coupled to the second substrate; multiple wire bond interconnections between the DRAM interface of the buffer device and the stack of DRAM dies; and circuitry in the buffer device, configured to operate the host interface at a first data speed, and to operate the DRAM interface at a second data speed, slower than the first data speed.

[0161] In Example 19, the system of Example 18 wherein the first substrate is a motherboard, and the memory device and the processor are both soldered to the motherboard with a ball grid array.

[0162] In Example 20, the system of Example 19 wherein the memory device is one of multiple memory devices soldered to the motherboard adjacent to the processor.

[0163] In Example 21, the system of any one or more of Examples 18-20 wherein the multiple wire bond interconnections include both command/address interconnections and data interconnections.

[0164] In Example 22, the system of any one or more of Examples 18-21 wherein the host interface includes a first number of data paths, and wherein the DRAM interface includes a second number of data paths; and wherein the second number of data paths is at least twice the first number of data paths.
[0165] In Example 23, the system of any one or more of Examples 18-22 wherein the host interface includes a third number of command/address paths; and wherein the DRAM interface includes a fourth number of command/address paths which is at least twice the third number of command/address paths.

[0166] In Example 24, the system of any one or more of Examples 22-23 wherein at least some data paths of the DRAM interface are in communication with only a single DRAM die.

[0167] In Example 25, the system of any one or more of Examples 22-24 wherein at least some data paths of the DRAM interface are in communication with more than one DRAM die of the multiple stacked DRAM die.

[0168] In Example 26, the system of any one or more of Examples 23-25 wherein at least some command/address paths of the DRAM interface are in communication with a single bank of a single DRAM die.

[0169] In Example 27, the system of any one or more of Examples 23-26 wherein at least some command/address paths of the DRAM interface are in communication with multiple banks of the multiple stacked DRAM die.

[0170] In Example 28, the system of any one or more of Examples 18-27 wherein each DRAM die includes multiple IO data stripes.

[0171] In Example 29, the system of Example 28 wherein each data stripe terminates to two opposing sides of a DRAM die.

[0172] In Example 30, the system of Example 29 wherein wire bonds from the second substrate extend to the multiple stacked DRAM dies from both of the two opposing sides.

[0173] In Example 31, the system of Example 30 wherein at least some of the wire bonds are serially connected up the multiple stacked DRAM dies.

[0174] Example 32 is a method of operating a memory device, comprising: exchanging data between a processor and a buffer device at a first data speed; exchanging data between the buffer device and one or more DRAM dies at a second data speed, slower than the first data speed; wherein exchanging data between the buffer device and the one or more DRAM dies includes exchanging data through multiple wirebonds.
[0175] In Example 33, the method of Example 32 wherein exchanging data between the buffer device and the one or more DRAM dies includes exchanging data using a pulse amplitude modulation (PAM) protocol.

[0176] In Example 34, the method of any one or more of Examples 32-33 wherein exchanging data between a processor and a buffer device includes exchanging data over a first number of data paths; and wherein exchanging data between the buffer device and the one or more DRAM dies includes exchanging data over a second number of data paths greater than the first number of data paths.

[0177] Example 35 is a method of operating a memory device, comprising: exchanging data between a processor and a buffer device at a first data speed; exchanging data between the buffer device and a stack of vertically aligned DRAM dies at a second speed, slower than the first speed; wherein exchanging data between the buffer device and the stack of vertically aligned DRAM dies includes exchanging data through multiple through silicon vias (TSVs) in the stack of vertically aligned DRAM dies.

[0178] In Example 36, the method of Example 35 wherein exchanging data between the buffer device and a stack of DRAM dies includes exchanging data using a pulse amplitude modulation (PAM) protocol.

[0179] In Example 37, the method of any one or more of Examples 35-36 wherein exchanging data between a processor and a buffer device includes exchanging data over a first number of data paths; and wherein exchanging data between the buffer device and a stack of vertically aligned DRAM dies includes exchanging data over a second number of data paths greater than the first number of data paths.

[0180] Example 38 is a memory system, comprising: multiple memory die stacked above one another over a substrate; a buffer assembly, including, a host physical interface including connections for at least one memory channel, the connections for the memory channel including command/address connections and data connections, control logic mapping the connections for the at least one memory channel to at least two sub-channels, and DRAM physical interfaces for each sub-channel, each sub-channel physical interface including command/address connections and data connections; and interconnections between the DRAM physical interfaces for each sub-channel and one or more memory die of the multiple DRAM die.

[0181] In Example 39, the memory system of Example 38 wherein the stacked multiple memory die are each laterally offset with respect to at least one vertically adjacent memory die; and wherein individual memory die of the stacked multiple memory die are wire bonded to respective connections of the DRAM physical interfaces.

[0182] Example 40 is a method of operating a memory system, comprising: receiving command/address (CA) signals and corresponding data (DQ) signals for a first memory channel at a first memory interface; mapping the received CA signals and the corresponding DQ signals to at least first and second sub-channels; wherein each sub-channel DRAM interface carries a greater number of DQ signals than the first memory interface, and clocks the DQ signals at a slower speed than the first memory interface; and communicating the CA signals and DQ signals of each sub-channel DRAM interface through wirebond connections to one or more die in a stack of multiple memory die.

[0183] In Example 41, the method of Example 40 wherein the mapping is performed by a buffer assembly supported by a substrate, and wherein the stack of multiple memory die is supported by the
substrate.

[0184] Example 42 is a memory system, comprising: multiple DRAM memory die stacked above one another over a substrate, wherein vertically adjacent memory die are laterally offset from at least one vertically adjacent die; a buffer assembly, including, a host physical interface including connections for multiple memory channels, the connections for each memory channel including command/address (CA) connections and data (DQ) connections, control logic mapping the connections for the at least one memory channel to at least two sub-channels, and DRAM physical interfaces for each sub-channel, each sub-channel DRAM physical interface including command/address connections and data connections, wherein the host physical interface includes a first number of DQ connections for the memory channels, and wherein the DRAM physical interfaces for the respective sub-channels include a second number of DQ connections which is at least a multiple of the first number of DQ connections, and wherein the DQ connections of the sub-channel DRAM interfaces clock data at a speed less than a speed at which the host physical interface receives data; and wirebond interconnections between the DRAM physical interfaces for each sub-channel and one or more memory die of the multiple DRAM memory die.

[0185] In Example 43, the memory system of Example 42 wherein the DQ connections of the sub-channel DRAM interfaces clock data at an even fraction of the speed at which the host physical interface receives data.

[0186] In Example 44, the memory system of Example 43 wherein the sub-channel DRAM interfaces clock data at one half the speed at which the host physical interface receives data.

[0187] In Example 45, the memory system of any one or more of Examples 43-44 wherein the sub-channel DRAM interfaces clock data at one quarter the speed at which the host physical interface receives data.

[0188] In Example 46, the memory system of any one or more of Examples 42-45 wherein the CA connections of at least one sub-channel DRAM interface are coupled to multiple banks of DRAM memory in the stacked multiple DRAM memory die.

[0189] In Example 47, the memory system of Example 46 wherein the CA connections of the at least one sub-channel DRAM interface are coupled to banks in different DRAM memory die.

[0190] In Example 48, the memory system of any one or more of Examples 42-47 wherein at least one of the DRAM memory die includes a redistribution layer (RDL), the RDL comprising wirebond pads.

[0191] In Example 49, the memory system of Example 48 wherein the wirebond pads are located adjacent an edge of at least a first DRAM memory die, and wherein the wirebond pads of the first DRAM memory die are accessible as a result of the lateral offset of at least one vertically adjacent DRAM memory die relative to the first DRAM memory die.

[0192] Example 50 is a memory system, comprising: at least one memory die; a buffer coupled to the at least one memory die, including, a host physical interface including pins for a memory channel, the pins including command/address pins and data pins, and wherein the data pins include multiple ECC pins and multiple parity pins; control logic mapping the data pins for the at least one memory channel at the host physical interface to at least two memory physical interfaces, each memory physical interface including multiple data pins, including multiple ECC pins and multiple parity pins; and interconnections between the memory physical interfaces and one or more memory die of the at least one memory
die, wherein the memory physical interface data pins are mapped between multiple regions of the at least one memory die, and wherein the number of the multiple regions is divisible by 10.

[0193] In Example 51, the memory system of Example 50 wherein the host physical interface and the memory physical interfaces each include multiple pins for parity bits.

[0194] In Example 52, the memory system of any one or more of Examples 50-51 wherein the host physical interface includes a first number of data pins; and wherein each sub-channel physical connection includes a second number of data pins which is at least twice the first number of data pins.

[0195] In Example 53, the memory system of Example 52 wherein the second number of data pins is four times the first number of data pins.

[0196] In Example 54, the memory system of any one or more of Examples 50-53 wherein each memory physical interface includes a greater number of command/address pins than the host physical interface.

[0197] In Example 55, the memory system of any one or more of Examples 50-54 wherein the at least one memory device comprises a DRAM memory device; and wherein each memory physical interface is a DRAM physical interface.

[0198] In Example 56, the memory system of any one or more of Examples 50-55 wherein regions of the at least one memory device each comprise a sub-array.

[0199] In Example 57, the memory system of Example 56 wherein each sub-array comprises multiple array mats.

[0200] In Example 58, the memory system of any one or more of Examples 56-57 wherein the regions are distributed across at least two memory devices.

[0201] In Example 59, the memory system of any one or more of Examples 56-58 wherein the regions are distributed across multiple banks of a memory device.

[0202] Example 60 is a memory system, comprising: at least one memory die; a buffer coupled to the at least one memory die, the buffer configured to reallocate data pins of a first interface operable at a first data rate, to multiple memory interfaces, the memory interfaces operable at a second data rate slower than the first data rate, the buffer further configured to reallocate groups of the data pins of the first interface to at least 10 slices of the at least one memory die.

[0203] In Example 61, the memory system of Example 60 wherein the buffer is further configured to reallocate each data pin of the first interface to at least two data pins of the multiple memory interfaces.

[0204] In Example 62, the memory system of any one or more of Examples 60-61 wherein the buffer is further configured to reallocate control/address pins of the first interface to the multiple memory interfaces.

[0205] In Example 63, the memory system of Example 62 wherein the buffer is further configured to reallocate each control/address pin of the first interface to multiple pins in the multiple memory interfaces; and wherein the control/address pins of the first interface are operable at a third data rate, and the control/address pins of the multiple memory interfaces are operable at a fourth data rate.

[0206] In Example 64, the memory system of Example 63 wherein the first data rate is the same as the third data rate; and wherein the second data rate is the same as the fourth data rate.

[0207] In Example 65, the memory system of any one or more of Examples 60-64 wherein the multiple data pins of the first interface comprise multiple data pins coupled to carry data, multiple data pins coupled to carry ECC bits, and multiple data pins coupled to carry parity bits.
[0208] In Example 66, the memory system of Example 65 wherein each slice of the at least one memory die comprises multiple array mats.

[0209] In Example 67, the memory system of any one or more of Examples 60-66 wherein multiple slices are coupled to common global word lines.

[0210] In Example 68, the memory system of any one or more of Examples 66-67 wherein each array mat within the slices includes local word lines operable independently of word lines in a physically adjacent array mat.

[0211] In Example 69, the memory system of Example 68 wherein the local word lines of the array mats are operable through use of respective global word lines.

[0212] In Example 70, the memory system of any one or more of Examples 62-69 wherein the multiple memory interfaces comprise sub-channel interfaces, and wherein each sub-channel interface is connected to a respective slice of the at least one memory die.

[0213] In Example 71, the memory system of any one or more of Examples 60-70 wherein the slices are located in at least two memory devices.

[0214] In Example 72, the memory system of any one or more of Examples 60-71 wherein the slices are located in at least two ranks of memory die.

[0215] In Example 73, the memory system of any one or more of Examples 60-72 wherein the slices are located in multiple banks of a memory die.

[0216] In Example 74, the memory system of any one or more of Examples 60-73 wherein the at least one memory die comprises a stack of at least two memory die.

[0217] In Example 75, the memory system of any one or more of Examples 60-74 wherein the at least one memory die comprises a DRAM memory die.

[0218] In Example 76, the memory system of any one or more of Examples 70-75 wherein the at least one memory die comprises multiple DRAM memory die.

[0219] In Example 77, the memory system of any one or more of Examples 70-76 wherein each sub-channel interface is coupled to at least two slices of the at least one memory die.

[0220] In Example 78, the memory system of Example 77 wherein the at least two slices coupled to each sub-channel interface are located in different banks of a memory die.

[0221] In Example 79, the memory system of any one or more of Examples 60-78 wherein the buffer comprises controller and switching logic operable to reallocate the pins of the first interface.

[0222] In Example 80, the memory system of Example 79 wherein the buffer further comprises row address select (RAS) logic.

[0223] In Example 81, the memory system of Example 80 wherein the buffer further comprises a built-in self test (BIST) engine.

[0224] Example 82 is a method of operating a memory system, comprising: receiving data and control/address signals at a first interface of a buffer structure, and at a first data rate; mapping data pins of the first interface to multiple memory sub-channel interfaces, the memory sub-channel interfaces operable at a second data rate slower than the first data rate; and communicating signals from each sub-channel interface to at least one slice of a memory device.

[0225] In Example 83, the method of Example 82 wherein mapping data pins of the first interface comprises mapping groups of the data pins to multiple sub-channel interfaces.

[0226] In Example 84, the method of any one or more of Examples 82-83 wherein mapping data pins of the first interface to multiple memory sub-channel interfaces comprises mapping each data pin of the first interface to at least two data pins of a memory sub-channel interface.

[0227] In Example 85, the method of any one or more of Examples 82-84 wherein the first interface comprises multiple data pins coupled to carry data, multiple data pins coupled to
carry ECC bits, and multiple data pins coupled to carry parity bits.

[0228] In Example 86, the method of any one or more of Examples 83-85 wherein each sub-channel interface extends to a respective slice of a memory system, and wherein each slice of the memory system comprises multiple array mats.

[0229] In Example 87, the method of any one or more of Examples 83-86 wherein multiple slices are coupled to a common global word line.

[0230] In Example 88, the method of any one or more of Examples 86-87 wherein each array mat within the slices includes local word lines operable independently of word lines in a physically adjacent array mat.

[0231] Example 89 is a method of operating a memory system, comprising: receiving signals at a host physical interface, the host physical interface including command/address pins and data pins (DQs), and wherein the DQs include multiple ECC pins and multiple parity pins; mapping the DQs of the host physical interface to at least two sub-channel memory interfaces, each sub-channel memory interface including command/address pins and DQs, including multiple ECC pins and multiple parity pins; and communicating signals from the sub-channel memory interfaces to respective regions located in one or more memory die, wherein the number of the regions receiving the signals is divisible by 10.

[0232] In Example 90, the method of Example 89 wherein each of the respective regions located in one or more memory die is a sub-array of a memory die.

[0233] In Example 91, the method of Example 90 wherein each sub-array includes multiple array mats.

[0234] In Example 92, the method of Example 91 wherein communicating signals from the sub-channel memory interfaces to respective regions comprises communicating signals to multiple array mats within each sub-array, and wherein the signals are communicated to a number of array mats which is a multiple of 10.

[0235] Example 93 is a memory system, comprising: at least one memory die; a buffer coupled to the at least one memory die, the buffer configured to reallocate data pins of a first interface operable at a first data rate, to multiple memory interfaces, the memory interfaces operable at a second data rate slower than the first data rate, the buffer further configured to reallocate groups of the data pins of the first interface to at least nine slices of the at least one memory die.

[0236] In Example 94, the memory system of Example 93 wherein the buffer is further configured to reallocate each data pin of the first interface to at least two data pins of the multiple memory interfaces.

[0237] In Example 95, the memory system of any one or more of Examples 93-94 wherein the buffer is further configured to reallocate control/address pins of the first interface to the multiple memory interfaces.

[0238] In Example 96, the memory system of Example 95 wherein the buffer is further configured to reallocate each control/address pin of the first interface to multiple pins in the multiple memory interfaces; and wherein the control/address pins of the first interface are operable at a third data rate, and the control/address pins of the multiple memory interfaces are operable at a fourth data rate.

[0239] In Example 97, the memory system of Example 96 wherein the first data rate and the third data rate are the same; and wherein the second data rate and the fourth data rate are the same.

[0240] In Example 98, the memory system of any one or more of Examples 93-97 wherein the multiple data pins of the first interface comprise multiple data pins coupled to carry data, multiple data pins coupled to carry ECC bits,
and multiple data pins coupled to carry parity bits.

[0241] In Example 99, the memory system of Example 98 wherein each slice of the at least nine slices of the at least one memory die comprises multiple array mats.

[0242] In Example 100, the memory system of any one or more of Examples 98-99 wherein multiple array mats of the at least nine slices are coupled to common global word lines.

[0243] In Example 101, the memory system of any one or more of Examples 99-100 wherein each array mat within the at least nine slices includes local word lines operable independently of word lines in a physically adjacent array mat.

[0244] In Example 102, the memory system of Example 101 wherein the local word lines of the array mats are operable through use of respective global word lines.

[0245] In Example 103, the memory system of any one or more of Examples 95-102 wherein the multiple memory interfaces comprise at least nine sub-channel interfaces, and wherein each sub-channel interface is connected to a respective slice of the at least one memory die.

[0246] In Example 104, the memory system of any one or more of Examples 93-103 wherein the at least nine slices are located in at least two memory devices.

[0247] In Example 105, the memory system of any one or more of Examples 93-104 wherein the at least nine slices are located in at least two ranks of memory devices.

[0248] In Example 106, the memory system of any one or more of Examples 93-105 wherein the at least nine slices are located in at least two banks of a memory device.

[0249] In Example 107, the memory system of any one or more of Examples 93-106 wherein the at least one memory die comprises a stack of at least two memory die.

[0250] In Example 108, the memory system of any one or more of Examples 93-107 wherein the at least one memory die comprises a DRAM memory die.

[0251] In Example 109, the memory system of any one or more of Examples 93-108 wherein the at least one memory die comprises multiple DRAM memory die.

[0252] In Example 110, the memory system of any one or more of Examples 103-109 wherein each sub-channel interface is coupled to at least two slices of the at least one memory die.
[0253] In Example 111, the memory system of Example 110 wherein the at least two slices coupled to each sub-channel interface are located in different banks of a memory device.

[0254] In Example 112, the memory system of any one or more of Examples 93-111 wherein the buffer comprises controller and switching logic operable to reallocate the pins of the first interface.

[0255] In Example 113, the memory system of Example 112 wherein the buffer further comprises row address select (RAS) logic.

[0256] In Example 114, the memory system of any one or more of Examples 112-113 wherein the buffer further comprises a built-in self test (BIST) engine.

[0257] Example 115 is a method of operating a memory system, comprising: receiving data and control/address signals at a first interface of a buffer structure, and at a first data rate; mapping data pins of the first interface to multiple memory sub-channel interfaces, the memory sub-channel interfaces operable at a second data rate slower than the first data rate; and communicating signals from each sub-channel interface to at least one slice of a memory device.

[0258] In Example 116, the method of Example 115 wherein mapping data pins of the first interface to multiple memory sub-channel interfaces comprises mapping groups of the data pins of the first interface to at least nine sub-channel interfaces.

[0259] In Example 117, the method of any one or more of Examples 115-116 wherein mapping data pins of the first interface to multiple memory sub-channel interfaces comprises mapping each data pin of the first interface to at least two data pins of a memory sub-channel interface.

[0260] In Example 118, the method of any one or more of Examples 115-117 wherein the first interface comprises multiple data pins coupled to carry data, and multiple data pins coupled to carry parity bits.

[0261] In Example 119, the method of any one or more of Examples 115-118 wherein each of the memory sub-channel interfaces extends to a respective slice of at least nine slices of a memory array, and wherein each slice of the memory array comprises multiple array mats.

[0262] In Example 120, the method of any one or more of Examples 116-119 wherein multiple slices of the at least nine slices are coupled to a common global word line.

[0263] In Example 121, the method of any one or more of Examples 119-120 wherein each array mat within the at least nine slices includes local word lines operable independently of word lines in a physically adjacent array mat.

[0264] Example 122 is a method of operating a memory system, comprising: receiving signals at a host physical interface, the host physical interface including command/address pins and data pins (DQs); mapping the DQs of the host physical interface to at least two sub-channel memory interfaces, each sub-channel memory interface including command/address pins and DQs; and communicating signals from the sub-channel memory interfaces to respective regions located in one or more memory die, wherein a number of the multiple regions receiving the signals is divisible by nine.

[0265] In Example 123, the method of Example 122 wherein each of the respective regions located in one or more memory die is a sub-array of a memory die.

[0266] In Example 124, the method of Example 123 wherein each sub-array includes multiple array mats.

[0267] In Example 125, the method of Example 124 wherein communicating signals from the sub-channel memory interfaces to respective regions comprises communicating signals to multiple array mats within each sub-array, and
wherein the signals are communicated to a number of array mats which is a multiple of nine.

[0268] Example 126 is a memory system, comprising: multiple memory die supported by a substrate; a buffer assembly electrically coupled to the multiple memory die, including, a host memory channel interface including command/address connections and data connections, control logic mapping the data connections of the at least one memory channel to at least two memory sub-channel interfaces coupled to the memory devices, including mapping each host data connection to at least two data connections at a memory sub-channel interface; wherein the memory sub-channel data connections are operable to transfer data at a slower rate than data connections of the host memory channel interface.

[0269] In Example 127, the system of Example 126 wherein the buffer assembly comprises a buffer die stacked with at least one memory device.

[0270] In Example 128, the system of Example 127 wherein multiple memory devices are stacked with the buffer die.

[0271] In Example 129, the system of any one or more of Examples 126-128 wherein the memory system forms a portion of a memory module.

[0272] In Example 130, the system of Example 129 wherein the memory module is a dual-inline memory module (DIMM).

[0273] In Example 131, the system of any one or more of Examples 126-130 wherein the multiple memory devices comprise multiple DRAM memory devices.

[0274] In Example 132, the system of any one or more of Examples 126-131 wherein the data connections include ECC/parity connections.

[0275] In Example 133, memory devices or systems of any of Examples 1-39 and 93-114 may be modified with structures and functionality of other of such Examples.

[0276] In Example 134, the memory devices or systems of any of Examples 1-39 and 93-114 may be configured or adapted to perform the methods of any of Examples 32-37, 40-41, 82-92, 42-81, or 115-125.

[0277] In Example 135, the methods of any of Examples 32-37, 40-41, 82-92, 42-81, or 115-125 may be modified to include operations of other of such Examples.

[0278] In Example 136, the methods of any of Examples 32-37, 40-41, 82-92, 42-81, or 115-125 may be implemented through one or more of the devices of any of Examples 1-39 and 93-114.

[0279] The above description is intended to be illustrative, and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments can be used, such as by one of ordinary skill in the art upon reviewing the above description. The Abstract is provided to comply with 37 C.F.R. §1.72(b), to allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Also, in the above Detailed Description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, inventive subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment, and it is contemplated that such embodiments can be combined with each other in various combinations or permutations.
The scope of the invention should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.